Thanks for the links GD, I went back and looked these posts up and found them worth reading again......... From this reread I'm content that FC's manifest destiny is to rule the SAN, and when that build-out is done they may have even more muscle. This is good news for companies like Crossroads, as they get to make more bridges?......
siliconinvestor.com
FIO is looking to extend the usefulness of a PCI bus environment. The compatibility is important for the thousands of small systems with limited I/O requirements.
PCI buses are already holding data-center-class systems back. FC now provides almost 100MB/sec of I/O per path. With multiple paths into a 4x Intel quad, FC I/O can provide over 360MB/sec. The PCI-based I/O architecture can only handle about 160MB/sec sustained, so there is already a need for improved I/O to the processors.
The performance mentioned is at Gigabit FC rates. Today, deliverable switched-fabric SANs with eight switches can provide over 5,000 MB/sec of I/O to a system. In a short time we will see 2 Gigabit FC; the performance of Fibre Channel fabrics will double and more switches will be supportable. This could push available I/O per fabric over 20,000 MB/sec (20GB per second). Today a PCI bus can do around 120MB/sec, so you can see that it will be a non-trivial engineering task to support these I/O rates with PCI buses.
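The arithmetic above can be sketched as a quick back-of-the-envelope calculation. Only the MB/sec figures come from the post; the function name and structure are mine, for illustration:

```python
# Bandwidth comparison using the figures quoted in the post.
# Per-path and PCI numbers are the post's; everything else is a sketch.

FC_1G_PATH_MBS = 100      # ~100 MB/sec per 1-Gbit Fibre Channel path (from post)
PCI_SUSTAINED_MBS = 120   # ~120 MB/sec sustained on a classic PCI bus (from post)

def fabric_aggregate(paths, per_path_mbs=FC_1G_PATH_MBS):
    """Raw aggregate throughput of a fabric with `paths` active paths."""
    return paths * per_path_mbs

# A 4x Intel quad with four FC paths, as described in the post:
server_io = fabric_aggregate(4)   # 400 MB/sec raw; post quotes >360 MB/sec usable
print(f"4-path FC server: ~{server_io} MB/sec vs PCI ~{PCI_SUSTAINED_MBS} MB/sec")

# 2-Gbit FC doubles per-path throughput:
print(f"At 2Gb FC: ~{fabric_aggregate(4, 2 * FC_1G_PATH_MBS)} MB/sec")
```

Even this simple tally shows a single PCI bus saturating well below what one multi-path FC attachment can deliver, which is the post's point.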
The number-one requirement is how to get this I/O into the faster and faster processors. If NGIO accomplishes this, then it will have served its purpose well for Intel-based data centers.
The concept of moving from a bus to a fabric-based I/O for Intel processors is an enabler for their use in larger and larger data-center roles.
The ability to do clustered systems in a 10KM geographically dispersed SAN is an absolute advantage for Fibre Channel. Another advantage is that this capability is available for use today; any other technology is still on the drawing boards. By the time NGIO or any other solution is production-ready in volume, FC will have a huge installed infrastructure that will be hard for another medium to displace.
I can't comment on the SAN and clustering capabilities for NGIO.
Don't know what this means.......
In addition, NGIO representatives countered that they are targeting a low cost point by using a serial bus and existing Gigabit Ethernet and Fibre Channel physical-layer silicon, and the more-radical Future I/O approach will be too costly. The Future I/O source denied that, saying, "We are very sensitive to cost."..............
But NGIO must be closer to FC than the FIO standard......
siliconinvestor.com
KJ's thoughts on NGIO helped me .....an expansion on the answer to Q #14. Thanks KJ.
siliconinvestor.com
IMO NGIO will emphasize opening up the aggregate throughput from the processor(s) to the peripherals (i.e. a serial architecture which can handle 1 Gbyte/sec or more). It will use a link layer based on Fibre Channel. Supposedly by 2001 the standard will be finalized, companies will be signing on, and products, including NGIO switches, will be made. Besides, FC will have a huge installed base by that time, so what shape and form an NGIO switch would take will be of great interest to many of us, like J Fieb, for obvious reasons. An FC switch could be a natural fit to support NGIO as well. However, I do see the advantage of a pure NGIO-only switch if all devices on the fabric speak only NGIO.

So far, neither the NGIO nor the FIO faction has any storage vendors (i.e. no peripheral manufacturers) as members. Does that mean NGIO is a no-brainer for them to implement, or that they can continue to use FC and only the server companies need to implement this new I/O structure? I'm not sure; it is unclear to me. Do you have any insight on this? The current emphasis seems to be on re-defining the server I/O shortcomings.
KJ
Looking forward to the day when we can read the details on a major SUNW/ANCR FC SAN installation.