Warning, propeller-head Infiniband post.
I read the 4/2000 white paper at the Infiniband site, infinibandta.com (catch that PDF name!). My interpretation:
IB is intended to solve the server I/O bottlenecks caused by the current industry-standard PCI bus, by implementing intelligent I/O channels like those IBM has been using on mainframes for decades. Advances in silicon now allow this to be done economically for midrange and smaller systems. A ubiquitous standard I/O model for all systems would have major advantages (read "kill the gorillas" <g>) for the industry.
The primary market will initially be server, storage, and compute farms, where the systems are physically clustered together and must work in tight cooperation. Competition to replace PCI includes existing CompactPCI hardware and the RapidIO work by Motorola. Gigabit+ Ethernet is an obvious competitor for the inter-device medium.
The IB adapters will at first be cards, then chipsets, and someday a single chip (eg an ASIC). On the CPU side, the adapter will move data into and out of CPU memory by DMA (fast and efficient block copy, standard technology). Inside the adapter (this is the 'intelligent' part) will be management and configuration applications, plus a communications software stack handling up through at least the session level of the OSI network model and capable of handling many simultaneous sessions. On the external side, the adapters are very analogous to current network adapters.
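To make that DMA hand-off concrete, here's a rough C sketch of the kind of descriptor ring such an adapter might use. To be clear, every name here (ib_desc, post_send, the doorbell register) is my own invention for illustration; none of it comes from the IB spec.

    /* Hypothetical descriptor ring: the CPU fills entries in ordinary
       main memory; the adapter drains them and DMAs the payloads out
       on its own. Layout and names are illustrative only. */
    #include <stdint.h>

    #define RING_SIZE 256

    struct ib_desc {
        uint64_t buf_addr;   /* physical address of payload in host memory */
        uint32_t buf_len;    /* payload length in bytes */
        uint32_t session_id; /* which open session this message belongs to */
    };

    struct ib_ring {
        struct ib_desc slots[RING_SIZE];
        volatile uint32_t head; /* next slot the CPU will fill */
        volatile uint32_t tail; /* next slot the adapter will drain */
    };

    /* CPU side: queue one buffer and ring the adapter's doorbell.
       Once this returns, the CPU is done; the adapter does the I/O. */
    int post_send(struct ib_ring *ring, volatile uint32_t *doorbell,
                  uint64_t buf_addr, uint32_t len, uint32_t session)
    {
        uint32_t next = (ring->head + 1) % RING_SIZE;
        if (next == ring->tail)
            return -1; /* ring full, caller should retry later */
        ring->slots[ring->head].buf_addr = buf_addr;
        ring->slots[ring->head].buf_len = len;
        ring->slots[ring->head].session_id = session;
        ring->head = next;
        *doorbell = ring->head; /* tell the adapter there's new work */
        return 0;
    }

The point is the division of labor: the CPU touches only ordinary main memory plus one register write, and all the protocol work happens inside the adapter.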
Data is transferred between adapters in 'messages' analogous to TCP/IP, and device addressing uses IPv6 (next-generation TCP/IP). The media protocol appears to be synchronous (a la ATM, as opposed to Ethernet). Media distance limits are 17 meters (data center size) for copper wiring or 100+ meters for optical fiber (inter-building distance). Speed is up to 5 gigaBYTES/sec using a 16-fiber bundle (presumably 16 fibers at ~2.5 gigabits/sec each, or ~40 gigabits/sec total).
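For illustration, a message header under that addressing scheme might look something like the struct below. Again, the field names and sizes are my guesses, not the spec's:

    #include <stdint.h>

    /* Hypothetical on-the-wire message header: a 128-bit IPv6-style
       device address plus per-session bookkeeping. Purely illustrative. */
    struct ib_msg_hdr {
        uint8_t  dst_addr[16]; /* 128-bit IPv6-style device address */
        uint8_t  src_addr[16];
        uint32_t session_id;   /* one adapter juggles many sessions */
        uint32_t msg_len;      /* total message length in bytes */
        uint32_t seq_num;      /* sequence number within the session */
    };

Giving every device a full 128-bit address is what would let the later switched generation route messages by address, just like a network.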
The intelligence in the adapter will, for example, allow for remote DMA (eg a remote printer can 'directly' access the server's CPU memory, and vice versa), and can support multicasting, where eg one server sends one message once to a number of specific other servers. Note that a server's CPU can perform I/O simply by telling the adapter to open a session to another device, writing data to main memory for pickup by the adapter, and reading data from main memory that the adapter placed there; a sketch of that send path follows. No further CPU I/O processing is involved, so the PCI bus bottleneck is relieved.
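Building on the ring sketch above (still all made-up names), the entire CPU-side send path could be this small:

    #include <string.h>

    /* Hypothetical CPU-side send: stage the payload in main memory,
       queue a descriptor, and return. The adapter's DMA engine and
       on-board protocol stack handle everything from there. */
    int send_message(struct ib_ring *ring, volatile uint32_t *doorbell,
                     void *tx_buf, uint64_t tx_buf_phys,
                     const void *data, uint32_t len, uint32_t session)
    {
        memcpy(tx_buf, data, len); /* stage payload for adapter pickup */
        return post_send(ring, doorbell, tx_buf_phys, len, session);
    }

No interrupts to service per byte, no bus arbitration, no protocol work on the CPU; that's the whole pitch.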
The white paper was fuzzy, but it appears there will be at least two generations of IB: the first simplified and lacking switching (point-to-point, like a PC modem running TCP/IP over PPP), and the second a full-blown local network with intelligent switches routing messages by IPv6 address. That second generation is the touted 'switched matrix' from eg the CNET articles.
................................
Comments
I have no idea how anyone (eg Emulex, CRDS) is announcing IB products at this stage; the first-generation spec wasn't expected until this summer (I don't know if it's out yet), and the first devices not until winter 2001.
The full network implementation of IB is certainly intended as a replacement for Fibre Channel, as well as for a variety of current proprietary inter-server and intra-server buses. All of the existing proprietary complexity of SANs would go away. I'm not familiar enough with NetApp's VI to comment there.
The paper's statement that the use of IPv6 preps IB for use over the Internet's piping is, IMO, something between extreme vaporware and a perfect vacuum.
Hints were dropped about conflicts between the consortium members over near-term practical products versus long-term visionary designs. There may also be some problems over which partners contribute what IP to the effort.
As white papers go, this one was really weak on details. It spent a lot of time saying IB will be everything to everyone. Hmmm. It also said that the IB world will come to pass because of the strength of the IB consortium members. Double HmmmHmmm.
Note that the paper is from April 2000, so some things may have changed since then.
- Dway