For general info on I2O try developer.intel.com, or see i2osig.org under "About I2O" and "Technology Backgrounder".
For InfiniBand try infinibandta.org or windriver.com. When this article was written, InfiniBand had not yet formed; it was known as NGIO (Intel) or Future I/O (Compaq and others).
Try Peter Church's excellent WIND index for interesting I2O posts. Message 14219776
Also, here is a good article on InfiniBand I saved on my HD. The original article seems to have disappeared.
Infiniband provides a new vision
By Jim Pappas, Director of Initiative Marketing, Enterprise Server Group, Intel Corp., Hillsboro, Ore.
New technology standards are created every day. Some, as is the case with PCI and USB, are extremely successful; others are not as well-received.
Will Infiniband architecture follow the likes of PCI and USB, or will it be another example of a technology with too much hype and too little delivery? Based on the need for a common switched fabric for servers, remote storage and networking, Infiniband technology promises to be a strategic inflection point for the computing industry, removing the bottlenecks created by today's shared bus I/O architecture.
As the Internet continues to demand greater performance and reliability from data centers, a new vision of distributed computing is emerging, where servers, remote storage and connections to local- and wide-area networks exist as peers in a unified fabric. To deliver this vision, a new interconnect for the Internet data center is needed: enter Infiniband architecture, a switched fabric interconnect technology primarily designed to connect servers, remote storage and networking, providing low-latency direct access into host memory and delivering more flexible and effective data centers.
Current system architecture possesses inherent performance and reliability limitations centered on its dependence on a shared bus. Shared buses have historically proved extremely successful. However, as performance and reliability demands continue to escalate and more applications become I/O bound, systems are reaching a point where the traditional load/store transfer of data creates a bottleneck that affects data center performance. The shared bus creates contention between devices accessing memory and limits system performance through I/O interrupts to the processor. Shared buses also create physical restrictions for system design, limiting I/O to the number of slots configured within a server chassis: once the chassis is "full," another server must be added to create more I/O attach points. With the growth of storage area networks and clustering, and the escalating demand for slots within the chassis, the problem is exacerbated.
With Infiniband architecture, the center of the Internet data center shifts from the server to a switched fabric. Servers, networking and storage all access a common fabric through serial links. All three types of devices can scale independently based on system requirements. It becomes possible to remove the I/O adapters from the server chassis, with the flexibility to locate I/O remote from the CPU-memory complex. Infiniband links provide scalable connections between fabric nodes and offer direct paths to server memory, removing the bottleneck of today's load/store topology. This direct path eliminates today's I/O interrupts to the processor, increasing system performance. Instead of devices contending for access to a shared bus, dual-simplex serial links running at 2.5 Gbits/second connect directly to memory via a host channel adapter.
The decoupling of I/O from the CPU-memory complex also enables the creation of a unified fabric for clustering, networking and storage communication. The Infiniband fabric allows for peer-to-peer data flow, such as direct communication from the local area network to remote storage devices. The server is removed from the role of arbiter between disparate networks. Storage topologies such as Fibre Channel and SCSI, and communication topologies such as Gigabit Ethernet and ATM connect seamlessly into the "edges" of the fabric through target channel adapters.
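As a rough illustration of that topology (not any real Infiniband API; the class names HostChannelAdapter, TargetChannelAdapter and Switch below are invented placeholders), the following Python sketch models a tiny fabric in which a LAN gateway reaches a storage target through a switch without ever touching the server's CPU-memory complex:

from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.links = []            # neighbouring fabric elements

    def connect(self, other):
        self.links.append(other)
        other.links.append(self)

class Switch(Node): pass               # forwards packets inside the fabric
class HostChannelAdapter(Node): pass   # server attach point (CPU-memory complex)
class TargetChannelAdapter(Node): pass # edge attach for storage or LAN/WAN gear

def find_path(src, dst):
    # Breadth-first search over fabric links; returns node names on the path.
    seen, queue = {src}, deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] is dst:
            return [node.name for node in path]
        for nxt in path[-1].links:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A tiny fabric: one switch, one server, a LAN gateway and a storage target.
switch = Switch("switch-0")
server = HostChannelAdapter("server-hca")
lan = TargetChannelAdapter("gigabit-ethernet-tca")
disk = TargetChannelAdapter("storage-tca")
for node in (server, lan, disk):
    node.connect(switch)

# Peer-to-peer flow: LAN traffic reaches storage through the switch,
# never passing through the server's CPU-memory complex.
print(find_path(lan, disk))   # ['gigabit-ethernet-tca', 'switch-0', 'storage-tca']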
Performance within an Infiniband fabric begins with a four-wire 2.5-Gbit/s link. As connections are added to the fabric, its performance increases. Aggregate throughput is measured by the bisectional bandwidth of the fabric and is not gated by shared-bus limitations. To scale performance, the architecture offers the flexibility to aggregate multiple links: for example, two 2.5-Gbit/s links aggregate to a potential 5 Gbits/s of throughput. The architecture also supports multiple link widths, including x4 and x12 implementations, which offer 10 Gbits/s and 30 Gbits/s, respectively.
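As a quick back-of-the-envelope check of those figures (signalling rate only; any link-encoding overhead is ignored here because the article does not discuss it), a few lines of Python reproduce the aggregate numbers quoted above:

BASE_LINK_GBPS = 2.5              # one dual-simplex 1x link, per direction

def aggregate_gbps(width):
    # Aggregate signalling rate when `width` links are ganged together.
    return BASE_LINK_GBPS * width

for width in (1, 2, 4, 12):
    print(f"x{width} link: {aggregate_gbps(width)} Gbit/s")
# x1 link: 2.5 Gbit/s
# x2 link: 5.0 Gbit/s
# x4 link: 10.0 Gbit/s
# x12 link: 30.0 Gbit/s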
A single Infiniband subnet can scale up to 64,000 individual nodes. Additionally, arbitrarily large fabrics can be created by connecting multiple subnets, each of up to 64,000 nodes, via Infiniband routers and IPv6 addressing.
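Assuming the per-subnet figure quoted above, a trivial sketch shows how total addressable node counts grow as subnets are joined through routers:

NODES_PER_SUBNET = 64_000         # figure quoted in the article

def fabric_capacity(subnets):
    # Upper bound on addressable nodes when `subnets` subnets are joined.
    return subnets * NODES_PER_SUBNET

for n in (1, 4, 16):
    print(f"{n} subnet(s): up to {fabric_capacity(n):,} nodes")
# 1 subnet(s): up to 64,000 nodes
# 4 subnet(s): up to 256,000 nodes
# 16 subnet(s): up to 1,024,000 nodes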
The Internet is creating explosive growth and increasing demand for server density. As Infiniband architecture delivers on the promise of increased server density and modularity, it also creates a more manageable rack interconnect. With the removal of I/O cards from the server chassis, there are no longer multiple cables of different types connecting to the server. Instead, these are replaced by Infiniband links that connect the server to an Infiniband switch and on to remote nodes, which also communicate through the switched fabric. With implementations of Infiniband architecture, the connections required between components will decrease substantially compared to current systems. Multiple data types will exist on a unified fabric, and multiple servers will "share" remote connections to storage and networking.
The media are filled with stories of e-business sites that fail under traffic overload. Downtime can now be calculated in millions of dollars per hour, creating a heightened demand for reliable Internet data centers. Infiniband architecture delivers increased reliability through a move to message-passing-based communication, redundancy at the link, switch and fabric levels, and easy hot-swap of fabric elements.
By moving away from the error-prone load/store transfer of data toward a message-passing model, interrupts and server stalls resulting from I/O errors are significantly decreased, if not eliminated.
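To make that contrast concrete, here is a purely conceptual Python sketch (not the Infiniband verbs interface; the SendQueue class and its methods are invented for illustration) comparing a CPU-driven load/store copy with posting a work request to a queue and picking up a completion later:

from collections import deque

def load_store_copy(src, dst):
    # Load/store style: the processor itself moves every word and is
    # occupied (and exposed to I/O errors) for the whole transfer.
    for i, word in enumerate(src):
        dst[i] = word

class SendQueue:
    # Invented stand-in for a message-passing channel: the host posts a
    # descriptor and moves on; the adapter drains the queue and reports
    # completions instead of raising per-transfer interrupts.
    def __init__(self):
        self._work = deque()
        self.completions = deque()

    def post(self, buffer, remote):
        self._work.append((buffer, remote))   # returns immediately

    def process(self):
        while self._work:
            buffer, remote = self._work.popleft()
            remote.extend(buffer)             # stand-in for a DMA transfer
            self.completions.append(("OK", len(buffer)))

data = [1, 2, 3, 4]

# CPU-driven copy: the host does all the work itself.
target = [0] * len(data)
load_store_copy(data, target)

# Message-passing style: post the request, do other work, then check completion.
remote_memory = []
queue = SendQueue()
queue.post(data, remote_memory)
queue.process()                               # the adapter would do this
print(queue.completions.popleft())            # ('OK', 4)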
With three levels of redundancy possible with Infiniband technology (link, switch and fabric), reliability is achieved several times over. Redundant links provide for continuous data flow if one link fails. Faults are easily isolated, and repairs can be made quickly without any interruption of service. Adding redundant switches to an existing fabric provides another layer of redundancy: redundant paths between switches ensure uptime even when a switch fails. For an optimal level of redundancy, Infiniband architecture delivers the capability to connect hosts and targets to redundant fabrics, ensuring fabric resilience even when multiple fabric elements fail.
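As a rough illustration of why layered redundancy pays off (the availability numbers below are assumed for the example, not taken from the article), the standard parallel-redundancy formula 1 - (1 - a)^n can be computed in a few lines:

def redundant_availability(a, n):
    # Availability of n independent redundant elements, each available
    # a fraction `a` of the time: the system fails only if all n fail.
    return 1 - (1 - a) ** n

assumed_single = 0.99             # assumed availability of one link/switch/fabric
for n, label in ((1, "no redundancy"), (2, "redundant pair"), (3, "triple redundancy")):
    print(f"{label}: {redundant_availability(assumed_single, n):.6f}")
# no redundancy: 0.990000
# redundant pair: 0.999900
# triple redundancy: 0.999999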
Infiniband expansion
The Infiniband Trade Association plans to release version 1.0 of its specification to members in mid-2000. Many companies already have begun development work on Infiniband products. The first Infiniband prototype fabric was demonstrated at the Intel Developers' Conference and Microsoft WinHEC. Intel plans to deliver Infiniband architecture-compliant samples by the end of 2000 and industry solutions are expected in the second half of 2001.
The momentum behind the Infiniband architecture and products is growing fast. Its ability to address the specific needs of Internet data centers promises to launch Infiniband architecture into the realm of PCI and USB as a significant industry specification and a strategic point of change for the computing industry.
eetimes.com