Spec aims to speed server I/O
By APRIL JACOBS, Network World, 10/23/00
nwfusion.com
The InfiniBand Trade Association this week will deliver the first specification outlining its high-speed bus architecture, which is designed to pave the way for faster communications between clustered servers and help remove I/O bottlenecks between servers and network resources.
For customers, InfiniBand will mean no longer having to plug multiple adapters into servers for network, storage, and cluster connects. Instead, a connection to an InfiniBand-compliant switch will provide a virtual fat pipe, saving users from installing and maintaining complex I/O configurations. The result is a simpler way of letting server processors connect to peripherals, storage and other servers to gain greater performance.
This week's conference will also feature new InfiniBand products and the formation of a working group that will be charged with ensuring interoperability between InfiniBand products.
On the product side, IBM will announce a chip that powers InfiniBand switches due out late next year. Texas firm Banderacom plans to introduce a chip architecture to support InfiniBand storage, server and network products.
Driving the need for InfiniBand is the 1G byte/sec bus-speed limit that even advanced PCI-X-based servers will face. InfiniBand replaces the shared bus with a switched fabric backplane whose links range from 500M byte/sec to 6G byte/sec, built on a base signaling rate of 2.5G bit/sec per connection.
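For a rough sense of where those figures come from, the back-of-the-envelope sketch below (Python) derives per-link throughput for the spec's 1x, 4x and 12x link widths from the 2.5G bit/sec signaling rate and 8b/10b encoding, then contrasts that with a single shared 1G byte/sec bus split among several adapters. The encoding overhead and adapter counts are assumptions for illustration, not figures from the article.

    # Back-of-the-envelope comparison of InfiniBand link widths vs. a shared bus.
    # Assumes 2.5 Gbit/sec signaling per lane and 8b/10b encoding (80% efficiency);
    # these constants are illustrative, not taken from the article.

    SIGNAL_GBPS_PER_LANE = 2.5      # raw signaling rate per lane (Gbit/sec)
    ENCODING_EFFICIENCY = 0.8       # 8b/10b encoding leaves 2.0 Gbit/sec of data

    def link_mbytes_per_sec(width, bidirectional=True):
        """Usable throughput of an InfiniBand link of the given width (1x, 4x, 12x)."""
        gbits = SIGNAL_GBPS_PER_LANE * ENCODING_EFFICIENCY * width
        mbytes = gbits * 1000 / 8   # Gbit/sec -> Mbyte/sec
        return mbytes * (2 if bidirectional else 1)

    for width in (1, 4, 12):
        print(f"{width:>2}x link: ~{link_mbytes_per_sec(width):,.0f} Mbyte/sec, both directions")

    # A shared 1G byte/sec bus, by contrast, is divided among every adapter on it.
    for adapters in (1, 2, 4):
        print(f"shared bus, {adapters} adapter(s): ~{1000 / adapters:,.0f} Mbyte/sec each")

Under those assumptions, the 1x, 4x and 12x widths work out to roughly 500M byte/sec, 2G byte/sec and 6G byte/sec, which is where the range quoted above comes from.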
With products that support the technology, the number of addressable devices on a network theoretically could hit 64,000.
By connecting multiple InfiniBand networks, the number of devices with addresses is virtually limitless, says Jim Pappas, a marketing director for the Intel Enterprise Products Group.
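That ceiling follows from how the architecture addresses endpoints: a 16-bit local identifier routes traffic within one subnet, while a much larger global identifier spans subnets. A small illustrative sketch in Python (the field widths match the spec; the helper names are made up here):

    # Why one InfiniBand subnet tops out around 64,000 endpoints while joined
    # subnets are effectively unlimited. Field widths follow the spec
    # (16-bit local IDs, 128-bit global IDs); the function names are illustrative.

    LID_BITS = 16     # local identifier used for routing inside one subnet
    GID_BITS = 128    # global identifier (subnet prefix + port GUID) across subnets

    def max_endpoints_per_subnet():
        return 2 ** LID_BITS          # 65,536 addresses, roughly the 64,000 cited

    def max_endpoints_across_subnets():
        return 2 ** GID_BITS          # astronomically large: "virtually limitless"

    print(f"per subnet: {max_endpoints_per_subnet():,}")
    print(f"across subnets (theoretical): {max_endpoints_across_subnets():,}")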
InfiniBand will support copper and fiber-optic cable links and let entry-level to high-end servers more easily adapt to transaction-based traffic on the network.
But InfiniBand technology isn't just for servers. A central InfiniBand I/O switch, which sits on the network to connect devices, could also be used to cluster servers, acting as the high-speed interconnect between them.
IBM plans to announce three types of InfiniBand technology: an application-specific integrated circuit (ASIC) that will power InfiniBand switches; a host channel adapter that connects a server to the InfiniBand switch fabric; and a target channel adapter designed to connect I/O devices, such as network or storage devices, to the switch fabric.
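As a conceptual aid, the sketch below (Python) models where each of those three pieces sits in a fabric. The class layout is purely illustrative and is not IBM's design.

    # Illustrative model of the three InfiniBand building blocks named above:
    # a switch ASIC in the middle, host channel adapters (HCAs) on the server
    # side, and target channel adapters (TCAs) in front of I/O devices.

    from dataclasses import dataclass, field

    @dataclass
    class SwitchASIC:
        """Forwards traffic between every adapter attached to the fabric."""
        ports: list = field(default_factory=list)

        def attach(self, adapter):
            self.ports.append(adapter)

    @dataclass
    class HostChannelAdapter:
        """Connects a server to the switch fabric."""
        server: str

    @dataclass
    class TargetChannelAdapter:
        """Connects an I/O device, such as a storage array, to the fabric."""
        device: str

    fabric = SwitchASIC()
    fabric.attach(HostChannelAdapter(server="db-server-1"))
    fabric.attach(TargetChannelAdapter(device="storage-array-7"))
    print(f"fabric now has {len(fabric.ports)} endpoints attached")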
Banderacom will announce IBandit, an InfiniBand semiconductor architecture that will provide the foundation for a family of InfiniBand channel adapters. The company also plans to announce a partnership with Wind River Systems, which makes software and services for Internet devices, to develop and supply InfiniBand transport software that will include support for Banderacom's IBandit architecture.
Several start-ups, including Cicada Semiconductor and INH Labs in Austin, Texas, are also developing InfiniBand technology.
New group formed
Meanwhile, Tom Bradicich, director of architecture and technology for IBM's Netfinity and xSeries servers and co-chair of the InfiniBand Trade Association, says a newly formed working group, to be introduced at this week's conference, will determine how third-party products will work together using InfiniBand.
For customers, that means they could cluster servers from several vendors and deploy multivendor storage and network devices.
A bright future
Also at the meeting, Vernon Turner, a vice president at Framingham, Mass., research firm IDC, plans to present a market forecast for InfiniBand-enabled devices.
He says the total market will reach about $2 billion by 2004. That market represents the hardware opportunity for suppliers to develop and install InfiniBand technology, which includes servers, switches and other devices.
Turner also estimates that of the approximately six million servers expected to ship in 2004, four million could be InfiniBand-enabled.
But InfiniBand will have to coexist with, and complement, the existing standards that power the industry's network, server and storage devices.
"InfiniBand's success relies heavily on cohabitating with the existing fabrics of PCI, Ethernet and Fibre Channel," Turner says.
InfiniBand could also boost reliability. If a server's PCI bus fails, the server goes down, but not so with InfiniBand. Because of the way it is implemented, the fabric can provide multiple connection paths: if one connection is lost, traffic can use another, similar to the way mainframes have access to multiple partitions.
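A minimal sketch of that failover pattern in Python; the path objects and the send_over() transport call are hypothetical stand-ins, since the point is only that several independent routes exist between endpoints:

    # Conceptual sketch of failover across redundant fabric paths.
    # The path dictionaries and send_over() call are hypothetical placeholders.

    class PathDown(Exception):
        """Raised when a fabric path stops responding."""

    def send_with_failover(message, paths):
        """Try each known path to the destination until one succeeds."""
        for path in paths:
            try:
                return send_over(path, message)   # hypothetical transport call
            except PathDown:
                continue                          # fall back to the next route
        raise RuntimeError("all fabric paths to the destination are down")

    def send_over(path, message):
        # Placeholder: a real implementation would hand the message to the
        # channel adapter serving this path.
        if not path.get("up", True):
            raise PathDown(path["name"])
        return f"sent via {path['name']}"

    paths = [{"name": "switch-A", "up": False}, {"name": "switch-B", "up": True}]
    print(send_with_failover("hello", paths))     # -> sent via switch-B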
Intel's Pappas says InfiniBand will give users a type of grid computing, similar to the electrical power grids that light up the country. If one source becomes unavailable, another can be accessed to take its place and fill bandwidth needs.
InfiniBand is ultimately intended to replace a legacy of shared-bus technology that forces critical server and network components to contend for the same resources, an approach that no longer suffices for today's data centers.
"As you add more connections, you don't share bandwidth, you add more. With today's bus technologies, each card has to take its turn, where InfiniBand allows each connection to talk at the same time," Pappas says.