Technology Stocks : Intel Corporation (INTC)


To: Rob C. who wrote (83261) 6/11/1999 12:31:00 PM
From: Paul Engel
 
Rob - Re: "Additionally, Intel intends to continue with its slow Peripheral Component Interconnect (PCI) interface. "

This is COMPLETE and UTTER NONSENSE!

Intel has made a major investment in their NGIO switched fabric serial bus architecture and has already issued a Rev. 1.0 specification.

My understanding is that Intel is targeting NGIO chips and systems for a rollout late next year.

Whoever wrote that drivel must have their head stuck in a log.

Paul

{===================================}
devel.penton.com



ELECTRONIC DESIGN
May 3, 1999



--------------------------------------------------------------------------------

Boards & Buses

Get Ready For Channel-Based Links In I/O Subsystems

Schemes Like NGIO and Future I/O, Which Target PC-Server Bottlenecks, May Also Influence Embedded Boards.

Jeff Child


Representatives of the Future I/O Alliance: From left to right, Tom Bradicich, director of Netfinity at IBM; Tom Werner, vice president and general manager at 3Com; Martin Whittaker, manager of the NetServer Line at Hewlett-Packard; Robert Selinger, vice president at Adaptec; and Mary McDowell, vice president and general manager at Compaq Computer. Photo courtesy of The Future I/O Alliance.

A major shift in computer I/O-subsystem architecture is on the horizon. As existing I/O schemes run out of steam, the time has come for channel-based, switch-fabric I/O. New schemes are emerging along such lines, which are expected to redefine the architectures of high-end PC servers. This technology could eventually infiltrate desktop PC and embedded computer designs.

The writing is on the wall. As microprocessors race to ever-faster speeds, the relative throughput capabilities of I/O subsystems are getting left in the dust. True to Moore's Law, processor speeds continue to double every 18 months, with 450-MHz processors available today, 500- and 700-MHz parts in the very near future, and 1-GHz chips by next year.

Meanwhile, external-bus clock frequencies aren't seeing nearly the same kinds of increases. Parallel I/O buses like PCI are going from 33 to 66 MHz, perhaps to 100 MHz, and maybe even to 133 MHz. They're just not keeping pace with the internal clock speeds of microprocessors.
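To put those numbers in perspective, here's a back-of-the-envelope calculation (a plain C sketch, purely illustrative) of peak shared-bus bandwidth at the widths and clock rates mentioned above. The 64-bit, 133-MHz entry corresponds to the PCI-X figure quoted later in this article (at the exact 133.33-MHz clock, the peak works out to roughly 1066 Mbytes/s).

/* Peak bandwidth of a parallel shared bus: (width in bits / 8) x clock in MHz.
   Illustrative arithmetic only; the 100- and 133-MHz PCI variants are the
   speculative upgrades mentioned above, not shipping parts. */
#include <stdio.h>

int main(void)
{
    const int widths[] = { 32, 64 };            /* bus width, bits */
    const int clocks[] = { 33, 66, 100, 133 };  /* bus clock, MHz  */

    for (int w = 0; w < 2; w++) {
        for (int c = 0; c < 4; c++) {
            int mbytes_per_s = (widths[w] / 8) * clocks[c];
            printf("%2d-bit @ %3d MHz : %4d Mbytes/s peak\n",
                   widths[w], clocks[c], mbytes_per_s);
        }
    }
    return 0;
}

Even the fastest of those figures is a peak number for a bus shared by every device on it, which is the gap the channel-based schemes described below are trying to close.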

The effect of this situation is that processors must wait, or flush their pipelines and caches, whenever they access an external device across a bus of any sort. Since the microprocessor isn't doing any useful work during that wait, time is wasted. "There are ways to work around that using DMA controllers in the host, bridge components, caching techniques, and optimization techniques," says Mitch Shults, director of platform marketing at Intel Corp. "We don't think those are the way to go long term. We think a fundamental advance in I/O architecture at the system level is required to eliminate these kinds of physical- and systems-architecture challenges."

One approach is to widen the bus, increasing the data path from 32 to 64 bits, and conceivably wider. That brings aggregate external-bus throughput more in line with processor speeds. It doesn't, however, address the latency issues associated with communication across that boundary.

When you move from internal to external I/O, there is a latency penalty that can't be overcome by brute-force widening of the bus. With that in mind, the industry has started to move away from further incremental enhancements to parallel bus technologies. And while there will be some of that in the short term, the long-term push is toward channel-based architectures for communications between the host and peripherals.

What's needed, then, is a switched-fabric, back-end I/O-channel infrastructure. Two emerging alternative schemes fit that definition. Developed by Intel and selected partners, the first is Next-Generation I/O (NGIO). It's a message-passing, switched-serial architecture to be released at 1.25 Gbits/s and 2.5 Gbits/s. A rival scheme, Future I/O, is an initiative by Compaq, IBM, Hewlett-Packard, Adaptec, and 3Com. During its first generation, Future I/O will use a four-pin communications protocol to create bidirectional links with an aggregate peak bandwidth of 2 Gbytes/s. The primary interconnect for NGIO is a 2.5-Gbit/s serial copper interface, while the primary interconnect for Future I/O is a 1-Gbyte/s interface using parallel copper cable.

Aside from NGIO and Future I/O, there is a third switched-fabric option that has been around for 10 years--FibreChannel. So why do we need newer developments when FibreChannel already is established?

One problem is FibreChannel's lack of remote-DMA support. "A switched-fabric connection needs to support two major things," says Larry Boucher, CEO of Adaptec. "One is block I/O for peripherals, sharing, and disaster recovery. The other is remote DMA (RDMA) for loosely coupled multiprocessing. FibreChannel didn't take RDMA into consideration."

It's conceivable that the FibreChannel Association could add RDMA capability to FibreChannel. But FibreChannel's trend toward tacking on new extensions is exactly the problem.

FibreChannel started out by defining a channel interface. Then, it added device and communications interfaces. As a result, FibreChannel devices are far from interoperable today. Tacking on RDMA would exacerbate the issue.

"The NGIO group or the Future I/O group won't say they're trying to replace FibreChannel," says an engineer who wishes to remain anonymous. "But how many back-end, switched infrastructures do we need? One way or another, these standards are going to converge. And I don't think they're going to converge around FibreChannel."

Among the other choices, NGIO is much further along than Future I/O. The Future I/O Alliance has crafted an architectural strategy, but it hasn't yet issued a Future I/O specification. By contrast, Intel has described the NGIO specification in some detail. Of the companies involved in Future I/O, Compaq, Hewlett-Packard, and IBM together proposed the PCI-X bus protocol to the PCI Special Interest Group (SIG) last year. The PCI-X protocol enables 64-bit, 133-MHz performance, for a total throughput of 1066 Mbytes/s.

Pros And Cons Of PCI-X Bus

The argument for PCI-X is that evolutionary enhancements to the PCI bus will satisfy I/O performance demands for the next two or three years. Both PCI and PCI-X are based on shared-bus technology. In a shared-bus architecture, as bus frequency increases, the bus length and the number of electrical loads on the bus must decrease to maintain signal integrity. This is forcing system vendors to move the microprocessors, memory controller, and memory as close together as possible. One of the goals of the new I/O schemes, therefore, is to decouple the I/O subsystem from this microprocessor/memory complex.

With its specification still under development, there are few concrete details available on Future I/O at present. The basic concept is for a point-to-point, switched-fabric interconnect. The Future I/O Alliance is aiming for the scheme to have performance that exceeds PCI-X at a cost comparable to existing PCI implementations.

At a component level, the Future I/O specification will employ a simple switch to connect existing I/O protocols (such as SCSI) to the system-area-network (SAN) interconnect. The initial Future I/O interconnect will be capable of 1 Gbyte/s per link in either direction, for a total of 2 Gbytes/s of bidirectional bandwidth. The Future I/O protocol will be embedded in the peripheral device circuitry, enabling it to communicate with the switch effectively.

To support the ability of SANs to connect modular building blocks, Future I/O designs are being established using three different distance models. For links from ASIC to ASIC, board to board, or chassis to chassis--distances under 10 m--Future I/O uses parallel copper etch within boards and parallel cables between chassis. Optical or serial copper cables are used for distances over 10 m.

In contrast to Future I/O, an NGIO specification already exists. It was developed by Intel, along with the primary early-development partners Dell Computer Corp., Hitachi Limited, NEC Corp., Siemens Information Communication Network Inc., and Sun Microsystems Inc. Several other companies also have been involved with developing NGIO, including Adaptec Inc., Fujitsu Ltd., EMC Corp., GigaNet Inc., LSI Logic Corp., Nortel Networks, Qlogic Corp., and Sequent Computer Systems Inc.

A host channel adapter (HCA) lies at the heart of the NGIO architecture (Fig. 1). The HCA is a bridge component that's linked directly into the system host's memory controller. The HCA has intimate knowledge of the host-processor/memory complex's internal protocols and cache status. Intel is building a chip implementing the HCA bridge, and companies like Sun plan on using the technology in future server designs. "We intend to have this part of the Intel chip set in pervasive use by the end of 2000," says Shults.



1. The heart of the Next-Generation I/O (NGIO) architecture, the host channel adapter (HCA), is a bridge that's linked directly into a system host's memory controller. The HCA links to the I/O units (slot boards) in a system through the switch.

Instead of using a parallel bus with parallel backplane slots, NGIO uses a switched fan-out scheme. A variety of switch configurations are possible. The HCA links to the I/O units (slot boards) in a system through the switch. NGIO's point is to provide as direct, efficient, and robust a link as possible between the I/O unit (slot board) and the memory subsystem of the host server. It creates a virtual, dedicated channel directly between main memory and I/O units.

The HCA builds on a concept called virtual-interface (VI) architecture. Several companies offer VI-architecture-based products for interprocessor communications. NGIO leverages the VI concept of using high-level command primitives. It uses only a few basic commands, including send, receive, remote-DMA read, remote-DMA write, and management primitives.
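As a rough illustration of what such a small command set looks like, here is a minimal C sketch. The type and field names below are invented for this example and are not taken from the NGIO specification; they simply show the flavor of a VI-style interface built around a handful of primitives.

/* Illustrative only -- names and fields are NOT from the NGIO specification. */
#include <stdint.h>

typedef enum {
    OP_SEND,          /* push a message to the remote endpoint        */
    OP_RECEIVE,       /* post a buffer for an incoming message        */
    OP_RDMA_READ,     /* pull data directly from remote memory        */
    OP_RDMA_WRITE,    /* push data directly into remote memory        */
    OP_MANAGEMENT     /* fabric- and device-management primitives     */
} ngio_opcode_t;      /* hypothetical type name */

typedef struct {
    ngio_opcode_t op;           /* which primitive to execute            */
    uint64_t      local_addr;   /* local buffer address                  */
    uint64_t      remote_addr;  /* remote buffer address (RDMA ops only) */
    uint32_t      length;       /* transfer length in bytes              */
    uint16_t      dest_mac;     /* fabric address assigned at config     */
} ngio_work_request_t;          /* hypothetical type name */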

The next key part of the NGIO architecture is the Link. You can scale NGIO performance simply by adding Links to the HCA bridge component; the aggregate bandwidth available grows with the number of Links.

The Link, which connects the server system to the external peripherals, defines the basic protocol for the NGIO scheme. Although the Link can operate in an unacknowledged fashion, devices will more often operate in an acknowledged mode. Those acknowledgments are structured into the protocol in a highly pipelined way, which means the system isn't waiting on one acknowledgment before sending out the next request.
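A minimal sketch of why that pipelining matters, assuming a simple sliding window of outstanding requests (the window size, names, and logic here are invented for illustration, not taken from the NGIO Link protocol):

#include <stdbool.h>
#include <stdint.h>

#define MAX_OUTSTANDING 8u        /* illustrative window size */

typedef struct {
    uint32_t next_seq;            /* sequence number of the next request to send     */
    uint32_t oldest_unacked;      /* oldest request still awaiting an acknowledgment */
} link_window_t;

/* May we transmit another request without waiting for acknowledgments? */
static bool can_send(const link_window_t *w)
{
    return (w->next_seq - w->oldest_unacked) < MAX_OUTSTANDING;
}

/* Retire everything up to and including the acknowledged request. */
static void on_ack(link_window_t *w, uint32_t acked_seq)
{
    if (acked_seq >= w->oldest_unacked)
        w->oldest_unacked = acked_seq + 1;
}

The sender keeps several requests in flight and retires them as acknowledgments arrive, instead of stalling after every request.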

The protocol is optimized specifically for communications between devices that are physically relatively close to each other--meters, as opposed to kilometers. NGIO isn't meant for wide-area-network (WAN) use. Initially, NGIO's serial physical technology will be copper. It will employ the same kind of PHY devices, connectors, and wires used in Gigabit Ethernet and FibreChannel environments.

Details of the Link protocol itself are an intriguing mix of complexity and simplicity (Fig. 2). From a high-level software view--driver or application code that's performing direct user-level I/O--the Link protocol specifies a hefty maximum packet size of 4 Gbytes. To make those packets easier to digest, the NGIO Link hardware has a built-in segmentation-and-reassembly (SAR) layer.



2. NGIO's Link protocol is a unique mix of simplicity and sophistication. The large, 4-Gbyte packet is broken into cells by the built-in segmentation-and-reassembly layer. The media-access-control (MAC) layer allows for addressing of up to 64,000 devices.

All of the devices in an NGIO environment have to accommodate SAR. When a packet goes into the switch fabric, the SAR layer breaks down the packets into a collection of variable-length, fairly constrained cells. The cells aren't fixed-length ones, like in ATM. But they're constrained enough so that the buffers in intermediate devices don't have to be large.
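For illustration, here is a minimal segmentation sketch in C. The cell structure and the size bound below are invented for this example; the real NGIO cell layout is defined by the specification. The only point being made is that cells are variable-length but bounded, so intermediate buffers stay small.

#include <stddef.h>
#include <stdint.h>

#define MAX_CELL_PAYLOAD 256       /* illustrative bound; keeps intermediate buffers small */

typedef struct {
    uint32_t       csn;            /* cell-sequence number within the packet */
    uint32_t       length;         /* payload bytes carried by this cell     */
    const uint8_t *payload;        /* pointer into the original packet       */
} cell_t;

/* Split 'packet' (len bytes) into cells no larger than MAX_CELL_PAYLOAD.
   'cells' must hold at least (len + MAX_CELL_PAYLOAD - 1) / MAX_CELL_PAYLOAD entries.
   Returns the number of cells produced. */
static size_t sar_segment(const uint8_t *packet, size_t len, cell_t *cells)
{
    size_t n = 0;
    for (size_t off = 0; off < len; off += MAX_CELL_PAYLOAD, n++) {
        size_t chunk = len - off;
        if (chunk > MAX_CELL_PAYLOAD)
            chunk = MAX_CELL_PAYLOAD;
        cells[n].csn     = (uint32_t)n;
        cells[n].length  = (uint32_t)chunk;
        cells[n].payload = packet + off;
    }
    return n;
}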

The media-access-control (MAC) layer of the NGIO Link protocol allows for addressing of up to a generous 64,000 devices. The MAC addresses are assigned dynamically at system-configuration time. Once assigned, they remain in use until the system is reconfigured.

MAC-layer internals include cell-sequence numbers (CSNs) and packet-sequence numbers (PSNs). Together, they let every device in a system ensure that the right cell is being received in the right order. If it isn't, then the hardware automatically retries the transmission.
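A rough sketch of the receive-side check those sequence numbers enable, again with invented names and without the hardware's actual retry machinery:

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t expected_psn;   /* next packet-sequence number we expect */
    uint32_t expected_csn;   /* next cell-sequence number we expect   */
} rx_state_t;

/* Returns true if the cell is the one expected; otherwise the caller would
   ask the sender to retry the transmission. */
static bool rx_accept(rx_state_t *rx, uint32_t psn, uint32_t csn, bool last_cell)
{
    if (psn != rx->expected_psn || csn != rx->expected_csn)
        return false;            /* out of order: trigger a retry */

    if (last_cell) {             /* packet complete: move to the next packet */
        rx->expected_psn++;
        rx->expected_csn = 0;
    } else {
        rx->expected_csn++;
    }
    return true;
}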

The next significant part of NGIO is the target channel adapter (TCA), which provides the interface between the fabric and the I/O-controller side of an I/O unit. The TCA specification gives server designers flexibility, since these controllers do not have to be of the same type. Current TCA-specification plans assign these I/O-unit types to particular membership groups. When an I/O unit comes online, the NGIO fabric discovers it and notifies the host server's CPU/memory system connected to the fabric.

The NGIO architecture lets a channel adapter connect directly to another channel adapter, or directly into a switch. Switches in the architecture interconnect to form a fabric, which lets hosts and targets communicate with a large number of other hosts and targets. The switches interconnect the fabric links by relaying cells between them, while remaining transparent to the end-points in the fabric.
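Conceptually, each switch's relaying job reduces to a forwarding-table lookup on the cell's destination MAC address, something like the illustrative sketch below (table layout and names invented for this example):

#include <stdint.h>

#define MAX_FABRIC_ADDRESSES 64000   /* the MAC-layer ceiling cited above */

typedef struct {
    uint8_t out_port[MAX_FABRIC_ADDRESSES]; /* dest MAC -> output port, set up at configuration time */
} switch_table_t;

/* Pick an exit port for a cell. The switch never touches the payload,
   which is what keeps it transparent to the endpoints. */
static int forward(const switch_table_t *t, uint32_t dest_mac)
{
    if (dest_mac >= MAX_FABRIC_ADDRESSES)
        return -1;                   /* unknown address: drop the cell */
    return t->out_port[dest_mac];
}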

Hoping to leverage switch-fabric, channel-I/O technology, the embedded-board/bus community is getting involved early in these emerging specifications. In separate efforts, both the VME International Trade Association (VITA) and the PCI Industrial Computer Manufacturers Group (PICMG) have plans underway to include these new I/O schemes in their VME and CompactPCI board specifications. Meanwhile, the two groups also are teaming up to craft mechanical specifications for board form factors specifically for NGIO.

VITA already has formed a task group, called VITA 31 High Speed Serial Architecture. This team is developing new architectures for VME-based NGIO boards and systems. According to Ray Alderman, executive director at VITA, the first step will be to define the VME P0 specifications for NGIO and create a VME/NGIO hybrid.

Meanwhile, both the Future I/O Alliance and the NGIO Forum have approached PICMG for its help with Eurocard packaging. Furthermore, those groups hope to use PICMG as a liaison between the server world, which those groups represent, and the telephony and telecom world. "With its emphasis on CompactPCI, PICMG has developed strong ties with and knowledge of telecom and telephony applications," says Joe Pavlat, president of PICMG and director of strategic planning for Motorola's Computer Group. It's likely that PICMG will hang NGIO and/or Future I/O off a set of CompactPCI's available I/O pins.

Another set of potential embedded users doesn't want a parallel bus on its NGIO board at all, nor does it need VME or PCI. That's where VITA and PICMG are combining efforts. These users are likely to utilize Eurocard-style (VME and CompactPCI) boards with 2-mm connectors. VITA already has a specification, VITA-29, which defines the mechanical specifications for a Eurocard board with a 2-mm connector. "Now we need to look at the characterization of 2-mm connectors from the connector vendors to see how they behave at gigahertz rates," Alderman says.

It's too soon to predict which channel-based, switch-fabric I/O link will dominate. NGIO is clearly farther along, so it has a leg up on the competition. Yet there's a valid concern that NGIO's success gives Intel more control of computer architecture. And because NGIO handles more of the protocol stack in hardware than Future I/O does, systems designers using NGIO will have less to do at the device-driver layer of software. That's not necessarily a bad thing, though. It will free software talent to put their creativity toward more vital efforts, such as application-code development. As Alderman says, "I've never heard of an engineer waking up in the morning all bright-eyed and bushy-tailed about going to work to write device drivers."

For more information on the Future I/O Alliance, see futureio.org. For more information about NGIO, go to ngioforum.org.

--------------------------------------------------------------------------------

Companies Mentioned In This Report
3Com Corp.
5400 Bayfront Plaza,
Santa Clara, CA 95052
(408) 326-5000
3com.com

Adaptec Inc.
691 South Milpitas Blvd.
Milpitas, CA 95035
(408) 945-8600
adaptec.com

Compaq Computer Corp.
20555 State Hwy. 249
Houston, TX 77070
(281) 370-0670
compaq.com

Dell Computer Corp.
One Dell Way
Round Rock, TX 78682
(800) 999-3355
dell.com

EMC Corp.
35 Parkwood Dr.
Hopkinton, MA 01748
(508) 435-1000
emc.com

Fujitsu Limited
1-6-1 Marunouchi,
Chiyoda-ku, Tokyo 100 Japan
pcserver@ing.hon.fujitsu.co.jp
fujitsu.co.jp

Giganet Inc.
2352 Main St.
Concord, MA 01742
(978) 461-0402
giganet.com

Hewlett-Packard
3000 Hanover St.
Palo Alto, CA 94304
(650) 857-1501
hp.com

Hitachi Limited
4-6 Kanda Surugadai
Chiyoda-ku, Tokyo 101-10 Japan
81-3-5471-8910
hitachi.co.jp

IBM Corp.
New Orchard Rd.
Armonk, NY 10504
(800) 426-4968
ibm.com

Intel Corp.
2200 Mission College Blvd.
Santa Clara, CA 95052
(408) 765-8080
intel.com

LSI Logic Corp.
1551 McCarthy Blvd.
Milpitas, CA 95035
(408) 433-8000
lsilogic.com

Motorola Computer Group
2900 S. Diablo Way
Tempe, AZ 85224
(602) 438-3025
mot.com

NEC Corp.
7-1, Shiba 5-chome Minato-ku, Tokyo 108-8001 Japan
03-3454-1111
nec.co.jp

Nortel Networks
P.O. Box 3511
Ottawa, Ontario K1Y 4H7 Canada
(613) 763-5795
nortelnetworks.com

QLogic Corp.
3545 Harbor Blvd.
Costa Mesa, CA 92626
(800) 662-4471
qlc.com

Sequent Computer Systems Inc.
15450 S.W. Koll Pkwy.
Beaverton, OR 97006
(503) 626-5700
sequent.com

Siemens AG
Wittelsbacherplatz 2
D-80312 Munich, Germany
49-89-636-47005
siemens.de

Sun Microsystems Inc.
901 San Antonio Rd.
Palo Alto, CA 94303
sun.com
(800) 555-9786




