Technology Stocks : Son of SAN - Storage Networking Technologies

To: J Fieb who wrote (1989) 5/27/2000 9:06:00 PM
From: J Fieb  Read Replies (1) of 4808
 
On Infiniband....Getting behind on the news here..
On the server I/O front, developments are less clear. While Intel has requested fast I/O support from the industry, specific technologies have been left up to third parties. While Intel has helped spec out more than 30 different designs for Itanium-based servers and workstations, it has imposed no real restrictions on OEM vendors in terms of product differentiation. For systems running four-way, eight-way or even 32-way SMP Itanium designs, OEMs have developed significant enabling technology of their own to set their products apart, especially in the areas of storage and server I/O.

Intel hasn't completely ignored these ancillary hardware components, however. On the I/O front, for instance, it is a charter member of the InfiniBand Consortium along with Compaq, Dell, Hewlett-Packard, IBM, Microsoft and Sun Microsystems. InfiniBand's architecture will be based around channel-based, switched I/O paths, which should allow it to scale independently of processor and operating system. While you shouldn't expect to see InfiniBand products showing up inside initial Itanium server offerings, the consortium has already demonstrated beta-level hardware and is slating product releases for the first half of 2001.

Initially, look for the PCI bus to maintain its place on Itanium motherboards

From INTC...

Infiniband provides a new vision
Jim Pappas, Director of Initiative Marketing, Enterprise Server Group, Intel Corp., Hillsboro, Ore.

New technology standards are created every day. Some, as is the case with PCI and USB, are extremely successful; others are not as well-received.

Will Infiniband architecture follow the likes of PCI and USB, or will it be another example of a technology with too much hype and too little delivery? Based on the need for a common switched fabric for servers, remote storage and networking, Infiniband technology promises to be a strategic inflection point for the computing industry, removing the bottlenecks created by today's shared bus I/O architecture.

As the Internet continues to demand greater performance and reliability from data centers, a new vision of distributed computing is emerging, where servers, remote storage and connections to local and wide-area networks exist as peers in a unified fabric. To deliver this vision, a new interconnect for the Internet data center is needed: Enter Infiniband architecture, a switched fabric interconnect technology primarily designed for connecting servers, remote storage and networking together, providing low latency direct access into host memory and delivering more flexible and effective data centers.

Current system architecture possesses inherent performance and reliability limitations that are centered around the dependence on a shared bus. Shared buses have historically proved to be extremely successful. However, as performance and reliability demands continue to escalate and more applications become I/O bound, systems are reaching a point where the traditional load/store transfer of data creates a bottleneck that affects data center performance. The shared bus creates contention between devices accessing memory and limits system performance through I/O interrupts to the processor. Shared buses also create physical restrictions for system design, limiting I/O to the number of slots configured within a server chassis; once the chassis is "full," another server must be added to create more I/O-attach points. With the growth of storage area networks and clustering, and the escalating demand for slots within the chassis, the problem is exacerbated.

With Infiniband architecture, the center of the Internet data center shifts from the server to a switched fabric. Servers, networking and storage all access a common fabric through serial links. All three types of devices can scale independently based on system requirements. It becomes possible to remove the I/O adapters from the server chassis, with the flexibility to locate I/O remote from the CPU-memory complex. Infiniband links provide scalable connections between fabric nodes and offer direct paths to server memory, removing the bottleneck of today's load/store topology. This direct path eliminates today's I/O interrupts to the processor, increasing system performance. Instead of contending for access to the shared bus, dual-simplex serial links running at 2.5 Gbits/second connect directly to memory via a host channel adapter.

The decoupling of I/O from the CPU-memory complex also enables the creation of a unified fabric for clustering, networking and storage communication. The Infiniband fabric allows for peer-to-peer data flow, such as direct communication from the local area network to remote storage devices. The server is removed from the role of arbiter between disparate networks. Storage topologies such as Fibre Channel and SCSI, and communication topologies such as Gigabit Ethernet and ATM connect seamlessly into the "edges" of the fabric through target channel adapters.

Performance within an Infiniband fabric begins with a four-wire 2.5-Gbit/s link. When connections are added to the fabric, the performance of the fabric increases. Aggregate throughput is measured by the bisectional bandwidth of the fabric and is not gated by shared bus limitations. To scale performance, the architecture offers the flexibility to aggregate multiple links; for example, two 2.5-Gbit/s links aggregate to 5-Gbit/s potential throughput. The architecture also supports multiple link widths including x4 and x12 implementations, which offer 10 Gbits/s and 30 Gbits/s, respectively.
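
The arithmetic behind those figures is simple; here is a minimal Python sketch using only the per-lane rate and link widths quoted above (the helper name is illustrative, not from any Infiniband API):

# Illustrative only: aggregate raw Infiniband link throughput from the figures quoted above.
LANE_RATE_GBPS = 2.5  # raw signaling rate of one serial link, per the article

def aggregate_throughput_gbps(lanes: int) -> float:
    """Aggregate raw throughput in Gbit/s for a link of the given width."""
    return lanes * LANE_RATE_GBPS

for width in (1, 2, 4, 12):
    print(f"x{width:<2} link: {aggregate_throughput_gbps(width):5.1f} Gbit/s")
# x1: 2.5, x2: 5.0, x4: 10.0, x12: 30.0 -- matching the figures in the text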

A single Infiniband fabric can scale up to 64,000 individual nodes. Additionally, arbitrarily large fabrics can be created by connecting multiple subnets (up to 64,000 nodes each) via Infiniband routers and through IPv6 addressing.

The Internet is creating explosive growth and increasing demand for server density. As Infiniband architecture delivers on the promise of increased server density and modularity, the architecture also creates a more manageable rack interconnect. With the removal of I/O cards from the server chassis there are no longer multiple cables of different types connecting to the server. Instead, these are replaced by Infiniband links that connect the server to an Infiniband switch and onto remote nodes, which are also communicating through the switched fabric. With implementations of Infiniband architecture, the connections required between components will substantially decrease as compared to current systems. Multiple data types will exist on a unified fabric and multiple servers will "share" remote connections to storage and networking.

The media are filled with stories of tragedies of e-business sites that fail due to traffic overload. Downtime can now be calculated in millions of dollars per hour, creating a heightened demand for reliable Internet data centers. Infiniband architecture delivers increased reliability through a move to message-passing-based communication; redundancy at the link, switch and fabric levels; and easy hot-swap of fabric elements.

By moving from the error-prone load/store transfer of data toward a message-passing model, interrupts and server stalls resulting from I/O errors are significantly reduced, if not eliminated.

With the three levels of redundancy possible with Infiniband technology (link, switch and fabric), reliability is achieved several times over. Redundant links provide for continuous data flow in case one link fails. Faults are easily isolated and repairs can quickly be made without any interruption of service. With the addition of redundant switches to an existing fabric, another layer of redundancy is added. Redundant paths between switches ensure uptime even when a switch fails. For an optimal level of redundancy, Infiniband architecture delivers the capability to configure hosts and targets to redundant fabrics. This ensures fabric resilience even with the failure of multiple fabric elements.
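
To see why stacking redundancy at the link, switch and fabric levels pays off, here is a rough back-of-the-envelope sketch; the failure probabilities are invented for illustration and are not from the article:

# Rough illustration of stacked redundancy; the failure probabilities below are
# hypothetical, assumed independent, and not taken from the article.
P_LINK, P_SWITCH, P_FABRIC = 0.01, 0.001, 0.0001

def all_copies_fail(p_single: float, copies: int = 2) -> float:
    """Probability that every redundant copy of one element fails at the same time."""
    return p_single ** copies

# Union bound: connectivity is lost only if some level loses all of its copies.
p_outage = (all_copies_fail(P_LINK)
            + all_copies_fail(P_SWITCH)
            + all_copies_fail(P_FABRIC))
print(f"approximate outage probability with duplication at each level: {p_outage:.1e}")
# Versus ~1e-2 for a single unprotected link, duplication drops the dominant term to ~1e-4.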

Infiniband expansion

The Infiniband Trade Association plans to release version 1.0 of its specification to members in mid-2000. Many companies already have begun development work on Infiniband products. The first Infiniband prototype fabric was demonstrated at the Intel Developers' Conference and Microsoft WinHEC. Intel plans to deliver Infiniband architecture-compliant samples by the end of 2000 and industry solutions are expected in the second half of 2001.

The momentum behind the Infiniband architecture and products is growing fast. The ability to address the specific needs faced by the Internet data centers promises to launch Infiniband architecture into the realm of PCI and USB as a significant industry specification and a strategic point of change for the computing industry.

eetimes.com

LSI says.......


May 22, 2000, Issue: 1114
Section: Communications -- Focus: Servers, Routers, Switches
Advanced I/O still comes up short
Deepal Mehta, Business Unit Manager, Computer Servers, and William Lau, Senior Manager, CoreWare Development & Applications, Internet Computing Division, LSI Logic, Milpitas, Calif.

The Internet and burgeoning e-commerce applications are placing major demands on server I/O performance. The PCI and PCI-X buses are expected to meet today's I/O requirements. Infiniband, the next-generation I/O technology, promises higher bandwidth and scalability to meet emerging I/O needs. It also offers better reliability and quality-of-service (QoS) capabilities. Yet even this advanced I/O technology comes with its own set of system design issues and challenges.

Against this backdrop, there's also the problem that I/O performance isn't keeping pace with CPU performance. Server OEMs are focusing on I/O and memory subsystem designs to differentiate their server platforms.

Consider that designers still face signal-integrity and timing issues designing PCI I/O at the 66-MHz level. The PCI-X specification permits the I/O design to move to a maximum of 133-MHz, 64-bit performance. That is double the bandwidth of the 66-MHz, 64-bit PCI bus, resulting in 1-Gbyte/second performance.
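
As a quick check of those numbers, peak bandwidth for a parallel bus is just clock rate times width; a minimal Python sketch:

# Back-of-the-envelope check of the quoted parallel-bus bandwidths.
def peak_bandwidth_mbyte_s(clock_mhz: float, width_bits: int) -> float:
    """Peak bandwidth in Mbyte/s: clock (MHz) x width (bits) / 8 bits per byte."""
    return clock_mhz * width_bits / 8

print(peak_bandwidth_mbyte_s(66, 64))   # 528.0 -> roughly half a Gbyte/s for 66-MHz, 64-bit PCI
print(peak_bandwidth_mbyte_s(133, 64))  # 1064.0 -> roughly 1 Gbyte/s for 133-MHz, 64-bit PCI-X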

Still, issues involving the areas of timing performance and signal integrity continue to arise. Here, the I/O interface is pushed to the limit. For example, the clock-to-Q output propagation delay for 66-MHz PCI is 6 nanoseconds, while the PCI-X spec calls for a 3.8-ns path delay with basically the same logic elements. Setup time is 3 ns for 66-MHz PCI and only 1.2 ns for PCI-X. Also, there is a 7.5-ns cycle time between input and output registers for the core logic to propagate. That core can easily have 15 to 20 levels of logic. So core logic timing is more relaxed due to the PCI-X protocol, but the I/O is much tighter.

The PCI-X 3.8-ns clock-to-Q output delay means the PCI-X clock comes into the ASIC device through the input receiver, to a phase-locked loop (PLL) and clock driver, and on to the clock input of the output latch. That output flip-flop then drives through a JTAG mux and finally through a PCI-X driver. This path not only includes the delay of each element, but it also includes PLL jitter, PLL offset and clock skew. All these must be achieved within 3.8 ns to meet the PCI-X spec.

Critical timing

Setup time is the other critical timing area. Data enters the ASIC, goes through the input receiver, through an input JTAG mux and on to the input register, where it must be stable before the clock arrives. These data transactions must be achieved in only 1.2 ns, and must also account for PLL jitter and clock skew.

Perhaps the most critical issue is the design of the PCI-X I/O circuit's input/output receiver path. The driver definitely cannot be too slow; otherwise, the designer cannot meet timing closure. But the tricky part is that it cannot be too fast either, because in a high-speed system the designer must pay close attention to signal integrity and noise problems. The PCI-X bus is by design a noisy bus, since the signaling is based on reflected-wave signaling on the transmission line. Plus, it is a long, unterminated bus.

When designing the driver, the design engineer must consider the signal integrity and overall timing budget of the system, as well as the ASIC. For instance, let's consider the ASIC design and the driver part of the delay path. If the driver operations consume 3 ns, only 0.8 ns remain for the receiver. So the burning issue is how do you budget the delay for the driver and for the receiver? The design engineer has to consider that issue carefully.
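
A minimal sketch of that budgeting exercise in Python: the 3.8-ns figure is the spec value quoted above, while the per-element delays are assumed values chosen only to show how quickly the margin disappears.

# Illustrative PCI-X output-timing budget check. The 3.8-ns budget is from the text;
# the individual element delays below are assumptions for illustration only.
CLOCK_TO_Q_BUDGET_NS = 3.8

clock_to_q_path_ns = {
    "clock input receiver":             0.2,
    "PLL jitter + offset + clock skew": 0.2,
    "output flip-flop clock-to-Q":      0.2,
    "JTAG mux":                         0.1,
    "PCI-X output driver":              3.0,
}

total_ns = sum(clock_to_q_path_ns.values())
print(f"clock-to-Q path: {total_ns:.1f} ns, margin {CLOCK_TO_Q_BUDGET_NS - total_ns:+.1f} ns")
# With 3.0 ns consumed by the driver, only about 0.8 ns of the budget remains for the
# rest of the path -- the driver/receiver budgeting tension the paragraph above describes.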

On the other hand, the designer can make the buffer fast by reducing the propagation delay. However, lower propagation delay typically results in a higher slew rate, which makes the buffer noisy. The designer needs a balance between propagation delay and slew rate. Signal-integrity issues, therefore, must be taken into consideration to make sure the buffer falls within the PCI and PCI-X specs and is not too noisy. At times, a buffer design can fully meet the PCI-X spec but still be very noisy, and it can still draw high current due to the pre-driver stages. In that regard, it is important for the designer to have sufficient previous PCI design experience to execute a well-balanced circuit.

Infiniband uses point-to-point serial interfaces. That approach provides several benefits at the physical layer over parallel multidrop technologies that have been used in I/O buses traditionally (e.g., PCI and PCI-X):

In a point-to-point signaling environment there is only one driver (at one end of the wire) and one receiver (at the other end of the wire). This results in a clean transmission path where signals may propagate down the wire at high speeds with minimal distortion and degradation.
Multidrop buses, on the other hand, have multiple drivers and receivers hanging off the same wire at different points along the wire. Each driver and receiver introduces transmission-path imperfections (such as parasitic inductance and capacitance). Those imperfections result in signal reflections, distortion and attenuation. Consequently, the maximum speed at which you can drive a signal in a multidrop environment is lower than in a point-to-point environment.

Infiniband employs serial connections with embedded clocks and lane de-skew (for x4 and x12 Infiniband) on the receive side. With this architecture, you don't need to worry about clock-to-data skew because the clock is actually embedded in the data. With lane de-skew you don't need to worry about skew between data lines (as long as you keep that skew within what is specified in the Infiniband standard) because any skew that has been introduced is eliminated on the receive side through realignment techniques. These two features let you go longer distances because you don't have to carefully manage the skew between clock and data, and between data lines.
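
As a conceptual illustration of receive-side lane de-skew (this is not the actual Infiniband training or alignment mechanism, and the marker symbol is invented), each lane is buffered and realigned on a shared marker before the data is re-striped:

# Conceptual sketch of lane de-skew: each lane of a multi-lane link arrives with a
# different delay, and the receiver realigns the lanes on a shared marker symbol.
# The marker name and symbol stream are invented for illustration.
MARKER = "ALIGN"

def deskew(lanes: list[list[str]]) -> list[list[str]]:
    """Drop everything up to and including the shared marker on each lane."""
    return [symbols[symbols.index(MARKER) + 1:] for symbols in lanes]

# Three lanes of one striped stream, each skewed by a different number of symbol times.
lanes = [
    ["idle", "ALIGN", "d0", "d1", "d2"],
    ["idle", "idle", "ALIGN", "d0", "d1", "d2"],
    ["ALIGN", "d0", "d1", "d2"],
]
print(deskew(lanes))  # every lane now starts at d0, so lane-to-lane skew is gone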

Server designs

On another front, the designer must factor in the effects next-generation process technology will have on server I/O designs. PCI and PCI-X are parallel buses that require a large number of I/O pins. The PCI-X device requires 64 data and address pins and another 16 command pins for 1-Gbyte/second bandwidth. As semiconductor technology moves to 0.18 micron and below, the number of pins required by PCI-X may make the design pad-limited. This limits the die-size reduction achievable through next-generation process technology. The Infiniband standard, on the other hand, is a serial technology and thus offers considerably more bandwidth with a lower pin count.

Essentially, Infiniband is a switch fabric-based architecture. It decouples the I/O subsystem from memory by using channel-based point-to-point connections rather than a shared-bus, load/store configuration. The interconnect uses a 2.5-Gbit/s wire-speed connection with 1-, 4- or 12-wire link widths. One wire provides theoretical bandwidth of 2.5 Gbits/s, four wires provide 10 Gbits/s and 12 wires provide 30 Gbits/s bandwidth. Hence, Infiniband provides system designers scalable performance through multiline connections, as well as a host of interoperable link bandwidths.
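
A rough way to see the pin-count advantage is bandwidth per signal pin. The sketch below uses only figures quoted in these two articles (80 data/address/command pins and roughly 1 Gbyte/s for PCI-X; four wires per 2.5-Gbit/s link, per the Intel piece above) and ignores clock, control and power pins:

# Rough bandwidth-per-pin comparison, using only figures quoted in the articles above.
def gbit_per_pin(total_gbit_s: float, signal_pins: int) -> float:
    return total_gbit_s / signal_pins

# PCI-X: 64 data/address pins + 16 command pins for ~1 Gbyte/s (~8 Gbit/s).
print(f"PCI-X          : {gbit_per_pin(8.0, 64 + 16):.3f} Gbit/s per pin")
# Infiniband: four wires (two differential pairs) per 2.5-Gbit/s link.
print(f"Infiniband x1  : {gbit_per_pin(2.5, 4):.3f} Gbit/s per pin")
print(f"Infiniband x4  : {gbit_per_pin(10.0, 16):.3f} Gbit/s per pin")
print(f"Infiniband x12 : {gbit_per_pin(30.0, 48):.3f} Gbit/s per pin")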

In an Infiniband server node, there is an Infiniband I/O bridge rather than the PCI I/O bridge. Also known as an Infiniband host-channel adapter, the Infiniband bridge generates Infiniband links, which are then connected to the Infiniband network, which may consist of an interprocessor communication (IPC) network, a storage area network (SAN) or a local-area/wide-area network (LAN/WAN) (Fig. 1). Various I/O subsystems like Ethernet, Fibre Channel, SCSI or even interprocessor communication communicate through the Infiniband network switch fabric.

Migrating a PCI-X server I/O design to Infiniband involves several considerations. A key decision is whether to embed the serial transceiver in the ASIC or keep it external as a separate PHY. Of prime importance is a power analysis of the device, since an embedded transceiver adds power consumption. On the other hand, embedding the serial transceiver in the ASIC requires fewer pins, which may lead to a smaller die size and package option and hence lower cost. So here the designer is dealing with cost/power trade-offs.

The designer also has to evaluate power consumption in terms of packaging and carefully determine the type of package best suited to the transceiver, whether it be a standalone or embedded device.

Another consideration is the number of wire implementations the design requires: is it 1, 4 or 12? A big part of this decision comes from matching link bandwidth with memory bandwidth on the server node, as well as with the performance the designer is attempting to achieve.

Interoperability is key to any new technology. Infiniband is no exception and poses certain challenges to the designer to make sure the server I/O Infiniband subsystem interoperates with other subsystems in the enterprise.

End-to-end interoperability from host-channel adapter to storage or networking element through the Infiniband fabric is very important. It is also important that the selected operating system support Infiniband.

Infiniband is still in its infancy as a new technology, and the spec is still evolving. So, rather than hardwiring a device, the designer may choose to have an embedded microcontroller within the chip to gain extra programmable intelligence.

Spec evolves

Otherwise, with a state machine implementation, the system designer must keep changing it as the spec evolves. But with the programmability a MIPS or ARM microcontroller core provides, a designer can be more flexible in implementing the Infiniband protocol. Embedded intelligence in the target devices also provides the traditional functions of minimizing interrupts, reducing bus traffic and offering a better error-recovery mechanism.
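
A minimal sketch of that design choice, with invented state and event names: a table-driven handler whose transition table can be reloaded by firmware on the embedded core as the spec changes, rather than a hard-wired state machine.

# Hypothetical sketch of a table-driven link protocol handler. State and event names
# are invented; the point is that updating behavior means loading a new table,
# not respinning hard-wired logic.
LinkFSM = dict[tuple[str, str], str]

link_fsm_v1: LinkFSM = {
    ("down", "train_ok"):       "up",
    ("up", "error"):            "recovery",
    ("recovery", "retrain_ok"): "up",
}

def step(table: LinkFSM, state: str, event: str) -> str:
    """Apply one event; unknown (state, event) pairs leave the state unchanged."""
    return table.get((state, event), state)

state = "down"
for event in ("train_ok", "error", "retrain_ok"):
    state = step(link_fsm_v1, state, event)
print(state)  # "up"

# A later spec revision only needs a new table, e.g. dropping straight to "down" on error:
link_fsm_v2: LinkFSM = {**link_fsm_v1, ("up", "error"): "down"}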

Signal integrity is another area the designer must evaluate. Since Infiniband touts a 2.5-Gbit/s wire speed, signal integrity is paramount. Serializer/deserializer (serdes) circuits that operate at these speeds often use expensive bipolar, GaAs or BiCMOS process technologies. Few chipmakers offer serdes in standard CMOS process technology, and even fewer can embed this Infiniband technology in a large ASIC and do it cleanly with good signal integrity.

In addition, it is important to have a robust receiver that can receive attenuated or distorted signals without errors.

Some design camps want to completely switch to Infiniband. Other OEMs don't believe PCI-X will go away soon and believe that PCI-X and Infiniband will co-exist until the end of this decade. Hence, there will be system engineers who will opt for a hybrid system that supports both PCI and Infiniband. This calls for a system architecture that supports PCI-X for legacy functionality, as well as Infiniband.

eetimes.com

Copyright © 2000 CMP Media Inc.


