So, what do the religious evangelical and George Gilder have in common? Everything, he says.
"To invest you need faith because you can't really know the future. That's how my religious vision merges with my technological vision. You can know all about software and how computers work but if you won't know what it's for if you don't have the source code."
EE Times issue on interconnects. The intro:
Buses keep pace by switching, serializing
By Jeff Child
For years, parallel I/O buses have been the plumbing at the heart of computer and embedded-computer systems. The PCI bus, for instance, has enjoyed years of success since its birth in the early 1990s. Recently, though, various serial interconnect schemes have begun to displace legacy parallel approaches. Serial schemes provide far more bandwidth and higher slot counts than parallel alternatives. At the same time, they add a host of new software and networklike complexities that computer system designers aren't used to.
The articles in this section explore the complexities, trade-offs and benefits of today's popular and emerging serial bus technologies. Serial schemes covered in the articles include USB, 1394, InfiniBand, Fibre Channel and low-voltage differential signaling (LVDS). Other new technologies that aren't strictly serial, such as RapidIO, are also dealt with. In fact, the serial nature of these buses is only one aspect of the technologies. So while "serial" is a convenient way to categorize them, in a more general sense these interconnect schemes represent a wave of modern I/O technologies that form the heart of today's electronic systems.
All these modern bus schemes have one important thing in common: Each is a successful industry initiative or has the potential to be one. That's no trivial point. It means that, by and large, designers of peripherals and computer systems can develop products using these interface technologies with some confidence that they're not traveling down a dead-end street. The history of buses is crowded with schemes that never caught fire.
There's no crystal ball to help you see which technologies will thrive and which won't. But Jim Pappas, director of initiative marketing at Intel's Enterprise Platform Group, is, perhaps more than anyone in the computer industry, a credible authority to offer insight along those lines. Not only was Pappas involved in the early development of the PCI bus, he also played a key role in driving the Accelerated Graphics Port (AGP) and the Universal Serial Bus. He's now in the thick of ramping InfiniBand up toward widespread market acceptance.
According to Pappas, there are some important ingredients that go into crafting a successful technology initiative. The first is focus. "When we developed PCI we had a very clear problem to solve," said Pappas. "The problem of the day was that computer graphics couldn't effectively scale on the ISA bus. That was a huge problem. If we didn't solve that the market would have stalled. The world was moving to Windows and we didn't have an I/O structure that supported it. Now the world is moving to [the] Internet and we don't have a data center infrastructure that supports it."
Like PCI before it, InfiniBand is being developed with a clear purpose in mind. Internet data centers require scalability that traditional bus architecture just doesn't handle very well. "The whole purpose of InfiniBand, in my view, is how do [we] scale computer systems to the point where we [can successfully] meet the demands put on Internet data centers today," said Pappas.
With today's server clusters, every time you want to add something, say another server, to your cluster, you have to open your box, plug in a bunch of PCI cards, and connect those cards to a bunch of different interfaces outside. You need Ethernet cables linked to your networking solutions. You need SCSI or Fibre Channel cables plugged into your storage solution. On top of that, you typically have some sort of proprietary interface for clustering so that the computers can talk to each other.
With InfiniBand all that goes away. You can take the PCI bus out of the system and have only InfiniBand ports. You then have a central InfiniBand switch into which you may plug as many servers as you want. If you need to add storage, you just link a new storage array into that same central InfiniBand switch, and all of the servers now have access to that storage array.
Likewise, if you want to add more Internet router capability, you take some Ethernet adapters and plug them into a common switch, and once again that resource becomes available to every computer that's connected. Pappas' article lays out the particulars of InfiniBand and its benefits.
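To make that picture concrete, here is a minimal sketch in Python of the fabric idea: attaching a new resource at one point makes it visible to every connected server. The class and method names are purely illustrative, not any real InfiniBand API.

    # Minimal sketch: a shared switch fabric vs. per-server adapter cabling.
    # Class and method names are illustrative only, not a real InfiniBand API.
    class Fabric:
        """A central switch: anything attached is reachable by everything else."""
        def __init__(self):
            self.nodes = []

        def attach(self, node):
            self.nodes.append(node)

        def reachable_from(self, node):
            # Every attached node can reach every other attached node.
            return [n for n in self.nodes if n != node]

    fabric = Fabric()
    servers = [f"server-{i}" for i in range(4)]
    for s in servers:
        fabric.attach(s)

    # Adding one storage array makes it visible to all four servers at once,
    # with no per-server SCSI, Fibre Channel or Ethernet cabling.
    fabric.attach("storage-array-1")
    for s in servers:
        assert "storage-array-1" in fabric.reachable_from(s)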
QLogic's Rob Davis also looks at InfiniBand, but from the implementation perspective. Davis' article explores the benefits of InfiniBand for clustering and high-reliability system designs.
USB is another bus that originated with a clear purpose. Intel's Pappas, who was also involved in driving that bus's acceptance, said that keeping that focus was a challenge at times. "We kept USB focused as a point-to-point bus for attaching very, very simple dumb peripherals to very powerful PCs. We were doing that because the peripheral market needs to be low-cost. So we avoided feature creep and we kept it focused."
When USB was being developed, said Pappas, everyone and his brother asked him to add another twist to it. Some said they needed a peer-to-peer bus or wanted it to look more like a network. Another company wanted more power on the bus to drive high-powered devices. Yet another wanted a watertight connector so the bus would work in underwater dive computers. It was one thing after another. "We just kept saying no," said Pappas. "If somebody wanted something else outside of that, then USB was the wrong technology. We very much stuck to that. I think that focus was very important to USB's success."
Lucent Technologies' James Clee takes the pulse of today's USB in his article. The new 2.0 version of USB revs the bus's bandwidth up to 480 Mbits/second, a 40-times improvement over its predecessor, the 12-Mbit/s USB 1.1. The increase in speed enables the PC to perform tasks once considered impractical for USB. High-density digital images now can be downloaded in seconds instead of minutes. Videoconferencing can be conducted without using all of the bus bandwidth.
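As a rough sanity check on those numbers, the arithmetic below compares transfer times at the raw 12-Mbit/s and 480-Mbit/s signaling rates. The 128-Mbyte image size is an assumed example, and protocol overhead is ignored, so real transfers would be somewhat slower.

    # Back-of-the-envelope transfer times at the raw USB signaling rates.
    # The 128-Mbyte image is a hypothetical example; protocol overhead is ignored.
    usb11_rate = 12e6          # bits/s, USB 1.1
    usb20_rate = 480e6         # bits/s, USB 2.0
    image_bits = 128 * 8e6     # a hypothetical 128-Mbyte scanned image

    print(f"speed-up: {usb20_rate / usb11_rate:.0f}x")             # 40x
    print(f"USB 1.1: {image_bits / usb11_rate / 60:.1f} minutes")  # ~1.4 minutes
    print(f"USB 2.0: {image_bits / usb20_rate:.1f} seconds")       # ~2.1 seconds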
For its part, the IEEE 1394 bus, also known as FireWire, has had a rockier history than its cousin, USB. IEEE 1394 languished in specification mode for over a decade. It's an example of a technology that started off without a clear focus. Some saw 1394 as a serial replacement for SCSI. Others saw it as a digital audio/video connection. For others it was to be the "home-networking" bus.
Now that the market has sorted itself out, the consensus is that 1394 is the best consumer electronics A/V interconnect. "It's succeeding now because of that," remarked Pappas. "But when it was trying to do everything, it wasn't as successful. I use the Swiss Army Knife example. A Swiss Army Knife does 10 things poorly. What's better is something that does the job you need to do better than anything else."
Under the 1394 hood
Two articles in this section delve into 1394. James Clee's article examines the state of today's 1394 technology and explores its trade-offs and benefits. Meanwhile, James Snider of Texas Instruments Inc. looks at how the market landscape for 1394 is changing and how those changes are influencing product development.
Among the emerging high-speed serial bus technologies, InfiniBand has probably had the most significant impact on computer architecture. It replaces shared buses, like PCI, with a switched-fabric architecture. At the most fundamental level, devices on a shared bus share a common set of copper traces. By design, only one device can talk on those wires at any given time. Shared buses therefore include arbitration protocols to determine which device wants to talk and which gets on the bus next. And, because only one device can talk at a time, there's a bottleneck. In fact, the more cards you plug in, the less access any one card has to the bus, because each has to share that resource.
In contrast, InfiniBand is a very different kind of architecture. It's a switched-fabric scheme in which many paths through the fabric can carry data at the same instant. Rather than sharing one set of copper traces with every other device, each device has its own dedicated inputs and outputs, allowing many transactions to happen at once. In fact, adding more devices to a fabric creates more possible connections, which produces more bandwidth. By contrast, the trend in shared buses is for slot counts to decrease as speeds rise.
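A toy model makes the scaling contrast explicit. The 4-Gbit/s shared-bus figure below is a placeholder chosen only to illustrate the trend, while 2.5 Gbit/s is the sort of per-link rate InfiniBand-class fabrics use.

    # Toy scaling model: per-device bandwidth on a shared bus vs. a switched fabric.
    # The figures are placeholders chosen only to illustrate the trend.
    def shared_bus_per_device(bus_bw_gbps, devices):
        # One transaction at a time: every added card dilutes everyone's share.
        return bus_bw_gbps / devices

    def fabric_aggregate(link_bw_gbps, devices):
        # Each device has its own dedicated link into the switch, so aggregate
        # bandwidth grows as devices (and therefore links) are added.
        return link_bw_gbps * devices

    for n in (2, 4, 8, 16):
        print(n, "devices:",
              f"shared bus {shared_bus_per_device(4.0, n):.2f} Gbit/s each,",
              f"fabric aggregate {fabric_aggregate(2.5, n):.1f} Gbit/s")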
Mike Jenkins and Rich Hovey of LSI Logic Corp. (Milpitas, Calif.) give a detailed analysis of why the industry is moving over to serial interconnects. Apart from architectural considerations, parallel interconnects are running up against fundamental physical limits. Higher and higher clock speeds are making issues such as clock skew and wire-to-wire electromagnetic interference major physical barriers. Even if designers can find solutions to these problems, those solutions are becoming too complex to make the effort worthwhile.
But higher clock speeds begin to make serial interconnect feasible, and the basic topology is far simpler. Serial interconnects also reduce the burden on I/O hardware that is created by high pin counts.
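One way to see the pin-count argument: PCI's 32-bit, 33-MHz figures are real, and a quick comparison, ignoring encoding and protocol overhead, shows that a single serial pair at a 2.5-Gbit/s lane rate can match the parallel bus's peak throughput.

    # Pin-count comparison at roughly equal throughput.
    # PCI's 32-bit/33-MHz figures are real; encoding and protocol overhead are ignored.
    pci_width_bits = 32
    pci_clock_hz = 33e6
    pci_peak = pci_width_bits * pci_clock_hz    # ~1.06 Gbit/s over 32 data pins

    serial_lane_rate = 2.5e9                    # one differential pair (2 pins)
    lanes_needed = pci_peak / serial_lane_rate  # ~0.42, so a single lane suffices

    print(f"PCI peak: {pci_peak/1e9:.2f} Gbit/s on {pci_width_bits} data pins")
    print(f"Serial lanes needed to match it: {lanes_needed:.2f} (i.e. one 2-pin pair)")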
Robert Krauss of Innova Semiconductor (Munich, Germany) cautions that the impressive data rates advertised for various serial schemes may not actually be realized in practice. One hidden problem, he points out, is that distance plays a crucial role in performance. Also, some serial buses are not strictly a single-wire system, but multiplex something like 36 channels onto four serial channels. The residual parallelism in that type of scheme can still introduce clock skew and interference problems.
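The multiplexing point reduces to simple arithmetic: folding 36 channels onto four serial lanes means each lane must run roughly nine times the per-channel rate, before any framing overhead. The 100-MHz per-channel rate below is an assumed example.

    # Required serial lane rate when 36 parallel channels are folded onto 4 lanes.
    # The 100-MHz per-channel rate is an assumed example; framing overhead is ignored.
    parallel_channels = 36
    serial_lanes = 4
    per_channel_rate = 100e6   # bits/s, hypothetical

    mux_ratio = parallel_channels / serial_lanes   # 9 channels per lane
    lane_rate = mux_ratio * per_channel_rate       # 900 Mbit/s per lane

    print(f"mux ratio: {mux_ratio:.0f}:1, required lane rate: {lane_rate/1e6:.0f} Mbit/s")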
Before plunging into a new design with the specs for a serial bus in hand, he recommends that designers go through a checklist to ensure that the specific connection solution will deliver the required performance level.
As they proceed, designers need to balance a complex set of issues. Burst mode vs. sustained throughput, future scalability and adaptability to existing I/O hardware are common considerations that can easily trip up a design. The I/O pins represent an important physical barrier that can have a serious impact on interconnect performance, he points out.
Also, even short serial interconnects introduce significant jitter above the gigahertz level, since self-induction on the wire produces an unpredictable filtering operation that depends on frequency.
As serial-interconnect designers get more experience with high-speed signaling, they may be able to head off some of those problems. The InfiniBand standard is one new entrant that is benefiting from earlier efforts. As a sophisticated switched fabric, for example, it automatically solves the scaling problem for designers.
InfiniBand isn't the only emerging interconnect technology that's embraced the switched-fabric concept. RapidIO, a proposed specification that came out of work done by Mercury Computer Systems and Motorola, is another. A RapidIO link consists of 8 or 16 data signals, a clock signal and a frame signal. These signals are duplicated: one set for transmit, one set for receive, for a total of 20 signals for the 8-bit interface. Because LVDS is used, each signal requires two wires. As a result, the minimum 8-bit interface requires 40 wires.
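That wire-count arithmetic generalizes directly; the sketch below reproduces the 8-bit figures and extrapolates the 16-bit case from the same structure.

    # RapidIO signal and wire counts: data + clock + frame, duplicated for
    # transmit and receive, with two wires per LVDS signal.
    # The 16-bit total is extrapolated from the same structure.
    def rapidio_wires(data_width):
        signals_one_direction = data_width + 1 + 1   # data + clock + frame
        signals_total = signals_one_direction * 2    # transmit set + receive set
        return signals_total, signals_total * 2      # LVDS: 2 wires per signal

    for width in (8, 16):
        signals, wires = rapidio_wires(width)
        print(f"{width}-bit link: {signals} signals, {wires} wires")
    # 8-bit: 20 signals, 40 wires (matching the figures above); 16-bit: 36 signals, 72 wires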
In his article, Richard Jaenicke of Mercury Computer Systems steps through the particulars of RapidIO and how it fits into the existing legacy of PCI. Sky Computer's Steve Paavola puts RapidIO in perspective in his article by comparing it to InfiniBand. Paavola's story also looks at LVDS, which became popular years ago for driving the LCD displays in laptop computers. Both InfiniBand and RapidIO use LVDS as their fundamental electrical scheme.
For its part, National Semiconductor embraced LVDS as an I/O bus technology three years ago. Its BLVDS (bus low-voltage differential signaling), along with serialization, offers a way to build small, fast interfaces across cables and backplanes. This concept applies not only to box-to-box, point-to-point or multidrop connections but also to connections within a box. National's James Chang discusses the benefits of BLVDS while comparing the multidrop, multipoint and switched implementations of the technology in his article.
At last month's Intel Developer Forum, Intel tipped its plans for rolling out InfiniBand silicon and other development aids. Intel said it plans to build all three major types of silicon that are used in InfiniBand fabrics. The company will initially introduce a suite of three new hardware components intended to serve as the heart of the InfiniBand architecture. The system components are designed for the IA-32 and the newer Itanium server platforms.
InfiniBand switching fabric
An InfiniBand host channel adapter will connect servers to the InfiniBand switching fabric for Internet services. An InfiniBand switch will allow OEMs to build the InfiniBand switching fabric and connect servers to remote storage and networking devices. The target channel adapter will establish the connection between devices within an InfiniBand fabric.
These chips will be made available in the first quarter of next year, the company said, and will help to cement the specification by giving system integrators off-the-shelf components. In addition to offering high-speed I/O, the components will ease the problem of scaling up interconnect systems.
At the IDF Intel also revealed other plans for InfiniBand. Pappas called these efforts the most comprehensive enabling plan that Intel has put together on any of its technologies. Part of the plan includes Intel's Port Logic Program, where companies will be able to license the same circuits that Intel is using to do InfiniBand.
"We're anticipating a lot of demand for that as well. It will be very economical," said Pappas. He predicted that it will be less expensive for most companies to license the circuit specs from Intel than to design it from scratch.
Another part of Intel's plan for enabling InfiniBand revolves around driver software. Intel has done a lot of software work, and a lot of drivers have been written for systems using Intel silicon. Companies have requested that Intel publish or make available the software interface layer so that people who build silicon, particularly on the target side, can take advantage of the drivers and other software developed for Intel chips. Intel has decided to do just that.
Intel will also be rolling out tools for putting systems together. A product development kit will include host channel adapters, an InfiniBand switch and supporting software. A licensing program will allow silicon suppliers to license the interface logic to provide second-sourcing support.
In addition, supporting software is under development to provide channel adapter vendors with interfaces compatible with Intel products.
Already, other companies are getting into the InfiniBand game. Recently, Lucent Technologies Inc. announced plans to enter the InfiniBand arena with a four-channel serializer/deserializer. The 2.5-Gbit/s part uses a 0.13-micron process and features a flexible architecture that will allow it to also be used in Fibre Channel, Ethernet, and 1394 interconnects.
The design is also ambitious in integrating both analog and digital circuits running on a 1.5-volt power supply. As a result, a four-channel chip running at full speed dissipates less than 1 watt.
Pappas expects a strong market for InfiniBand chips. The new spec is arriving just in time for a major shift toward serial solutions.
"My job is to make initiatives succeed," he said. "With InfiniBand I want to create something that's new and open and creates a thriving industry around it. I feel really good about the initiatives that I've driven in the past. I think InfiniBand is going to be the biggest one . . . that we've ever done."
eetimes.com
Lots of other articles: Intel/InfiniBand/data centers...
techweb.com
By offering the flexibility to detach I/O from the CPU-memory complex, InfiniBand enables a new level of server density. Sharing peripherals across multiple servers reduces the physical space required to connect large volumes of servers with multiple storage and communications devices. With InfiniBand, data centers can be more compact.
Data centers can also be more reliable, thanks to the architecture's multiple levels of redundancy. Because nodes are connected over multiple links, which can be aggregated, systems continue to perform even if one link fails. Completely redundant fabrics can be configured for the highest level of reliability; such configurations continue to perform even if an entire fabric fails.
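As a rough illustration of why redundant links matter, the toy model below assumes links fail independently with a made-up 1 percent probability; a node is cut off only if all of its links fail.

    # Toy availability model for redundant links, assuming independent failures.
    # The per-link failure probability is a made-up illustrative number.
    p_link_fail = 0.01

    for links in (1, 2, 3):
        # The node is cut off only if every one of its links fails.
        p_cut_off = p_link_fail ** links
        print(f"{links} link(s): probability of losing connectivity = {p_cut_off:.6f}")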
http://cbs.marketwatch.com/archive/20000919/news/current/net_sense.htx