Review: Gigabit or ATM for the backbone?
By Edwin E. Mier
11/17/97
Some early poking and prodding of Gigabit Ethernet shows that the new high-speed network technology, while still not in prime fighting condition, already holds up pretty well against ATM. But looking forward, it seems Gigabit Ethernet's state and stature can only get better. And that's likely to come at ATM's expense.

These are the key results of our lab test pitting an ATM switch from FORE Systems, Inc. against a Gigabit Ethernet switch from Foundry Networks, Inc., tested with network interface cards (NICs) from Packet Engines, Inc. The purpose of the test was not so much to measure speeds and feeds as to gauge where each technology best fits in an enterprise net.

As a backbone network technology, ATM and Gigabit Ethernet each exhibit some clear advantages and drawbacks. For now, ATM still comes out ahead in a majority of the criteria, including the scope of vendor and product support, price, flexibility in transmission media and distances supported, ability to efficiently handle different traffic types and qualities of service, and multivendor interoperability.

But based on our tests, Gigabit Ethernet already has taken the lead over ATM in two areas. First, Gigabit Ethernet unquestionably is easier to deploy. Second, it generally is more backward-compatible with today's applications, traffic types and software interfaces. That's not surprising, because Gigabit Ethernet does pretty much everything Ethernet does, except it's a whole lot faster.

Other areas are toss-ups. There's no compelling reason to conclude that either technology handles, or will handle, network-layer switching or virtual LANs better than the other. Some may believe ATM's connection-oriented nature, featuring dynamic switched virtual circuits, is better suited to IP switching. Indeed, this may end up one of ATM's strong suits. But you also could argue that Gigabit Ethernet, with its direct descent from Ethernet and Fast Ethernet, has a leg up in supporting VLANs that concurrently span all three generations. In either case, it's too early to call.

While ATM still has the edge in most areas of our comparison, its lead is likely to narrow over the next six months. It's possible that several of ATM's advantages - scope of products, per-port price and multivendor interoperability - could be lost to Gigabit Ethernet within the next year or so. But ATM likely will retain its edge in transmission media and distances supported, at least for a while. Don't expect Gigabit Ethernet links to run for more than a couple of hundred meters or to run over Category 5 cabling any time soon.

If you accept full duplex as your main Gigabit Ethernet link mode and multimode fiber as your main transport medium, you can build and run a client/server network over Gigabit Ethernet today. You can even get some vendors' Gigabit Ethernet products to work together, as we did with our Packet Engines NICs and Foundry Networks TurboIron switch.

The most common transmission mode of Gigabit Ethernet - employing conventional 62.5/125-micron multimode fiber with a relatively short-wavelength light source - is called 1000Base-SX. Other modes of optical transmission now on the drawing board in Gigabit Ethernet standards circles will eventually extend the link distance beyond 500 meters using different, less common types of fiber and longer-wavelength laser light sources. But most of today's Gigabit Ethernet products can span just 260 meters. That's probably enough for LANs contained within a single building, but it could be a problem for campus LANs.
Is Gigabit needed?

For connecting servers to the backbone, we did some benchmarking of Windows NT 4.0 servers to see how 155M bit/sec ATM and 1,000M bit/sec Gigabit Ethernet links compared. Our conclusion: It is extremely unlikely that your NT servers are choking on their 'meager' full-duplex, 100Base-T network connections today. We had to get the absolute latest and most powerful Pentium IIs and cook up some superspeedy, memory-to-memory data transfers to get more than 100M bit/sec moving between an NT server and an NT Workstation client over the network.

But that's just today. If your next major NT server hardware upgrade is a Pentium II, 266 MHz or faster, you may really benefit from a network pipeline greater than 100M bit/sec. That's because the newer 64-bit PCI buses just now emerging with the Pentium II, in combination with the higher speed processors, can move more than 100M bit/sec over the network link. Our testing found that the 32-bit PCI bus of most existing Pentium-based servers generally cannot.

We ran a series of data transfers between NT servers and workstations. We primarily used a public domain software tool, called T/TCP, that runs in memory and generates the data that is sent. There's no disk I/O or movement of actual files involved, because that would slow throughput considerably. We also conducted File Transfer Protocol (FTP) transfers of real files, but these, too, were memory to memory, without disk I/O. We changed the network connectivity between these platforms to test ATM, 100Base-T and Gigabit Ethernet backbones.

Only with the artificial T/TCP traffic generator tool, and using some of the most powerful Pentium stations made today, could we move data between NT stations at a rate greater than an ATM 155M bit/sec link could handle. In this best-case setup, we achieved a maximum data throughput rate of 168M bit/sec over Packet Engines' Gigabit Ethernet NICs.

Using an FTP memory-to-memory transfer, we were able to exchange data at 121M bit/sec between NT servers and workstations over Gigabit Ethernet. That is technically more than 100Base-T can handle, but keep in mind that we used memory-to-memory FTP; in real-world cases, FTP involves disk reads and writes, which would constrain the total transfer time. When we ran the same FTP exchange over a 100Base-T network (via 3Com Corp. 100Base-T NICs and a 3Com 100Base-T switch) for comparison purposes, we got only 56M bit/sec throughput, less than half of what we achieved over Gigabit Ethernet.

In short, we found that the network I/O of today's NT servers can, under certain contrived conditions, slightly exceed what a full-duplex 100M bit/sec link can handle. In most real-world traffic environments, however, you're not likely to need more than a full-duplex, 100M bit/sec pipe to connect today's Pentium servers with 32-bit PCI buses to your backbone network.
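For readers who want a feel for what such a memory-to-memory test looks like, here is a bare-bones sketch in Python. It is a modern stand-in, not the tool we actually ran, and the port, buffer size and transfer size are arbitrary placeholders. Like our tests, it keeps all of its data in RAM, so the NIC, the protocol stack and the network path - not the disks - set the pace.

# Minimal memory-to-memory throughput sketch (not the tool used in our tests).
# The port, buffer size and transfer size below are arbitrary assumptions.
# All data is generated in RAM; no disk I/O is involved.
import socket
import time

PORT = 5001                      # arbitrary test port
CHUNK = 64 * 1024                # 64K-byte buffer, generated in memory
TOTAL_BYTES = 256 * 1024 * 1024  # 256M bytes per run

def receiver():
    # Run on one station: accept a single connection and sink the data.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.time() - start
    print(f"received {received} bytes at {received * 8 / elapsed / 1e6:.1f}M bit/sec")

def sender(host):
    # Run on the other station: push TOTAL_BYTES of in-memory data.
    payload = b"\x00" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.connect((host, PORT))
        sent = 0
        start = time.time()
        while sent < TOTAL_BYTES:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.time() - start
    print(f"sent {sent} bytes at {sent * 8 / elapsed / 1e6:.1f}M bit/sec")

# Usage: call receiver() on one station, then sender("receiver-host") on the other.

The receiver's figure is the one to trust; the sender finishes its loop while the last buffers may still be in flight, so its number runs slightly high.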
Backward-compatible?

An advantage ATM has over Gigabit Ethernet is its ability to operate with a maximum packet size, or maximum transmission unit (MTU), greater than the 1,518 bytes of the original Ethernet, a limit Gigabit Ethernet also is largely bound by. By using an MTU of 9,000 bytes, we found you could get a 10% to 15% increase in file transfer throughput with ATM. That advantage is lost, however, if your ATM backbone is supporting an emulated LAN that includes any Ethernet or Fast Ethernet devices or segments. In that case, all the devices participating in the emulated LAN, including those connected directly to ATM, are constrained to the same 1,518-byte MTU.

But we found that Gigabit Ethernet has an inherent advantage over ATM, too. The amount of traffic on your network that's attributable to protocol overhead can be significantly less with Gigabit Ethernet (and with Ethernet and 100Base-T, for that matter) than with ATM.

How much less? Here's where Gigabit Ethernet's structural similarity with Ethernet comes into play. If network and protocol overhead accounts for 8% to 10% of all the bits on your Fast Ethernet network, it will be virtually the same on Gigabit Ethernet. We verified this by comparing the size of files transferred over Ethernet, Fast Ethernet and Gigabit Ethernet with the total number of bits that appeared on the network in conjunction with each transfer. On ATM, however, the amount of overhead could be twice as much - accounting for, say, 15% to 20% of an appreciably greater total number of network bits. This was the case for files we transferred from an NT server on a 155M bit/sec ATM link to a Windows 95 client on a 100Base-T segment. The overhead in this case was in part attributable to the inherent inefficiency of transporting packet data over an ATM Ethernet-emulated LAN via LAN Emulation 1.0.
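Where do figures like these come from? The quick arithmetic below, sketched in Python, compares the framing overhead of carrying the same TCP/IP payload in Ethernet frames and in ATM cells. The byte counts are the standard header sizes; the resulting percentages are illustrative only, and they sit below the figures we measured, which also reflect acknowledgment traffic, SONET framing and LANE control messages.

# Back-of-the-envelope framing overhead for the same TCP/IP payload carried
# in Ethernet frames versus ATM cells. Illustrative only: ACK traffic, SONET
# framing and LANE control messages are ignored, and all push real overhead higher.
import math

IP_TCP = 40              # IPv4 + TCP headers, no options
ETH_HDR_FCS = 18         # MAC addresses + type field + frame check sequence
ETH_WIRE_EXTRA = 20      # preamble/SFD + minimum inter-frame gap
ATM_CELL = 53            # 5-byte cell header + 48-byte payload
AAL5_TRAILER = 8

def ethernet_overhead(ip_mtu):
    # Fraction of on-the-wire bytes that is not application data, per full frame.
    user = ip_mtu - IP_TCP
    wire = ip_mtu + ETH_HDR_FCS + ETH_WIRE_EXTRA
    return 1 - user / wire

def atm_lane_overhead(ip_mtu):
    # The same frame over LAN Emulation 1.0: a 2-byte LEC-ID plus the Ethernet
    # frame (minus its FCS) goes into AAL5 and is padded out to whole cells.
    user = ip_mtu - IP_TCP
    aal5_payload = 2 + 14 + ip_mtu + AAL5_TRAILER
    cells = math.ceil(aal5_payload / 48)
    return 1 - user / (cells * ATM_CELL)

def atm_native_overhead(ip_mtu):
    # Classical IP over ATM with 8-byte LLC/SNAP encapsulation, the case
    # where a large MTU (say, 9,000 bytes) is actually usable.
    user = ip_mtu - IP_TCP
    aal5_payload = 8 + ip_mtu + AAL5_TRAILER
    cells = math.ceil(aal5_payload / 48)
    return 1 - user / (cells * ATM_CELL)

print(f"Ethernet framing, 1,500-byte IP MTU:  ~{ethernet_overhead(1500):.1%} overhead")
print(f"ATM LAN Emulation, 1,500-byte IP MTU: ~{atm_lane_overhead(1500):.1%} overhead")
print(f"Native ATM, 9,000-byte IP MTU:        ~{atm_native_overhead(9000):.1%} overhead")

Run as written, the sketch puts Ethernet framing at roughly 5% overhead and ATM LAN Emulation at roughly 14% for a 1,500-byte IP MTU, with native ATM at a 9,000-byte MTU near 10%. Much of the larger MTU's additional measured benefit likely comes from the end stations having far fewer packets to process.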
What lessons did we learn from this preliminary side-by-side tire-kicking of ATM and Gigabit Ethernet? There are several, including:

- For connecting Windows NT servers to your backbone, you're unlikely to require more than 100M bit/sec of connection bandwidth for most real-world traffic. We could achieve throughputs exceeding 100M bit/sec only by using memory-to-memory data transfers and top-of-the-line Pentium II/266-MHz NT platforms with 64-bit PCI buses and Gigabit Ethernet NICs. And the 168M bit/sec we generated doesn't even begin to tax the rated capacity of 1,000M bit/sec Gigabit Ethernet.

- There can be appreciable throughput variability by virtue of different vendors' server NICs, regardless of the network technology.

- ATM can involve more overhead than Fast Ethernet or Gigabit Ethernet, meaning more bits are carried over the ATM network to communicate the same amount of user data.

But what about switch-to-switch performance of the different technologies, as opposed to server-to-backbone connections? How do Unix and non-Intel Corp. platforms (such as Windows NT on Digital Equipment Corp. Alpha servers) compare? How about mixing ATM and Gigabit Ethernet in the same network? All good questions. Stay tuned.

Mier is president of Mier Communications, Inc., a network consultancy and product test center based in Princeton, N.J. He can be reached at ed@mier.com or (609) 275-7311.

SIDEBAR

There's more to squeezing the maximum performance from your network than just data rate. For example, the design and structure of the network interface cards (NICs) also affect total throughput.

For most of our ATM testing, we used FORE Systems, Inc.'s PCI NICs and its ASX-1000 switch. When we tried another vendor's ATM-155 NICs, with everything else remaining unchanged, we saw throughput performance differences of as much as 25%.