It was widely assumed that the networking industry would be forced to abandon Ethernet and move instead to a format called ATM.
The Case For ATM
Mary Petrosky
With all the hoopla surrounding gigabit Ethernet, it's easy to forget about the other high-speed networking technology--Asynchronous Transfer Mode (ATM). ATM has had a chance to grow up since products using the technology began shipping in earnest in 1994, and it's maturing nicely. In addition to speed, ATM has a variety of features that make it a smooth performer--and therefore a good choice for server connectivity as well as network backbones.
From a speed perspective, ATM is flexible: it has been defined to operate at 25, 155, and 622Mbps, and most recently at 2.4Gbps. Because it is a switching technology, ATM can operate in full-duplex mode--that is, ATM switches and network interface cards (NICs) can send and receive data simultaneously, effectively doubling the nominal rate. For example, a 622Mbps NIC has a theoretical throughput of 1.2Gbps.
Getting A Gig
To ATM's credit, there's a much smaller gap between theoretical and actual throughput than is currently the case with gigabit Ethernet. And that gap is narrower for UNIX systems than for Windows NT-based hosts.
This past spring, The Tolly Group, an independent testing and consulting firm, tested 622Mbps ATM NICs from Fore Systems. The tests confirmed bi-directional (or full-duplex) throughput in excess of 1Gbps between a UNIX client and a server, using extended (larger than the Ethernet maximum) packet sizes. However, these same NICs could only deliver about half that performance for Windows NT-based machines. Two pairs of machines were tested: a Sun Microsystems Ultra30 workstation connected to an Ultra Enterprise 450 server running Solaris 2.6, and a pair of Digital Equipment Corp. Alpha machines running Windows NT 4.0.
The testing also revealed that using large packet sizes increased application throughput and minimized the NIC's drain on the CPU. ATM supports the use of extended packet sizes through its LAN-emulation capability. (LAN-emulation software resides on end systems and "emulates" the services of traditional LAN technologies, such as Ethernet, allowing existing LAN applications to operate over ATM protocols.)
The Tolly Group tested the NICs' performance using three packet sizes: 1,516 bytes (the maximum size for Ethernet packets), 4,544 bytes, and 9,294 bytes. The Sun machines achieved bi-directional throughput of roughly 420Mbps with the smallest packets, 720Mbps with the medium-sized packets, and just over 1Gbps with the largest packet size. In contrast, the NT 4.0 machines achieved only 434, 565, and 575Mbps, respectively, for the three packet sizes. (Fore Systems supports packet sizes up to 18,000 bytes between ATM-attached hosts outfitted with its NICs. However, the optimal packet size seems to be roughly 9,000 bytes, since many applications, such as NFS, use 8,000-byte datagrams.)
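For a rough sense of where those packet-size gains come from, consider the back-of-the-envelope sketch below. It assumes AAL5 framing (an 8-byte trailer, padding to a 48-byte boundary, and 53-byte cells on the wire) and ignores LAN-emulation headers. The cell-level "tax" barely changes with packet size; what drops sharply is the number of packets per second the host has to process, which is consistent with the reduced CPU drain the tests observed.

    /* Back-of-the-envelope sketch: why larger packets help over ATM.
     * Assumes AAL5 framing (8-byte trailer, padded to 48-byte cell payloads,
     * 53-byte cells on the wire); LAN-emulation headers are ignored. */
    #include <stdio.h>

    int main(void)
    {
        const double line_rate_bps = 622e6;            /* OC-12 nominal rate     */
        const int packet_sizes[] = { 1516, 4544, 9294 };

        for (int i = 0; i < 3; i++) {
            int p          = packet_sizes[i];
            int cells      = (p + 8 + 47) / 48;        /* AAL5 trailer + padding */
            int wire_bytes = cells * 53;               /* 5-byte header per cell */
            double efficiency   = (double)p / wire_bytes;
            double pkts_per_sec = line_rate_bps / 8.0 / p;  /* to keep the pipe
                                                               full, ignoring
                                                               framing overhead */
            printf("%5d-byte packets: %3d cells, %.1f%% wire efficiency, "
                   "%.0f packets/s to fill the link\n",
                   p, cells, 100.0 * efficiency, pkts_per_sec);
        }
        return 0;
    }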
In these tests, ATM--whether connecting UNIX or NT boxes--comes up smelling like a rose compared to gigabit Ethernet. Vendors of gigabit Ethernet NICs have publicly confirmed their throughput is typically less than 500Mbps, with UNIX systems performing better than NT systems. For example, Alteon Networks claims its gigabit Ethernet NICs can push over 600Mbps of traffic on and off a Solaris-based machine when its proprietary large-packet formats are used. For standard-sized packets, Alteon says it achieves roughly 400Mbps of throughput on a Solaris machine. Another gigabit Ethernet vendor, Packet Engines, is claiming performance in the 400Mbps range for its NICs that support Solaris and Digital UNIX.
In contrast, the highest performance for NT 4.0 was just over 400Mbps using a Packet Engines NIC on a Digital Alpha with a 64-bit PCI bus and purely UDP traffic. Performance dropped to 237Mbps when TCP traffic was tested. This level of performance is typical for most first-generation gigabit Ethernet NICs on the market. Improvements in NIC drivers, and the release of NT 5.0, could change this picture by mid-1999.
Beyond Speed
Although ATM has a clear speed advantage over gigabit Ethernet (at least for the time being), what really sets ATM apart is its quality of service (QoS) and traffic-management features. In broad terms, ATM's QoS mechanisms control the bandwidth, latency, and accuracy of the traffic crossing the network. For example, a network manager could "split" a 622Mbps ATM server connection into three chunks of 200+Mbps, each serving a different application. This is possible because ATM is a broadband technology, and data travels across virtual channels.
For timing- or latency-sensitive applications such as videoconferencing, network managers can select a QoS that ensures that latency is kept to a minimum. Likewise, you can specify accuracy in terms of which packets should get thrown away (e-mail packets or video packets that have been delayed too long) and which ones should never be thrown away (banking transactions) when the network experiences congestion. To help limit congestion, ATM also has various traffic management features, including mechanisms for admitting traffic onto the network in a controlled fashion and backing off transmission when congestion occurs.
The main issue with QoS--whether the ATM variety or the mechanisms used in the Ethernet world--is that there has to be some way for desktops and servers to indicate to the network what type of QoS they need for a given application. Fortunately, industry players have developed application programming interfaces (APIs) that let software developers take advantage of a network's QoS features. For UNIX, that API is the X/Open XTI, and for Windows, it's the Windows Sockets 2.0 (WinSock 2.0) interface.
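As a concrete illustration of the Windows side, the sketch below shows roughly how an application asks WinSock 2.0 for QoS: it fills in a sending and a receiving FLOWSPEC and hands them to WSAConnect(). The parameter values here are arbitrary, error handling is elided, and whether the request is honored depends entirely on the underlying service provider--an ATM provider would map the flowspecs onto its own connection parameters.

    /* Minimal sketch of requesting QoS through WinSock 2.0.
     * Values are illustrative only; an ATM service provider would map
     * them onto CBR/VBR/UBR connection parameters. */
    #include <winsock2.h>
    #include <qos.h>
    #include <string.h>

    int request_video_qos(SOCKET s, const struct sockaddr *peer, int peerlen)
    {
        QOS qos;
        memset(&qos, 0, sizeof(qos));

        /* Roughly 25Mbps of guaranteed bandwidth with bounded latency. */
        qos.SendingFlowspec.TokenRate          = 3 * 1024 * 1024;   /* bytes/s */
        qos.SendingFlowspec.TokenBucketSize    = 64 * 1024;         /* bytes   */
        qos.SendingFlowspec.PeakBandwidth      = QOS_NOT_SPECIFIED;
        qos.SendingFlowspec.Latency            = 20000;             /* usec    */
        qos.SendingFlowspec.DelayVariation     = 5000;              /* usec    */
        qos.SendingFlowspec.ServiceType        = SERVICETYPE_GUARANTEED;
        qos.SendingFlowspec.MaxSduSize         = QOS_NOT_SPECIFIED;
        qos.SendingFlowspec.MinimumPolicedSize = QOS_NOT_SPECIFIED;

        qos.ReceivingFlowspec = qos.SendingFlowspec;

        /* The provider signals the QoS request at connection setup. */
        return WSAConnect(s, peer, peerlen, NULL, NULL, &qos, NULL);
    }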
Early this year, the ATM Forum finalized a specification that extends the XTI interface. The Native ATM Services Data Link Provider Interface (DLPI) 1.0 specification defines an interface for X/Open-compliant systems, giving UNIX developers access to native ATM features, including signaling, QoS, and connection access and control. In particular, developers now have the option to build kernel-level STREAMS modules on top of DLPI.
To date, only a few vendors have developed applications that use these APIs to exploit QoS capabilities. Many of these applications are video oriented, such as Oracle's Video Server 3.0 and PictureTel's LiveLAN 3.1 videoconferencing system. Although more applications will exploit these APIs over time, the reality is that many applications may never be rewritten to use QoS features. Fortunately, a few vendors, including Fore Systems, have developed techniques for identifying applications and have given customers management tools to assign QoS levels to specific applications.
In Fore's case, the company's ForeThought 5.0 software in its NICs can identify applications by their source and destination IP address, port (or socket) number, and protocol type. Rather than all traffic being handled in a "best-effort" fashion--getting what in ATM is called Unspecified Bit Rate (UBR) service--Fore allows customers to specify one of three types of QoS service. In addition to UBR service, Fore also supports the Constant Bit Rate (CBR) and Variable Bit Rate (VBR) service categories, allowing customers to assign a given service to a particular application. CBR is useful for applications that need a fixed amount of bandwidth, while VBR is intended for applications that have bursty traffic characteristics and can tolerate delay and delay variation.
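To make the idea concrete, here's a hypothetical sketch (not Fore's actual software or API) of what per-application classification boils down to: match a flow's addresses, port, and protocol against a policy table and pick an ATM service category, falling back to UBR when nothing matches.

    /* Hypothetical sketch of per-application traffic classification: match on
     * addresses, port, and protocol, then return an ATM service category.
     * This is an illustration of the idea, not Fore's ForeThought software. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef enum { SVC_UBR, SVC_CBR, SVC_VBR } atm_service_t;

    typedef struct {
        uint32_t      src_ip, dst_ip;   /* 0 acts as a wildcard               */
        uint16_t      port;             /* well-known port of the application */
        uint8_t       proto;            /* e.g. 6 = TCP, 17 = UDP             */
        atm_service_t service;          /* category to request for this flow  */
    } qos_rule_t;

    /* Example policy: videoconferencing call setup gets CBR, NFS gets VBR,
     * everything else falls through to best-effort UBR. */
    static const qos_rule_t rules[] = {
        { 0, 0, 1720, 6,  SVC_CBR },    /* H.323 call setup */
        { 0, 0, 2049, 17, SVC_VBR },    /* NFS over UDP     */
    };

    atm_service_t classify(uint32_t src, uint32_t dst, uint16_t port, uint8_t proto)
    {
        for (size_t i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
            const qos_rule_t *r = &rules[i];
            if ((r->src_ip == 0 || r->src_ip == src) &&
                (r->dst_ip == 0 || r->dst_ip == dst) &&
                r->port == port && r->proto == proto)
                return r->service;
        }
        return SVC_UBR;                 /* unspecified bit rate = best effort */
    }

    int main(void)
    {
        /* An NFS flow gets VBR (2); an unlisted web flow gets UBR (0). */
        printf("NFS flow -> %d\n", classify(0x0a000001, 0x0a000002, 2049, 17));
        printf("web flow -> %d\n", classify(0x0a000001, 0x0a000002, 80, 6));
        return 0;
    }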
In the packet world, a few vendors, including 3Com and Cisco (via its acquisition of Class Data Systems last spring), have developed software that sits at the desktop or server and can identify application traffic. 3Com supports only the Windows environment, while Cisco targets both UNIX and Windows systems. Once they've identified the application traffic, these vendors can give it some QoS treatment. However, the packet-based QoS schemes available today are functionally less rich than what ATM offers.
A Smooth Operator
Many organizations stick with Ethernet and a routed network because it's what they know. Proponents of gigabit Ethernet like to point out that it's simply a faster version of Ethernet, so there's no learning curve in deploying it. That may be true, but history has shown that Ethernet isn't necessarily the best choice for a network backbone, which is why so many network managers have installed Fiber Distributed Data Interface (FDDI) campus backbones. Unlike Ethernet, FDDI has fault-tolerance features. So does ATM.
For one thing, ATM allows you to build arbitrarily large meshed networks, so you can provide redundant connections wherever needed. In addition, ATM's routing scheme, known as Private Network-to-Network Interface (PNNI), offers such features as topology discovery, new switch and link discovery, load balancing, and link failure recovery. Try getting those capabilities out of Ethernet's spanning-tree algorithm, or most routing protocols.
PNNI also provides end-to-end QoS-based routing, factoring in an application's delay and bandwidth requirements in selecting routes through the network. No standard QoS-aware routing algorithm currently exists in the Ethernet market.
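The sketch below shows the basic idea in drastically simplified form. It is not PNNI itself, just an illustration of QoS-based path selection: links that cannot supply the requested bandwidth are ignored, and the lowest-delay path through what remains is chosen.

    /* Simplified sketch of QoS-based route selection, in the spirit of (but
     * not equivalent to) PNNI: prune links that lack the requested bandwidth,
     * then pick the remaining path with the lowest total delay. */
    #include <stdio.h>

    #define N    3                  /* switches in the toy topology */
    #define INF  1000000.0

    double delay_us[N][N];          /* per-link delay; 0 = no link  */
    double avail_mbps[N][N];        /* per-link available bandwidth */

    /* Dijkstra on delay, considering only links with enough spare bandwidth. */
    double best_delay(int src, int dst, double need_mbps)
    {
        double dist[N];
        int done[N] = { 0 };
        for (int i = 0; i < N; i++) dist[i] = INF;
        dist[src] = 0.0;

        for (int round = 0; round < N; round++) {
            int u = -1;
            for (int i = 0; i < N; i++)
                if (!done[i] && (u < 0 || dist[i] < dist[u])) u = i;
            if (u < 0 || dist[u] >= INF) break;
            done[u] = 1;
            for (int v = 0; v < N; v++)
                if (delay_us[u][v] > 0 && avail_mbps[u][v] >= need_mbps &&
                    dist[u] + delay_us[u][v] < dist[v])
                    dist[v] = dist[u] + delay_us[u][v];
        }
        return dist[dst];           /* INF means no path meets the request */
    }

    int main(void)
    {
        delay_us[0][1] = delay_us[1][0] = 10;  avail_mbps[0][1] = avail_mbps[1][0] = 100;
        delay_us[0][2] = delay_us[2][0] = 30;  avail_mbps[0][2] = avail_mbps[2][0] = 600;
        delay_us[2][1] = delay_us[1][2] = 30;  avail_mbps[2][1] = avail_mbps[1][2] = 600;

        /* The low-delay direct link lacks bandwidth for a 155Mbps request,
         * so the route goes around it (total delay 60 usec). */
        printf("best delay for 155 Mbps: %.0f us\n", best_delay(0, 1, 155.0));
        return 0;
    }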
While there is a learning curve in deploying ATM for the first time, network managers I've spoken to over the years have consistently said ATM networks are easy to run. I'm not aware of any comprehensive studies comparing the operational costs of router-based Ethernet backbone networks with ATM backbones, but I've run across a few compelling anecdotes about ATM's lower overhead. In one case reported by Ron Jeffries, principal of Jeffries Research, an ISP representative noted that it needed the same number of people to support seven routers as it needed to support 34 ATM switches.
Granted, router operation is more complex on the Internet than in most corporate environments. But you have to stop and ask yourself: in my organization, how many routers does one network staffer support? And in light of the endless growth of network traffic, which would I rather keep redesigning--a routed network or a switched one?
Certainly gigabit Ethernet is a welcome evolution of Ethernet, and the explosion of fast, affordable layer-3 devices is a boon to enterprises and ISPs alike. But let's face it--Ethernet is a 25-year-old technology, designed for a simpler age. It may be the right technology choice for many organizations, but it's not the only choice. ATM deserves fair consideration.
Mary Petrosky is an independent technology analyst based in San Mateo, CA. She can be reached by phone at 650-572-0560.