A feast of functionality
ATM switches now offer traffic management, buffering, legacy support and more. Here's your guide to sorting through the options.
By Kevin Tolly and Andrew Hacker Network World, 12/22/97
Now that ATM has been around for more than half a decade, it's finally taking shape. The ATM Forum, with help from the International Telecommunication Union (ITU), has ratified all the standards necessary to let you build scalable ATM nets that support legacy LAN traffic. In total, the forum and ITU have completed nearly 100 ATM standards and have another 30 in the works.
You now need to determine what standards and features each vendor supports and how they fit into a functional ATM network. To that end, The Tolly Group over the past several months has identified and tested more than 50 key functions and specifications that represent a broad spectrum of ATM functionality.
Switches from 3Com Corp., Cisco Systems, Inc., Digital Equipment Corp., FORE Systems, Inc., Hitachi Computer Products, Inc., IBM, Madge Networks, Inc., Olicom, Inc. and Xylan Corp. were subjected to days of functional verification as well as performance benchmark tests. Detailed results of all tests are found on The Tolly Group's Web site. The study is ongoing, and new results are added frequently.
The purpose of this article is not to simply detail who beat whom and by how much - you can go to the Web site for that info - but rather to pass along some of the more important lessons learned from our ATM tests. Even though ATM is, by its nature, multipurpose and able to span LANs and WANs while supporting voice, data and video, there are certain aspects you'll need to master no matter what your intended use. These aspects include signaling, traffic management mechanisms, buffering schemes, legacy LAN support, and bridging and routing features - the items we will explore here.
The cell foundation
The most important strength of ATM lies in its ability to operate entirely at its lowest layer, the cell. Every higher-layer function, from Layer 3 routing to LAN integration, can be handed off to a stream of cells that runs at the speed of the switch fabric - wire speed in all cases.
This distinction is most obvious when you look at routing mechanisms. In traditional networks, a router must route each individual packet. In an ATM net, software with routing intelligence, which can reside on a switch or an external routing device, is used to establish a route from point A to point B. Thereafter all data is packaged in cells and transferred directly to the destination at wire speed.
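To picture the difference, consider that once signaling establishes a circuit, each switch along the path consults nothing more than a small connection table keyed by the cell header's VPI/VCI values. The Python sketch below is purely illustrative - the table entries are invented and no vendor's forwarding code looks like this - but it captures why per-cell forwarding is so much cheaper than per-packet routing.

    # Illustrative sketch: per-cell forwarding is a table lookup, not a
    # routing decision. The connection table is populated once, at SVC
    # setup time, by the signaling/routing software; entries are invented.

    # (incoming port, VPI, VCI) -> (outgoing port, new VPI, new VCI)
    connection_table = {
        (1, 0, 100): (4, 0, 200),
        (2, 1, 50):  (3, 1, 75),
    }

    def forward_cell(in_port, vpi, vci):
        """Look up the circuit and return the rewritten cell header."""
        entry = connection_table.get((in_port, vpi, vci))
        if entry is None:
            return None          # no circuit established: drop the cell
        out_port, new_vpi, new_vci = entry
        return out_port, new_vpi, new_vci

    print(forward_cell(1, 0, 100))   # -> (4, 0, 200)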
To find a switch that delivers the best performance, you need to pay attention to the switch fabric and the types of interfaces. The particular switch architecture is not what matters; what matters is that the fabric is nonblocking - that all cells can be switched among all ports at line rate without cell loss.
Prior to this year, many vendors that claimed to have fully nonblocking architectures, in fact, did not, as uncovered by previous ATM tests conducted by The Tolly Group. However, all of the switches in this current study have fully nonblocking fabrics.
ATM services
Signaling, the process of setting up a circuit, is the most fundamental process on an ATM switch. The simplest form of signaling is User-to-Network Interface (UNI), which allows an endstation to communicate with a switch.
In addition to UNI, scalable multiswitch ATM networks require support for a Network-to-Network Interface (NNI). ATM's dynamic routing "protocol" is Private NNI (P-NNI) Version 1.0. P-NNI is based on a structure of hierarchical peers and is analogous to current IP routing protocols such as Open Shortest Path First. P-NNI allows switches to route switched virtual circuits (SVC), perform best path calculations and reroute around failed links.
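To make the routing analogy concrete, the sketch below runs a standard shortest-path calculation over a small, invented switch topology - the same family of computation P-NNI performs when it picks a route for an SVC, though real P-NNI also weighs link state such as available bandwidth and administrative cost. It is a minimal illustration, not a P-NNI implementation.

    import heapq

    # Invented topology: switch -> {neighbor: link cost}
    topology = {
        "SwA": {"SwB": 1, "SwC": 4},
        "SwB": {"SwA": 1, "SwC": 1, "SwD": 5},
        "SwC": {"SwA": 4, "SwB": 1, "SwD": 1},
        "SwD": {"SwB": 5, "SwC": 1},
    }

    def best_path(src, dst):
        """Dijkstra-style best path, the kind of calculation P-NNI and OSPF rely on."""
        queue = [(0, src, [src])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, link_cost in topology[node].items():
                if neighbor not in visited:
                    heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
        return None

    print(best_path("SwA", "SwD"))   # -> (3, ['SwA', 'SwB', 'SwC', 'SwD'])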
Don't even consider buying a switch that doesn't support UNI (Version 3.0, 3.1 or the latest, 4.0) and P-NNI signaling. All the switches we tested had some form of each. Regardless of what version of UNI you use, as long as you've got switches that utilize P-NNI 1.0 (or even a prestandard version of P-NNI when using switches from a single vendor), implementing signaling is generally as simple as connecting switches to endstations and other switches. Avoid having switches and endstations with dissimilar versions of UNI and P-NNI, because you won't be able to establish a connection. Some of the more sophisticated functions of signaling, such as quality of service and multilevel P-NNI routing, may require more configuration, but many vendors will supply support engineers when implementing larger networks.
This kind of complexity exists in any large router-based network, but contrary to popular belief, ATM isn't inherently more complex in this regard. In fact, in some respects, it's simpler. For example, the 20-byte ATM address includes the equivalent of a media access control (MAC) address and an IP address. At a cursory glance, the large address space may be intimidating, but once you get into it, you'll see it makes sense.
Nonetheless, vendors are looking to provide ease-of-use features. For example, 3Com, Digital, FORE, Madge and Olicom are starting to deliver features such as autodetect and UNI 3.X conversion. Autodetect allows the switch to determine what version of UNI an endstation is running. UNI 3.X conversion allows the switch to connect UNI 3.1 stations to UNI 3.0 stations.
UNI and P-NNI are used to set up SVCs between two or more stations and notify each station which virtual circuit to use. Stations then transmit cells at line speed over that virtual circuit. The bulk of signaling activity occurs at SVC setup and release, with only negligible polling in between to ensure an SVC is still active.
That means the most important signaling performance metric is the number of calls a switch can set up and tear down per second. The Tolly Group found the switches fell into three basic categories: fewer than 30 calls per second, 88 to 126, and more than 300 (FORE). Generally, more is better, especially in large, complex networks.
Advanced traffic management
ATM was designed to accommodate all types of traffic using four basic classes of service: constant bit rate (CBR) for predictable traffic such as uncompressed voice, variable bit rate (VBR) for unpredictable traffic such as compressed video, available bit rate (ABR) for bursty LAN traffic, and unspecified bit rate (UBR) for all other traffic. An ATM switch allocates its resources by enforcing these class-of-service policies, or "contracts." If traffic streams violate predefined policies or contracts, cells will either be discarded immediately or marked for eventual discard.
If you're designing a campus ATM net, don't be overly concerned about CBR and VBR, unless you need to support time-sensitive traffic. When designing wide-area nets, where bandwidth is more precious and a wide range of traffic types are the norm, CBR and VBR are extremely important.
Relatively few of the switches tested - those from Cisco, FORE and Hitachi - supported VBR. However, more than half the vendors supported CBR, including FORE, Digital, Hitachi and Xylan. CBR is less complicated than VBR because it deals only with managing peak cell rates. By comparison, VBR enforces peak cell rates, average cell rates and cell delay, and even limits excessive cell bursts.
In our tests, CBR and VBR functioned flawlessly on the switches that offered these classes. The switches discarded any cells that were above the threshold defined by our engineers.
Given that ABR was only recently ratified and isn't widely supported, it wasn't tested for this study, although it will be included in future efforts. Neither was UBR. UBR is essentially a class that traffic fits into if it doesn't belong in some other class and offers no guarantees that those cells will be switched. Therefore, UBR streams are not policed; if a switch needs to discard cells from a UBR stream, it simply does so, leaving little to be learned from testing. Some vendors may eventually provide special schemes to help preserve UBR traffic streams.
Buffering
Buffering serves as a last resort to save cells if traffic management mechanisms fail to keep incoming traffic within the limits of the switch's resources. Considering cells are forwarded over an OC-3 link at a rate of 353,207 per second, even the largest cell buffers will last only fractions of a second before overflowing. You need to make sure sufficient switch resources and adequate service contracts are provisioned to ensure that cells do not reach the warning track that cell buffering provides.
There are several trade-offs when dealing with buffer size. Although large buffers can save more cells - and thus improve throughput - in a congestion situation, they require relatively expensive RAM and can increase cell delay because cells must spend more time in first in, first out queues.
Vendors use a variety of buffering methods and buffer sizes. Switches from Digital, FORE, Hitachi, Olicom and Xylan have pools of buffers on each module that are dynamically allocated to ports as needed. Other switches, including those from Cisco, use a combination of dynamic and fixed buffers. Dynamic buffers offer greater flexibility in distributing needed buffers to the most oversubscribed ports.
Switches from FORE, Cisco and Hitachi offer two sophisticated buffer management schemes that can improve buffer efficiency, and in some cases, improve throughput in congestion situations.
The first scheme is intelligent packet discard, which takes into account where a cell comes from before discarding it. If one cell from a LAN packet is discarded, the whole packet likely will have to be retransmitted. Therefore, it's better to discard three cells that all come from the same LAN packet than cells from three different packets. Intelligent packet discard takes such matters into account.
The other scheme is per-VC queuing, which takes a similar tack at the stream level. When shipping cells out of queues, it uses a fairness algorithm to ensure, for example, that more cells are dropped from an OC-3 bit stream than from a competing 10M bit/sec stream. Our tests verified that both schemes effectively managed traffic flows as intended.
The upshot is you should look for switches that provide flexible dynamic buffering and a number of buffers sufficient to support the type of traffic you will run. Keep in mind that the more bursty or variable your traffic is, the more buffering you'll want. Any additional buffer management schemes are a definite plus.
Legacy LAN integration
Given ATM's role as a backbone technology for legacy LANs, ATM switches have to support interfaces such as Ethernet, token ring and FDDI. These interfaces can be implemented in modules that plug into the switch or in external devices, such as a LAN switch or router that connects to the switch via an ATM uplink. In either case, LAN Emulation (LANE) software is required to allow communication between native ATM devices and legacy LANs.
There are three basic ATM services that offer backward compatibility for LANs: Classical IP (RFC 1577), LANE 1.0 and Multi-Protocol over ATM (MPOA), a superset of LANE that adds routing support. Because it is not yet widely implemented, MPOA was not tested for this study but will be in the future.
Classical IP and LANE are built around the idea of clients requesting connectivity from logical servers that hold databases of client ATM and LAN address mapping information. A client receives an ATM address for each requested LAN address and uses signaling to make a direct connection to each address. The difference between the two is that Classical IP works only with IP, whereas LANE supports multiple LAN protocols.
These integration services are another entry-level function for an ATM switch. Unless your switch runs in a pure ATM environment, such as in a high-powered WAN backbone, make sure it has LANE.
Consider Classical IP only if your network is entirely IP and the switch you favor does not support LANE. In the tests, the vendors that supported Classical IP were FORE, Olicom, Xylan and Hitachi, although each also supported LANE. There is no real advantage to Classical IP as compared to LANE, although there is one disadvantage - it doesn't offer native ATM routing between IP subnetworks.
LANE 1.0 doesn't offer routing either, although it does provide an upgrade path to MPOA, which supports LAN routing directly over ATM. MPOA incorporates LANE 2.0, which provides a bridging architecture along with functions such as redundancy and distributed services. That means you can have multiple LANE servers backing each other up and you can configure multiple servers to share the load in large networks. So make sure your ATM vendor offers MPOA or plans to in the near future.
LANE provides for encapsulation of 802.3 (Ethernet) and 802.5 (token ring) frames, but choosing which to employ is not a major concern. About the only reason to go with token ring is if you plan to maintain a large source route bridged network, which can't be supported via Ethernet encapsulation. Otherwise, 802.3 LANE can support data from any LAN interface, including FDDI and token ring. And many vendors, including FORE, IBM and Xylan, support 802.3 LANE frame sizes greater than 1,518 bytes, allowing larger token-ring frames to be easily transported over ATM.
In terms of LANE performance, the story is similar to that with SVC setup - it's only an issue at the beginning and end of a session, when PCs are activated and deactivated. LANE services are used to resolve a LAN MAC address to an ATM address. From then on, the endstation directly contacts its destination over a pure ATM link and can transfer data at line rate.
So the issue really comes down to capacity, not throughput: How many clients can a set of LANE services accommodate, and how long do you want to wait to log on in the morning or to reconnect after a failure? Make sure you follow your vendor's recommendations on how many users to assign to a particular emulated LAN (ELAN) or set of LANE services. With LANE 2.0, the issue should be alleviated with the help of redundant and distributed LANE services.
Bridging/routing
Keep in mind that LANE is only a bridging protocol. Like a virtual LAN implemented across multiple Ethernet switches, LANE essentially creates a bridged environment. To communicate between ELANs requires some form of routing.
As described above, MPOA will provide this routing mechanism. But as an interim solution, many vendors have utilized "one-arm" routers. The devices are essentially legacy LAN routers with an ATM interface. All data in this environment must pass through the one-arm device.
Make sure vendors that do not presently provide standard MPOA or MPOA-like functionality at least provide a one-arm router. In our tests, we successfully verified that the one-arm routing mechanisms worked as advertised in most switches or switch/router combinations.
Of course, using "pure" ATM bridging and routing mechanisms - UNI and P-NNI, respectively - obviates the need for any other kind of routing. But the reverse is not true; MPOA and one-arm routers essentially act as feeders to the pure ATM routing mechanisms, which ultimately handle all traffic.
All the standards needed for fully functional ATM networks are in place - we've discussed only a sampling of the most important ones here - and our testing shows the majority of switch vendors offer corresponding implementations.
Vendors also have heard the knocks on ATM in terms of its complexity and are putting a more friendly face on their products, offering features such as default values for ATM addresses and LANE services. That wasn't the case a few years ago. Our testing didn't dispel the notion that there is still a learning curve involved in understanding the technologies comprising an ATM network, but the inherent wire-speed bridging and routing ATM provides may be well worth it.
SIDEBAR
In an effort to show product differences and provide you with the comprehensive data required to make purchasing decisions, The Tolly Group has applied its Industry Study model to ATM switches.
The Tolly Group evaluated key switch aspects: performance; ATM and LAN interface support; traffic management, including buffering issues and advanced features; signaling and ATM routing; legacy LAN service (LAN Emulation, Classical IP) support; and several management features, including redundancy.
Invitations to participate in the testing were sent to all ATM switch vendors. The participating vendors funded this project.
The complete set of test results is available free worldwide and will be updated during 1998 as additional products are evaluated.