April 21, 1999, Issue: 2806
Section: Features -- ATM Switches

ATM! The Technology That Would Not Die!
Christine Zimmerman
Listen to the analysts and it sounds as if ATM is breathing its last breath: "ATM once captured our imaginations, but it'll soon be legacy technology," says Michelle McLean, senior analyst at the Meta Group. What's killing it? Some say its own complexity. Others suggest price. And everyone seems to agree that gigabit Ethernet is driving a stake through the heart of the high-speed switching technology.
Don't look now, but it appears as if ATM is rising from the grave. And it's got a grip on campus backbones, with 700,000 ATM ports shipped in 1998 and over 2 million projected by 2001, adding up to a $2 billion market, according to the Dell'Oro Group (Portola Valley, Calif.). Those are pretty good numbers for a technology that's been dismissed as dead.
Actually, there are several good reasons that ATM won't be giving up the ghost any time soon. Net managers who know the technology say they value its big bandwidth, reliability, and ability to work over the WAN. But above all, they point to its proven quality of service (QOS).
One true believer is Keith Price, system director of network services for Samaritan Health Systems (Phoenix). He's using ATM backbone switches to tie together a 3,000-node campus network linking three hospitals, a corporate center, and several clinics. And he explains that ATM QOS consolidates voice, video, and data onto a single network, helping him trim costs and IT staff. "Gigabit Ethernet may provide the same amount of bandwidth," he says, "but not the same flexibility."
With endorsements like that, it's no wonder ATM won't die, or that all the major vendors offer campus backbone switches. Still, net managers who want similar benefits will have to dig into the switch specifics. Start by scoping out the number and type of ports for ATM and other technologies. Next, see what kind of reliability the vendor has built in: Does the box boast a redundant backplane, power supply, CPU, and cooling fan? Check out the methods it uses to control traffic flows and the spec it uses to hook legacy LANs to ATM. Switches now shipping work with Lane 1.0 (LAN emulation 1.0), Lane 2.0, or both, but only those that handle Lane 2.0 furnish full ATM QOS. And since virtual broadcast domains, or ELANs (emulated LANs), let net managers add a measure of security, find out how many the switch supports. Get a fix on performance as well. How big is the switch's backplane, and is it truly nonblocking? Are its buffers deep enough to keep cells from being dumped when ports get busy? And what scheme does it use to distribute them once a port opens up? Find out how the device is managed, whether via RMON, SNMP, telnet, or some proprietary scheme. And finally, consider both the total cost of the switch and the price per port.
Virtual Values
Before delving into the switch details, those who have ignored everything ATM for the past decade or so may want a quick update. The transport can switch all forms of traffic (voice, video, or data) at high speeds while maximizing bandwidth usage. It runs at 25 Mbit/s, OC3 (155 Mbit/s), OC12 (622 Mbit/s), or OC48 (2.488 Gbit/s) over UTP (unshielded twisted pair) copper or singlemode or multimode fiber. ATM also lets network managers set up SVCs (switched virtual circuits), which are established and torn down on demand, or PVCs (permanent virtual circuits) with fixed endpoints; a single virtual path may carry many of either. Switches from different vendors can work together, thanks to a standard called PNNI (private network-to-network interface).
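To put those line rates in perspective, here's a quick bit of arithmetic showing how much of each pipe is left for user data once SONET framing and the 5-byte header on every 53-byte cell are subtracted. The Python sketch below is only back-of-the-envelope math using nominal figures (it reuses the OC3 framing fraction for the faster rates as an approximation); it isn't drawn from any vendor's spec sheet.

```python
# Back-of-the-envelope payload math for ATM line rates (nominal figures, not vendor specs).
LINE_RATES_MBPS = {"OC3": 155.52, "OC12": 622.08, "OC48": 2488.32}
SONET_PAYLOAD_FRACTION = 149.76 / 155.52   # rough share of an OC3 left after SONET framing
CELL_PAYLOAD_FRACTION = 48 / 53            # 48 payload bytes in every 53-byte ATM cell

for name, rate in LINE_RATES_MBPS.items():
    usable = rate * SONET_PAYLOAD_FRACTION * CELL_PAYLOAD_FRACTION
    print(f"{name}: {rate:.2f} Mbit/s on the wire, roughly {usable:.0f} Mbit/s of cell payload")
```

Run against the OC3 rate, that works out to roughly 135 Mbit/s of cell payload before any AAL overhead.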
ATM in the campus offers three main advantages: It scales easily, lets net managers provision fiber over kilometer-long distances, and furnishes proven QOS. "The best thing about ATM for us is that it furnishes the correct amount of bandwidth for telemedicine applications that are beginning to traverse our LAN and WAN networks," says Price.
Net managers who want to experience these benefits for themselves have plenty of products to choose from: 22 campus backbone switches are on the market from 12 vendors. The newest offering in this crowded market is the recently released module for the Cajun M770 switch from Lucent Technologies Inc. (Murray Hill, N.J.). It joins gear from big players like Cisco Systems Inc. (San Jose, Calif.) and smaller vendors like Xylan Corp. (Calabasas, Calif.). All consist of cards for ATM ports and other LAN interfaces that slot into a chassis.
But there are important differences to look into, starting with the number of ports and port speeds. The number of ports ranges from 15, on the Smartswitch 2500 from Cabletron Systems Inc. (Rochester, N.H.), to 256, on the Omniswitch from Xylan. A wide variety of speeds are also available. All the boxes furnish OC3 and OC12, but Cabletron's Smartswitch 9500, Cisco's Catalyst 8540, and the ASX-1200 and ASX-4000 from Fore Systems Inc. (Warrendale, Pa.) deliver OC48 as well. Ethernet speeds are also standard on several switches. Cisco's Catalyst 8540 furnishes 100-Mbit/s ports. The Corebuilder 7000HD from 3Com Corp. (Santa Clara, Calif.), IBM's switch, and boxes from Northern Telecom Ltd. (Nortel, Mississauga, Ontario) pack 10-Mbit/s and 100-Mbit/s ports. And the three Cisco models, the 3Com 7000HD, and Xylan's Omniswitch offer gigabit Ethernet ports.
Ready for Anything
Next up are redundant features that make a switch more reliable on the campus backbone. Redundant power supplies and backplanes mean it shouldn't become a single point of failure, and redundant switch links between ATM boxes can reroute data around a failed line. All ATM switches offer redundant backplanes except Cabletron's Smartswitch 2500, Cisco's Catalyst 8510 and Lightstream 1010, the Collage 750 from Madge Networks Inc. (Setton, U.K.), the CS 3000 from Newbridge Networks Inc. (Kanata, Ontario), and the Crossfire 9100 and 9200 from Olicom Inc. (Richardson, Texas). All have redundant power supplies except the Collage 750 and the Crossfire 9100 and 9200. The Collage 750, the Crossfire 9100 and 9200, Lucent's Cajun M770, and 3Com's Corebuilder 7000HD and 9000 can furnish redundant switch links. Redundant cooling fans are standard on products from Cabletron, Fore, IBM, Lucent, Newbridge, and 3Com.
For even more backup, switches from IBM, Lucent, Madge, NEC America Inc. (Irving, Texas), Olicom, and 3Com pack redundant LAN emulation servers (LESs) and broadcast and unknown servers (BUSs); the same six vendors also offer redundant LAN emulation configuration servers (LECSs). Basically, the BUS, LES, and LECS let the LECs (Lane clients) on emulated LANs locate one another. They are specified under both Lane 1.0 and Lane 2.0 (see "Rolling Out ATM QOS for Legacy LANs," March 21, 1998; www.data.com/tutorials/qos.html). "Lane makes sure that interface cards or end-users never know they are traveling over ATM; it's all one big LAN to them," says Mark Guinther, director of marketing for Xylan.
Flow control is the next item to investigate. Boxes from Cabletron, Cisco, Fore, IBM, and NEC can apply ER (explicit rate) or RR (relative rate) mechanisms to ABR traffic. Basically, RM (resource management) cells circulate along the path between the source and destination; if a switch detects congestion, it marks them, and the source throttles back, either to the rate explicitly spelled out in the RM cell (ER) or by a relative amount based on the congestion indication (RR).
Other congestion control techniques available are CLP (cell loss priority), EFCI (explicit forward congestion indication), EPD (early packet discard), and PPD (partial packet discard). CLP designates certain cells for discard if necessary. EFCI adds an indicator in the cell header to notify other switches of congestion.
EPD is another way of dumping cells efficiently. It takes advantage of the fact that while the switch deals in cells, the end stations on ELANs work with frames; the two are converted back and forth during the SAR (segmentation and reassembly) process. Since any frame that arrives with cells missing must be retransmitted, dumping cells from frames the switch has already started to stream onto the network only adds to the congestion: those frames show up corrupted and get resent anyway. With EPD, the switch instead drops every cell of frames it has not yet started to transmit. PPD kicks in if the congestion does not clear up. It is more drastic than EPD: the switch starts dumping cells from frames it has already begun to ship. The idea is to relieve congestion at the switch by any means necessary and deal with the requisite retransmissions later.
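To make the distinction concrete, here's a minimal Python sketch of how a frame-aware discard decision might look at a congested output queue. The thresholds, data structures, and end-of-frame flag are invented for illustration; no vendor's implementation is being described.

```python
# Simplified sketch of frame-aware cell discard (EPD/PPD); thresholds and structure are invented.
EPD_THRESHOLD = 8_000   # queue depth (in cells) at which whole new frames are refused
PPD_THRESHOLD = 9_500   # deeper congestion point at which partly queued frames are cut off

class OutputQueue:
    def __init__(self):
        self.cells = []        # cells waiting to go out this port
        self.dropping = set()  # IDs of frames currently being discarded

    def enqueue(self, frame_id, is_last_cell, cell):
        if frame_id in self.dropping:
            # Keep discarding the rest of a frame we have already given up on.
            if is_last_cell:
                self.dropping.discard(frame_id)
            return
        depth = len(self.cells)
        started = any(c["frame"] == frame_id for c in self.cells)
        if depth >= PPD_THRESHOLD and started:
            # PPD: severe congestion -- cut off a frame even though part of it is already queued.
            self.dropping.add(frame_id)
            return
        if depth >= EPD_THRESHOLD and not started:
            # EPD: refuse frames we have not begun, so no doomed partial frames get forwarded.
            self.dropping.add(frame_id)
            return
        self.cells.append({"frame": frame_id, "last": is_last_cell, "data": cell})
```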
Almost all campus backbone boxes offer one or two of these techniques. Switches from Cabletron boast all four. NEC's boxes furnish CLP, EFCI, and EPD. Lucent's Cajun M770 packs EPD. And gear from 3Com, Fore, and Cisco works with EFCI.
Next up is QOS, a must-have item for some and ATM's big selling point to most. If they want this capability, net managers will need to know which spec the switch uses to connect Ethernet or token ring desktops transparently over an ATM backbone. If it works with Lane 2.0, it delivers QOS; if it works only with Lane 1.0, it doesn't. Lane 1.0, introduced by the ATM Forum in 1995, automatically sets up and tears down SVCs to create ELANs. It sends all ELAN data as unspecified bit rate (UBR), on a first-come, first-served basis.
Lane 2.0, on the other hand, lets network managers assign traffic to three additional service categories besides UBR: ABR, CBR, and VBR. ABR (available bit rate) makes the most of spare capacity, constantly monitoring available bandwidth and guaranteeing incoming data streams a minimum share of it. CBR (constant bit rate) reserves a fixed amount of bandwidth for individual connections and is commonly used for video transmission. VBR (variable bit rate) locks in an average rate of bandwidth but allows traffic to burst, which is good for compressed voice and data. Lane 2.0 also works hand in hand with a spec called MPOA (multiprotocol over ATM), which lets ATM nets carry routed protocols like IP, an important ability in a world increasingly dominated by IP.
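How that plays out in practice is largely a matter of deciding which traffic rides in which category. The short Python sketch below shows one hypothetical mapping, following the rough guidance above; the application names and assignments are assumptions, not part of any spec.

```python
# One hypothetical assignment of campus traffic to Lane 2.0 service categories (illustrative only).
SERVICE_CLASSES = {
    "video-distribution": "CBR",   # fixed bandwidth reserved per connection
    "compressed-voice":   "VBR",   # average rate locked in, bursts allowed
    "telemedicine-data":  "ABR",   # guaranteed minimum, grabs spare bandwidth
    "email-and-web":      "UBR",   # best effort, first come, first served
}

def category_for(app: str) -> str:
    return SERVICE_CLASSES.get(app, "UBR")   # anything unclassified rides best effort
```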
So which switches work with which specs? Boxes from Cabletron, Cisco, Fore, IBM, Lucent, Madge, NEC, Newbridge, and Xylan-as well as Nortel's System 5000BH-work with Lane 2.0 and can tap into QOS. Boxes from Olicom and 3Com-as well as Nortel's Centillion 100-can't, since they work with Lane 1.0 only.
A related consideration: How many ELANs can the switch set up? More could definitely be better here. That way, network architects could give each department in a company a separate ELAN with its own server, instead of handling all the company traffic on one broadcast domain. That boosts performance, security, and resiliency. Cisco, Fore, and Newbridge claim their products offer an unlimited number of ELANs. Xylan's switch offers the fewest, at 10.
Bearing the Load
It's also crucial to determine whether a switch is nonblocking, meaning that it has enough capacity to shoot traffic through all ports simultaneously and in full-duplex mode (the ATM default). All vendors claim their products are nonblocking. But net managers will want to verify those claims for themselves. To do so, multiply the maximum number of ports by the fastest allowable port speed. If the result is greater than the internal throughput, the switch is blocking. For example, the maximum speed of Nortel's Centillion 100 is 622 Mbit/s. Multiplied by six, its maximum number of OC12 ports, that yields 3.7 Gbit/s. But the Centillion 100 has an internal throughput of just 3.2 Gbit/s, so it could represent a bottleneck when fully loaded. Nortel's System 5000BH is also blocking. For what it's worth, Tony Tan, senior product marketing manager at Nortel, points out that the cards connected to these boxes' backplanes can switch traffic themselves, making a blockage less likely.
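That back-of-the-envelope check is easy to automate. The Python sketch below applies the same rule of thumb to the Centillion 100 figures quoted above; any other numbers plugged in are up to the reader.

```python
# Rule-of-thumb blocking check: maximum ports times fastest port speed vs. internal throughput.
def is_blocking(max_ports: int, port_speed_gbps: float, backplane_gbps: float) -> bool:
    return max_ports * port_speed_gbps > backplane_gbps

# The Centillion 100 figures quoted above: six OC12 ports at 0.622 Gbit/s against 3.2 Gbit/s.
print(6 * 0.622)                   # about 3.7 Gbit/s of potential load
print(is_blocking(6, 0.622, 3.2))  # True -- a possible bottleneck when fully loaded
```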
As far as the nonblocking products go, the Smartswitch 9500 from Cabletron leads the pack with a 75-Gbit/s backplane. Next up are Fore's ASX-4000 and Lucent's Cajun M770, with 40-Gbit/s backplanes; Cisco's Catalyst 8540, with 20 Gbit/s; and 3Com's Corebuilder 9000, with 15 Gbit/s. At the low end of the scale in terms of throughput, Madge's Collage 750 offers 4 Gbit/s and Cabletron's Smartswitch 2500 has 2.5 Gbit/s.
Net managers can also measure performance by checking out the cell buffer size and distribution scheme, which show how switches will cope with network congestion. When ports are congested, switches with built-in buffers don't simply discard cells; they hold them in a queue and determine when they are released. Basically, there are two buffering schemes. Dynamic buffering maintains a shared pool of cell buffers for the whole switch and doles out space to whichever ports need it. Distributed buffering dedicates buffers to individual ports and lets the net manager configure how much each port gets, a more complex, and therefore more expensive, scheme.
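Here's a minimal sketch of the difference, with invented capacities: a shared pool admits a cell as long as the switch-wide count stays under one limit, while a distributed scheme checks the quota configured for that particular port.

```python
# Toy admission checks contrasting a shared (dynamic) pool with per-port (distributed) quotas.
SHARED_POOL_CELLS = 500_000                 # switch-wide buffer capacity (assumed)
PER_PORT_CELLS = {1: 60_000, 2: 60_000}     # quotas a net manager might configure (assumed)

def admit_shared(cells_queued_switchwide: int) -> bool:
    return cells_queued_switchwide < SHARED_POOL_CELLS

def admit_distributed(port: int, cells_queued_on_port: int) -> bool:
    return cells_queued_on_port < PER_PORT_CELLS.get(port, 0)
```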
Fore's ASX-4000 boasts the largest buffer: It holds over 4 million cells. Lucent's Cajun M770 and NEC's Netnex 8770 each buffer over 2 million cells; the ASX-1000, ASX-1200 and Netnex 8660 buffer over 1 million cells; and Cabletron's Smartswitch 6500 and 9500 and Cisco's Catalyst 8540 accommodate over 500,000 cells. Nortel's Centillion 100 and System 5000BH have the smallest buffers. Each holds about 16,000 cells.
Another way to see who has the best buffering is to check the number of cells buffered per port. 3Com's Corebuilder 9000 leads the way, holding nearly 100,000 cells in reserve per port. Fore, Lucent, and NEC each hold over 60,000. Cabletron, Cisco, Madge, Newbridge, Nortel, and Xylan claim their gear lets net managers set their own per-port buffers. 3Com's Corebuilder 7000HD buffers the smallest number of cells per port: 1,200.
Net architects will also want to assess management for each switch. For half of the ATM campus backbone switches on the shelves, SNMP is the management protocol of choice. RMON and telnet are also offered. Net managers can also tap into several of the switches using proprietary packages: Cabletron's boxes work with its Spectrum ATM Manager, Cisco's work with Ciscoworks 2000, and Xylan's, with its X-Vision software suite.
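For the SNMP-managed boxes, polling basic status works the same as for any other SNMP device. The sketch below uses the open-source pysnmp library, an assumption rather than anything these vendors ship, to read the standard sysDescr object from a switch at a placeholder address.

```python
# Minimal SNMP poll of a switch's sysDescr using the open-source pysnmp library (assumed toolkit).
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),        # SNMPv2c; community string is a placeholder
           UdpTransportTarget(("192.0.2.10", 161)),   # switch management address is a placeholder
           ContextData(),
           ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0))))

if errorIndication:
    print(errorIndication)
else:
    for name, value in varBinds:
        print(f"{name} = {value}")
```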
Dollar Signs
Finally, there's the question of cost. The switches with the highest throughput are generally the most expensive. Cabletron's Smartswitch 9500 at 75 Gbit/s and Fore's ASX-4000 at 40 Gbit/s top the charts at about $425,000 each, fully loaded with OC3 ports and multimode fiber interfaces. But Lucent's Cajun M770, which also blasts traffic at 40 Gbit/s, comes in at $208,950, less than half the price. Also, check out per-port prices to get another idea of how much bang for the buck these switches deliver. Newbridge's CS 3000 is the most expensive, with a per-port price of $10,080 for OC3. Cabletron's Smartswitch 2500 is the cheapest, at $735 per OC3 port.
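Per-port price is just the loaded chassis price divided by the port count, but a small helper makes it easier to compare boxes of different sizes. The chassis price and port count plugged in below are hypothetical, not figures from the vendors above.

```python
# Simple per-port price comparison; the chassis price and port count here are hypothetical.
def price_per_port(loaded_price: float, port_count: int) -> float:
    return loaded_price / port_count

print(f"${price_per_port(210_000, 96):,.0f} per port")   # a $210,000 box with 96 ports -> about $2,188
```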
Sound expensive? Vendors admit it is. ATM has higher starting costs, both in equipment and in training, according to Jason Caley, strategy marketing manager for ATM products at Cabletron. And once the net is up and running, companies will have to deal with a technology that's "a heartache to manage," according to McLean of the Meta Group (Stamford, Conn.). What's more, ATM expertise comes at a high price. "You have to pay like six figures to get someone to support ATM networks," adds Orlando Caison, network engineer at engineering consultancy Comdisco Inc. (Rosemont, Ill.).
So even with all that these switches offer, would gigabit Ethernet be a better bet in the campus backbone? Analysts argue that's not a valid comparison. "Ethernet and ATM are very different, and comparing their prices is like comparing the price of hamburger and filet mignon," says Mary Petrosky, president of consultancy Petrosky.com (San Mateo, Calif.). "No one would dispute that Ethernet is less expensive than ATM, and it's likely to stay that way," she continues. "Volume drives costs down, and the number of switched Ethernet ports shipped each year is greater than the number of ATM ports shipped."
ATM vendors also like to point out that ATM may save a company money on pipes and management, since it consolidates traffic onto one net. Gigabit Ethernet, by contrast, may force net managers to overprovision bandwidth and hope that the necessary apps will get through. Mark Leary, director of LANs for consultancy International Data Corp. (IDC, Framingham, Mass.), comments that "gigabit Ethernet's version of QOS still isn't proven." And observers say that won't do in every situation. "Overprovisioning is fine for basic functions like database and e-mail access, but not for exotic apps like videoconferencing," says Mike McConnell, director of enterprise management and LAN programs for Infonetics Research (San Jose, Calif.).
Still, some net managers seem less than enthusiastic about making the switch to ATM. "I'm not ready to start thinking in cells," admits Comdisco's Caison. Samaritan's Price concedes ATM is "harder to design, implement, and fix," but adds that he has overcome many problems by collaborating with his chosen vendor, 3Com. "We've worked with them to train our staff and tune designs," he adds.
The prospect of making those changes may be daunting, but Greg Ratta, chair of the ATM Forum's technical committee, says ATM needn't be a bear. "MPOA, PNNI, and ELANs have made the technology much more user-friendly." The ATM Forum has also produced a guide with step-by-step instructions for laying out ATM nets. It's available at the Forum's Web site, www.atmforum.com.
Meanwhile, net managers have discovered their own ways to stretch IT dollars while getting ATM's benefits (see "Revving Up With ATM," April 1999; www.data.com/issue/990407/atm.html). Take the case of Julio Ibarra, director of telecommunications at Florida International University (Miami), who is building a campus ATM network with 4,500 nodes furnishing OC3 and 25-Mbit/s ATM to the desktop. "We have found the best way to save money is to have ATM at the core and Ethernet on most desktops," he explains. "We've already made the hardware purchase. That shows our commitment to this technology."
As long as there are net managers who share this passion for ATM, the technology's future remains secure. "Cost, complexity, even gigabit Ethernet is not necessarily the death knell for ATM," says Leary, pointing out that the two technologies can coexist in a network. "I don't think it should be a religious war. Just use what works."
Christine Zimmerman is workgroup networks editor for Data Communications and is based in Chicago. She can be reached at czimmerm@data.com.
Copyright © 1999 CMP Media Inc.