New pipelines promise unprecedented speed
By Jeff Hecht | August 25, 2000
By the end of this year, Nortel Networks Corp. (NT) plans to begin selling a system that will send a staggering 1.6 trillion bits per second through a single fiber. That's equivalent to more than 20 million simultaneous phone calls.
You could put everyone in Australia on one end of a pair of those fibers and everyone in Austria and Hungary on the other, and have phone lines left over.
Yet, the network isn't pipes alone. You need switches to manage information flow. Dump your dial-up modem for high-speed Internet access, and you learn the importance of switches quickly. Straight downloads of big files zip through a cable modem or digital subscriber line (DSL) at amazing speed; it's a simple transfer from point to point.
But there's plenty of time to twiddle your thumbs while your computer collects pieces of a busy Web page, one by one, from a slow server.
Electronic switching isn't keeping up with the pace of light. "The optical capacity per fiber is doubling every year," says Alastair Glass, photonics research vice president at Lucent Technologies Inc.'s (LU) Bell Labs. That beats Moore's law hands down, so engineers are turning to optics to manage information flow.
Integrating these developing optical technologies with fiber-optic transmission promises to break electronic bottlenecks on the information superhighway and create a global optical network.
"The key really is adding intelligence to the network; the capacity is already there," says Ciena Corp. (CIEN) spokesman Aaron Graham.
Combining new and established technologies
Fiber-optic transmission has a proven track record as a signal pipeline. It got a slow start after Chinese-born engineer Charles Kao proposed the idea in 1966 at Standard Telecommunications Laboratory in Harlow, England (now part of Nortel Networks), and Corning (GLW) made the first communication-grade fibers in 1970.
Innovation proceeded slowly in the private and government monopolies that ran telephone systems in those days. After years of cautious tests, they finally approved the first fiber system, for obscure links called "interoffice trunks," which run several miles between local switching centers deep in the bowels of the network.
Fiber transmission was standardized by AT&T (T) at 45 million bits per second, which is equivalent to 672 telephone lines. Even in the late 1970s, that wasn't great, so when the telephone giant decided to build a showcase high-speed system from Boston to Washington, it planned to send three separate signals at different wavelengths through each fiber. It was a bold but ill-fated idea.
By the time AT&T started construction, British Telecommunications (BTY) and Nippon Telegraph and Telephone Corp. (NTT) had shown that new "single-mode" fibers could carry signals 10 times faster than the older type, as well as span 20-mile distances. That made them look ideal for long distance, but top AT&T management cautiously stuck with the older fibers.
In late 1982, the upstart MCI (WCOM) shocked the industry by picking single-mode fibers for its new American long-distance network. Sprint (FON) followed suit, and soon AT&T also changed course.
Engineers built the backbone of the global long-distance network with glass in the years that followed. The first submarine fiber cables crossed the Atlantic in 1988 and the Pacific in 1989. Transmission speeds through single fibers rose steadily to 2.5 billion bits (2.5 gigabits) per second.
Yet, as the industry matured, two revolutions were quietly brewing: the optical amplifier and the Internet. The first would change the face of fiber optics and open the door to optical networking. The second would drive the soaring demand for bandwidth, creating a market for optical networking.
Point to point and beyond
Transmission speeds in commercial fiber systems had increased by more than a factor of 50 by the early 1990s, but the fundamental architecture remained the same. Fibers ran from one electronic box to another. A laser transmitter pumped pulses down fibers made of exquisitely pure glass. On the other end, a receiver converted the light signals back into electrical form, and electronics processed or amplified the signals.
Fibers were big pipes, but electronics were the switches -- and just about everything else -- in the telecommunications system.
Modern telecommunications systems rely on the economies of scale: It costs less to transmit and process many signals if they're combined into a single stream of information.
Electronics do the combining, called multiplexing, in a series of steps. For standard telephone service, they digitize voice signals from 24 phone lines and merge them into one signal at 1.544 million bits per second.
Typically, the next step interleaves 28 of these 1.544-megabit signals to make a 45-megabit signal. Further steps make faster signals and send them through fibers.
At the fiber output, other electronics process each signal. They typically demultiplex it, breaking it into component parts for redirection, and often combine these parts with pieces of other signals and send them through another fiber. Sometimes they amplify and regenerate the signal, allowing it to pass essentially unchanged through another length of fiber.
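The multiplexing arithmetic above can be sketched in a few lines. This is an illustrative calculation using the standard North American DS1/DS3 rates, not code from any carrier's equipment:

```python
# Back-of-envelope sketch of the digital multiplexing hierarchy
# described above, using the standard DS0/DS1/DS3 rates.
VOICE_RATE = 64_000              # bits/s for one digitized phone line (DS0)

# Step 1: 24 voice channels plus framing overhead -> one DS1 ("T1") signal.
ds1_payload = 24 * VOICE_RATE    # 1,536,000 bits/s of voice
ds1_rate = 1_544_000             # actual DS1 rate, including 8 kbit/s framing

# Step 2: 28 DS1 signals plus overhead -> one DS3 signal (~45 Mbit/s).
ds3_payload = 28 * ds1_rate      # 43,232,000 bits/s of DS1 traffic
ds3_rate = 44_736_000            # actual DS3 rate

print(f"DS1 carries 24 calls at {ds1_rate / 1e6:.3f} Mbit/s")
print(f"DS3 carries {28 * 24} calls at {ds3_rate / 1e6:.3f} Mbit/s")
```

The 28 x 24 = 672 voice channels per DS3 is where the "equivalent to 672 telephone lines" figure for a 45-megabit fiber comes from.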
Traditionally, the only optics were the laser and a length of glass, with an optoelectronic detector at the receiving end. They only carried signals from point to point.
The emerging optical network is spreading optics outward from the heart of the network to perform more and more system functions. The optical amplifier was the first step on that road. In many ways, the idea of optical amplification was obvious. A laser amplifies light, and the amplified signal is precisely in phase with the input -- exactly what's needed for communication. Converting an optical signal to electronic form for amplification is cumbersome in comparison, and its complexity adds to costs and risks of failure. Yet, repeaters also can regenerate the original signal, stripping away noise that accumulates during transmission.
More importantly, nobody found the right material for optical amplifiers until the late 1980s, when David Payne, a fiber researcher at the University of Southampton in England, doped the light-carrying cores of optical fibers with an element called erbium. Erbium was not only a very good optical amplifier, it worked the infrared wavelengths where glass fibers are clearest.
Developing practical fiber amplifiers took a few years, but they expanded the optical domain in the heart of the network by stretching the distance signals could travel as light before they saw another electron.
Wavelength-division multiplexing
Erbium-fiber amplifiers, in turn, opened another door. They could amplify signals across a range of wavelengths -- initially from 1,530 to 1,565 nanometers, and now up to about 1,620 nanometers. If the input fiber carried signals on two wavelength channels, erbium could amplify both without scrambling them.
Research engineers had been playing with multi-wavelength systems for years, but repeaters had been showstoppers. Every wavelength had to be separated from the others and run through a separate repeater, sending costs skyward.
Wavelength-division multiplexing had a powerful allure because it multiplied the number of channels a fiber could carry. Many wavelength channels can share a fiber without scrambling each other, like the many radio stations and television channels that share the broadcast radio spectrum. Developers tried optical amplifiers for four wavelengths, and they worked.
Eight followed, then 16. Channel counts rose as engineers cut the erbium-amplifier spectrum into ever-thinner slices. Plain wavelength-division multiplexing (WDM) quickly became "dense-WDM," or DWDM.
The International Telecommunications Union (ITU) sliced the erbium-fiber range into a grid of standard wavelengths about 0.8 nanometers apart. (ITU engineers actually specified the grid in equivalent frequency units, like specifying a microwave as having a 10-gigahertz frequency instead of a 3-centimeter wavelength.)
That didn't stop system developers from paring the spectrum even thinner, into 0.4-nanometer slivers, packing 80 slots into the main erbium-amplifier band. They have now opened a second erbium-amplifier band, stretching from about 1,570 to 1,610 nanometers, which offers another 80 wavelength slots.
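Because the ITU specifies its grid in frequency, the wavelength spacings quoted above are approximations: 0.8-nanometer and 0.4-nanometer slots correspond to 100-gigahertz and 50-gigahertz spacing near 1,550 nanometers. A quick conversion shows the arithmetic (the 193.1-terahertz anchor is the ITU's standard reference frequency):

```python
# The ITU grid is defined in frequency; the ~0.8 nm spacing quoted
# above corresponds to 100 GHz near 1,550 nm.
C = 299_792_458.0  # speed of light, m/s

def wavelength_nm(freq_thz):
    """Convert an optical frequency in THz to a wavelength in nm."""
    return C / (freq_thz * 1e12) * 1e9

# Two adjacent 100-GHz grid channels around the 193.1-THz ITU anchor.
lo, hi = wavelength_nm(193.1), wavelength_nm(193.2)
print(f"channel spacing ~= {lo - hi:.2f} nm")   # about 0.80 nm
```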
It's a whole new way to organize telecommunications. Start by stacking bits together at faster and faster speeds, until you can't make the electronics go any faster. Then assign each high-speed signal to a separate wavelength channel.
Optics become the way to organize the signals feeding into the new, fatter DWDM pipes, the highest denomination in the bandwidth sweepstakes.
Practical limits do exist. The more bits an optical channel carries, the bigger the slice of the spectrum it demands. In today's systems, 2.5-gigabit signals can fit into 0.4-nanometer (50-gigahertz) slots, but 10-gigabit signals typically get 0.8-nanometer slots.
Nortel's new system squeezes 10-gigabit signals into 0.4-nanometer slots by sending alternating wavelength channels in opposite directions to minimize cross talk. "Hero experiment" teams can transmit 40 and 80 gigabits per optical channel in the lab, but those signals require larger slices of the spectrum and can't travel as far.
The usable spectrum is expanding; fibers can carry high-speed signals from about 1,260 to 1,650 nanometers. Transmission distances are limited to tens of miles outside the erbium-amplifier range, but new amplifiers in development may change that.
"Fiber itself can hold 50 terahertz band width, so you can imagine 50 terabits per second of information," says Glass of Bell Labs. Put half the people in the world at each end of a fiber-optic cable operating at that speed, hand everybody a telephone, and it would take only a half-dozen fibers to let everybody talk at once.
How close we can come to that theoretical limit remains to be seen.
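The 50-terabit figure rests on a rough rule of thumb of about one bit per second per hertz of spectral efficiency, an assumption the quote leaves implicit. The arithmetic, including the phone-call count, is simple:

```python
# Rough arithmetic behind the 50-terabit figure quoted above.
# Assumes ~1 bit/s per Hz of spectral efficiency (a rule of thumb,
# not a hard limit).
fiber_bandwidth_hz = 50e12            # 50 THz of usable fiber spectrum
spectral_efficiency = 1.0             # bits per second per hertz (assumption)
capacity_bps = fiber_bandwidth_hz * spectral_efficiency   # 50 Tbit/s

calls_per_fiber = capacity_bps / 64_000   # 64 kbit/s per digitized call
print(f"~{calls_per_fiber / 1e6:.0f} million simultaneous calls per fiber")
```

At roughly 780 million calls per fiber, connecting billions of talkers does indeed come down to a handful of fibers.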
Surging Internet growth
Without the Internet, the ultimate fiber bandwidth would be little more important than how many angels could dance on the head of a pin. Telephone-traffic growth is healthy but can be measured in percentage points per year, while the explosive growth of Internet traffic is forcing carriers to unprecedented expansion.
"Some of our customers are doubling every four months," says Vivian Hudson, vice president of high-capacity optical networks for Nortel Networks. "Not just one, but several of our very large customers."
The number of Internet users continues to grow, but their bandwidth usage is growing even faster. Internet bandwidth is an addictive drug; like closet space, you can never have enough. Home computer links have risen from 300 bits per second on early dial-up modems to around a megabit on cable modems and DSL.
Fancy Web graphics, music downloads, streaming audio and video, and bloated software fill the new bandwidth faster than household clutter crams closets. Few people step back down the ladder voluntarily.
Telecommunications carriers had planned for growth, laying cables containing spare "dark" fibers to provide future capacity. Extra fibers are cheap compared to installing new cables, and the costly transmitters and receivers aren't added until they're needed.
But the carriers had planned on telephone-style growth, not the Internet explosion, and their reserve quickly eroded. Last year, a KMI Corp. analysis of Federal Communications Commission data showed that Sprint had lit 85 percent of its long-distance fibers, while AT&T had lit 50 percent.
Fiber exhaust is a real threat in a tightly competitive market. Laying new cables takes time and money. DWDM hardware is expensive, but it's cheap and easy compared to digging up city streets or running new cables hundreds of miles.
"It's penetrated in the large backbone networks first because it has huge physical savings," says Joe Gansert, director of transport and network architecture for regional phone company Bell Atlantic (BEL). "We're just beginning to introduce optical-wave multiplexing in our metropolitan areas."
In a sense, DWDM channels are merely virtual fibers, a cheaper way to add capacity than running new physical fibers through new holes in the ground. Yet, by multiplying capacity, they have increased the growing problem of traffic jams at off-ramps from the optical superhighway.
Signals must be processed and redirected, like passengers stopping at a hub airport en route to a final destination. DWDM replaces commuter planes with jumbo jets, overloading electronic switches and routers -- the traffic cops that point signals to their destinations. At the same time, DWDM systems provide a new level of organization -- the wavelength channel -- that can help speed traffic flow.
Suppose each incoming jumbo jet contains modules filled with passengers headed for different destinations. Instead of hundreds of passengers individually scurrying between flights, the airport could move whole modules to outgoing flights. Transferring blocks of people would save on airport facilities and staff.
Switching entire wavelength channels in a block would do the same thing in a telecommunication system, without losing your luggage. It's a logical extension of the ideas that created the older hierarchy of digital-data rates, which packaged together signals headed to the same general destination.
The new packages are streams of optical data at 2.5 or 10 gigabits per second, often called "lambdas" -- from the Greek letter lambda (λ), which is the optical engineer's shorthand for "wavelength." They form a new layer in the telecommunications system.
Just as an airport would need new equipment to switch passenger modules, the telecommunication network needs new equipment to direct wavelength channels. It's possible to handle these high-speed optical channels electronically, but it's more appealing to process them optically because they're already in the form of light.
Optical switches could manipulate dozens of wavelengths as they come off the fiber, sending each one its separate way.
All-optical networks are "the Holy Grail," says Gansert of Bell Atlantic. "One of the most significant costs is the continuous conversion from optical to electronic form any time you need to drop circuits" or redirect optical signals, Gansert says.
Today, their huge capacity is needed only in the core of the network, but as traffic continues to increase, more and more of the network will handle such volumes of data. Indeed, optical switches can function at an even higher level, by redirecting a whole fiber's worth of lambdas -- dozens of wavelengths at a time -- instead of just individual channels.
These capabilities open new business opportunities, as well as enhancing network performance. Companies such as Metromedia Fiber Networks Inc. (MFNX) now install cable systems and lease individual fibers to customers who install their own transmission hardware. An affiliated company plans to lease wavelength channels as virtual fibers.
Instead of forcing customers to package their signals in the carrier's chosen format -- as in current telephone leased lines -- leased wavelengths can handle whatever format the customer wants, up to a maximum speed.
Technological wizardry
Realizing these goals is going to take some serious technological wizardry. True optical switches are neither as magical nor as mythical as the Holy Grail, but the technology is at an early stage.
Simple optical switches have been available for years but are limited to such niches as diverting signals to backup fibers in case of failures. The emerging optical network demands switches that are faster, easier to control and less sensitive to wavelength. That demand has stimulated a rush of new ideas.
There is "a lot of innovation that's absolutely been fueled by the VC phenomenon," says Hudson of Nortel. Good technologies carry a hefty premium in the market, and developers are already cashing in. Nortel paid $3.25 billion for Xros Inc. after the startup's switching technology "matured more rapidly than we expected," says Hudson.
Presently, networks contain only rudimentary protection switches, which redirect a fiber's worth of signals if the network detects a cable break -- "backhoe fade" to industry insiders.
The emerging optical network requires more capable switches. One important goal is to drop wavelength channels at intermediate points, such as Sacramento on a route from San Francisco to Salt Lake City. Packaging everything headed to Sacramento in one optical channel avoids the need for expensive electronics to break down the whole high-speed bit stream.
Instead, optics can pick off one wavelength without touching the optical channels going through the node. Since that leaves an open slot, the add/drop can insert signals from Sacramento to Salt Lake City in the same wavelength slot.
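The add/drop operation described above can be modeled as removing one wavelength's payload from the through-traffic and reusing its empty slot for locally added traffic. This toy sketch uses made-up channel names and signals purely for illustration:

```python
# Toy model of the optical add/drop described above: one wavelength
# slot is dropped at an intermediate node and reused for locally
# added traffic. (Channel wavelengths and payloads are illustrative.)

def add_drop(channels, drop_wavelength, add_signal):
    """Drop one wavelength's signal and insert a new one in its slot.

    channels: dict mapping wavelength (nm) -> signal payload.
    Returns (through_channels, dropped_signal).
    """
    through = dict(channels)
    dropped = through.pop(drop_wavelength, None)
    if add_signal is not None:
        through[drop_wavelength] = add_signal   # reuse the empty slot
    return through, dropped

# Traffic from San Francisco: one channel terminates at Sacramento,
# the rest passes through untouched toward Salt Lake City.
line = {1550.1: "SF->SLC data", 1550.9: "SF->Sacramento data"}
through, dropped = add_drop(line, 1550.9, "Sacramento->SLC data")
print(dropped)            # the Sacramento-bound signal, pulled off the line
print(through[1550.9])    # the newly inserted Sacramento->SLC signal
```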
A single fixed optical element can pick off one wavelength while transmitting the rest. However, carriers really want "the ability to virtually instantaneously apply bandwidth to where it's needed," says Fred Harris, vice president of design applications and services for Sprint.
That's trickier, but is possible by combining optics that separate wavelength channels with remotely controlled optical switches.
Large urban switching centers require full-scale optical cross-connects, which can direct signals from any input port to any available output port. They serve the same function as a local telephone switch that can connect any two phone lines in town, but optical channels carry a lot more traffic. A switch with 100 input optical channels, at 10 gigabits each, will process a terabit -- a trillion bits -- per second.
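The any-input-to-any-output behavior and the throughput arithmetic above can be sketched as a simple port mapping. This is a conceptual model, not a description of any vendor's switch fabric:

```python
# Minimal model of a non-blocking optical cross-connect: a mapping
# from input ports to output ports, plus the aggregate-throughput
# arithmetic quoted above (100 channels x 10 Gbit/s = 1 Tbit/s).

class CrossConnect:
    def __init__(self, ports):
        self.ports = ports
        self.routes = {}                  # input port -> output port

    def connect(self, inp, out):
        """Route an input channel to a free output port."""
        if out in self.routes.values():
            raise ValueError(f"output port {out} already in use")
        self.routes[inp] = out

xc = CrossConnect(ports=100)
xc.connect(0, 41)    # any input can reach any available output
xc.connect(1, 7)

aggregate_bps = 100 * 10e9               # 100 ports x 10 Gbit/s each
print(f"aggregate throughput: {aggregate_bps / 1e12:.0f} Tbit/s")
```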
Today's cross-connects are built around digital electronics. Incoming optical signals must be converted to electronic format. Tomorrow's optical cross-connects are emerging from development at companies like Nortel's subsidiary Xros and Lucent, built around a new generation of optical-switching hardware.
Arguably the hottest of the new switching concepts consists of arrays of tiny mirrors that move back and forth to redirect light, called micro-electromechanical systems, or MEMS. Mechanical motion may sound old, but MEMS have a decidedly new twist.
Standard semiconductor processing techniques etch planar arrays of the tiny components from silicon substrates. Texas Instruments (TXN) pioneered the technology for displays, but other companies are adapting it for optical switching.
Both Xros and Lucent use mirrors that tilt back and forth on two axes to reflect light precisely from input to output ports. The etching process leaves the mirror elements suspended on tiny posts above the remaining substrate. Voltages applied to circuits on the substrate pull on sides of the mirror, tilting it at the desired angle to scan a reflected beam across output ports.
State-of-the-art devices boast hundreds of input and output ports, and Xros says its design can handle up to 1,152 ports. Folded optical paths reduce the number of mirror elements required.
A different design uses pop-up mirrors, sometimes called "digital MEMS" because each mirror has only two possible positions. The mirror either lies flat along the surface or pops up and latches in place sticking out, so light beams aimed parallel to the surface pass by or bounce to an output port.
This approach is attractive because it's less likely to misdirect a beam, but the mechanisms are complex, and the number of mirrors required increases as the square of the number of ports.
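The scaling difference between the two mirror architectures is stark. The figures below assume a crossbar needs one pop-up mirror per input/output crossing (N squared), while two-axis tilting designs typically use one steerable mirror per port at each face (roughly 2N) -- the latter figure is an assumption based on typical two-array layouts, not a number from the article:

```python
# Scaling sketch for the two MEMS architectures described above.

def digital_mems_mirrors(n_ports):
    """Pop-up crossbar: one two-position mirror at each crossing."""
    return n_ports ** 2

def tilting_mems_mirrors(n_ports):
    """Two-axis analog design: ~one steerable mirror per input
    and one per output (assumed two-array layout)."""
    return 2 * n_ports

for n in (32, 256, 1152):
    print(f"{n:5d} ports: crossbar needs {digital_mems_mirrors(n):,} "
          f"mirrors, tilting design ~{tilting_mems_mirrors(n):,}")
```

At the 1,152 ports Xros claims, a crossbar would need over 1.3 million mirrors; a tilting design needs a few thousand.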
Another hot concept switches light signals between two intersecting sets of parallel waveguides embedded in a solid substrate. Like the cores of optical fibers, the waveguides have a higher refractive index than the surrounding material, trapping the light in the linear guides.
A set of tiny liquid-filled holes passes through the intersecting points. As long as liquid fills the intersections, the light passes straight through, but moving a bubble into the junction point changes the refractive index and reflects light into the intersecting waveguide.
Agilent Technologies Inc. (A) and Nippon Telegraph and Telephone separately developed switches based on this concept.
Should switches be transparent?
Light signals go straight through both MEMS and microbubble switches, making them optically "transparent." Many developers think that's an advance over optoelectronic switches, called "opaque" because they turn the light into electrical signals for processing before regenerating an optical signal.
Many network elements that convert between optical and electronic formats are designed for specific data rates, so they may have to be replaced if the transmission formats are changed.
Transparent systems are much less sensitive to data rates and signal formats because they simply pass the light through.
On the other hand, "opaque" electronics remain the only practical way to perform some vital functions. Electronics can clean up and regenerate signals in their original format, eliminating noise, cross talk and other distortions.
Likewise, shifting a signal from one wavelength to another now requires electronics to drive another laser transmitter operating at the second wavelength. All-optical regenerators and wavelength converters are still in the early stages of laboratory development, and questions remain about their practicality.
Wavelength conversion is "not viewed as critical in the first generation of optical networks," says Glass of Bell Labs. However, it could become important as future growth fills available optical channel slots.
Generally, long-distance signals make multiple hops; a signal might pass through Philadelphia, Pittsburgh, Cincinnati and Indianapolis on its way from New York to Chicago. Ideally, wavelength channels could be routed independently, but restricting a signal to the same wavelength throughout the system could become horribly inefficient, like only being able to make an airline connection if the same seat was available on both planes.
Optoelectronic wavelength conversion would make the system opaque, but it may be the most practical approach.
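The airline-seat analogy above can be made concrete with a two-hop toy example: without conversion, a connection needs the same wavelength free on both hops; with converters, any free wavelength on each hop will do. Channel numbers and city pairs here are purely illustrative:

```python
# Toy illustration of the wavelength-continuity constraint: without
# converters, a two-hop path needs the SAME wavelength free on both
# hops, like needing the same seat on both connecting flights.

def route_without_conversion(free_hop1, free_hop2):
    """Return a wavelength free on both hops, or None (call blocked)."""
    common = free_hop1 & free_hop2
    return min(common) if common else None

def route_with_conversion(free_hop1, free_hop2):
    """With a converter at the junction, any free wavelength on each
    hop will do; return the (hop1, hop2) pair, or None."""
    if free_hop1 and free_hop2:
        return (min(free_hop1), min(free_hop2))
    return None

hop1 = {1}   # only channel 1 free, New York -> Philadelphia
hop2 = {2}   # only channel 2 free, Philadelphia -> Pittsburgh
print(route_without_conversion(hop1, hop2))   # None: call blocked
print(route_with_conversion(hop1, hop2))      # (1, 2): call succeeds
```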
A host of support technologies also need to be developed, and some are attracting big bucks from investors. The standard lasers used in current systems emit light at a fixed wavelength, so separate inventories are needed for each wavelength channel.
Imagine the logistic nightmares of having to keep spares for each of 160 wavelength channels at every node in an optical network.
Lasers that could be tuned to any desired wavelength would simplify logistics, as well as serve at the output stages for tunable wavelength converters that could generate any optical channel on demand.
The technology looks so good to Nortel that it agreed to pay up to $1.43 billion for CoreTek in March, just weeks after the startup announced a hot new tunable laser.
Managing packets and circuits
As hardware vendors battle over components, system suppliers and carriers are focusing on network management. The issue is not just building better optical traffic cops, but telling them how to manage optical signals.
That's particularly troublesome because the Internet and the telephone system organize their traffic in fundamentally different ways.
Telephone traffic is connection-oriented; the network maintains a connection between you and the person on the other end of the line throughout the call.
The connection from your home is a pair of copper wires. On the national network, it's a guarantee of 56,000 bits per second of capacity to carry your digitized voice. Lose the connection, and you have to call again.
The Internet bundles data together in electronic packets, puts a header on each packet, then sends the packets out over the Net. The header gives the destination; electronic devices called routers read the header and decide on the best route to send the data. Then they add it to a stream of other data bits following the same path.
There's no guarantee of slots; the data may have to sit around until room opens up.
Data packets follow the same routine at each of a series of routers as they bounce through the network.
Response times vary, and sometimes data packets vanish altogether.
Circuit switching is optimized for conversations; the circuit always provides the capacity you need to talk. It's ready the instant you want it, but it's also ready when you're figuring out what to say, and that's inefficient.
Packet switching is optimized for speeding computer data on its way. It doesn't hold empty slots, so it is much more efficient in filling the available communications capacity; this efficiency, however, comes at a cost of signal quality and timing. New protocols are supposed to change this, but quality remains an issue with present systems.
Some companies are pushing transmission of telephone traffic in Internet protocol format, but traditional telephone companies are resisting. "From an engineering perspective, in order to make voice work well, you have to have a connection-oriented protocol," says Fred Harris of Sprint.
Developing management tools for the optical network also poses a big challenge. Major carriers insist on the ability to monitor their electronic networks, and they expect the same from optics. Engineers have learned that Murphy's Law rules in telecommunications.
"Things never work the way they're designed to work," says Harris. He wants to be able to check everything. "I still have to be able to get in to see if the bits are flowing through correctly. Optical networking today does not do that, not yet."
Carriers also want new capabilities from the optical networks. High on the list is more internal intelligence, for rapid provisioning of new services and management of changing loads from control centers.
Existing electronic switches are limited to reacting to component failures. Some companies now plan the circuit routes for new long-distance service manually on yellow sticky notes, says Graham of Ciena.
Opportunity and challenge
Optical networking is making great advances, but like wavelength-division multiplexing, it will not appear overnight throughout the entire network. The telecommunications industry adds capacity when and where it needs it, often overbuilding to begin with and then growing into it.
Established telephone companies have to cope with networks full of legacy equipment that they or their customers can't afford -- or don't want -- to junk. Junior may have a mobile phone, but Grandpa still likes the heft of his vintage rotary phone in Western Electric basic black.
Even if a phone company decides to upgrade all its own equipment, the change takes time, and the company can't shut down for the duration, leaving its customers in the lurch.
"Optical networking will start at the backbone of the global telecommunication system and proceed outward, like wavelength-division multiplexing," says Jeff Gwynne, co-founder and vice president of marketing at Quantum Bridge.
WDM has moved from long-distance systems to the "metro" market, the regional networks that serve large urban areas over tens of miles.
Quantum Bridge is working on different types of optical networks designed to serve the access market, initially linking businesses to phone companies and the Internet. "Eventually, fiber will go to the home -- sooner than people think," says Glass of Bell Labs.
While the opportunities are great, so are the challenges. Despite progress in many areas, key components don't exist outside the laboratory. Tunable lasers are vital to control costs and simplify network management, but the technology is still in its infancy. Optical-wavelength converters and regenerators are even less developed.
Companies are still thrashing out what they want. Some carriers talk about leasing wavelengths to customers, who could put any signal format they want on the optical channel, even if it falls far short of maximum capacity. Others think this scheme is far too inefficient to be practical. (cont) |