May 28, 1999 (CMP via COMTEX) -- A visit to AT&T's corporate offices is a little discordant, maybe even misleading. What's actually taking place at the nation's largest service provider belies the understated atmosphere of its New Jersey campus about 50 miles from New York City. The bucolic site conjures up visions of the way things must have been during AT&T's salad days, before the company was forced to divest its local telephone operations. It evokes images of corporate tranquillity, stability and confidence that the future will look a lot like the past.
The pastoral motif, of course, doesn't jibe with the risky plans being executed in Basking Ridge these days. David Nagel, AT&T's chief technology officer and president of AT&T Labs, is all too aware of this. After all, he's largely responsible for making sure the company's huge gambles pay off.
C. Michael Armstrong, AT&T chairman and chief executive officer, is leading what is arguably one of the biggest corporate makeovers in history, against the backdrop of an industry going through its own revolution, loosely called convergence. In the United States, AT&T's transformation centers on its $100 billion effort to reemerge as a power in local service by buying into the cable industry. AT&T has already landed the second-largest cable company, Tele-Communications Inc. (TCI), and has a pending deal for the third-largest, MediaOne Group Inc. (Englewood, Colo.). Beyond this, it's working on a range of joint ventures to gain local access with other cable companies, including Time Warner Cable (Stamford, Conn.), and has even agreed to sell a 3 percent interest in itself to Microsoft Corp. Combined, these actions add up to an aggressive move into next-generation networks.
AT&T's makeover doesn't stop in the United States. Internationally, the carrier has established a partnership with BT so deep that both carriers will eventually rely on the platform used for their new joint venture as a model to build their own networks. The international effort also involves partnering with Nippon Telegraph and Telephone Corp. (NTT) and taking a 15 percent stake in Japan Telecom Co. Ltd. (Tokyo), as did its new best friend, BT.
Pulling off the deals is one thing. Getting them to pay off is another matter entirely. That's Nagel's job. Once the dust settles, Nagel and his people must integrate the disparate systems, a daunting task. Essentially, it requires mapping a path from circuit- to packet-switched networks, marrying various operations support systems (OSSs), pushing vendors to create gear to speed up the core and nurturing the important home networking industry, all while advancing AT&T's push into the local market through fixed wireless technologies.
Nagel may just be the right man for the job. He's steeped in technology, thanks to his past positions as a senior vice president of Apple Computer Inc. (Cupertino, Calif.) and head of human factors research at NASA's Ames Research Center. And beyond his undergraduate and graduate degrees in engineering, he also holds a doctorate in experimental psychology. That can't hurt. "David comes to the job with a good technical background and the business discipline essential for a modern R&D laboratory operating in a fast-moving and highly competitive market," says Peter Cochrane, BT's chief technologist.
--- What are the technical challenges to translating the potential of all the homes you suddenly have access to into revenues? I think the biggest technical challenges are really more in the way of logistics, frankly. It's what we call around here the physics of getting equipment installed to convert, for example, a cable system designed originally for cable television to one designed for more advanced cable television, telephony and data services and a lot of other things beyond that. There is very little invention needed to make all that work.
How important is deep integration of OSSs? People don't call and say, "Gee, we'd like to have all the services today, so please come out and install everything." They call and say, "Gee, I'd like telephony." And then a week later, a month later, a year later, they call and say, "Gee, I heard about the Internet. I'd really like to get a computer and get high-speed data." So you end up rolling a lot of trucks and doing a lot of installation. We're working on a combination of systems and processes that would allow us to do that really efficiently. We're spending 1999 working on the whole business of installing, testing, making sure the services that we can deliver are up to the standards that AT&T represents.
I came from the computing industry. When I got here, I really underestimated the importance of-and really underappreciated the complexity of-the whole back-office system and the systems that support customer provisioning.
Does latency become a more serious challenge now that last-mile access is faster and people are implementing advanced services? Without question. Once you solve the access network bottleneck, the next big bottleneck is server performance. To put your question another way, when it's masked, you don't know that you're being slowed down by server performance and a lack of caching at the edges of the networks, so no one demands it. But as the access networks get ramped up to provide much higher speed-whether it's wireless or cable or whatever-people are going to say, "I have 10 megabits per second here or 25 or whatever the number is. How come it's as slow as it used to be?" I think there's going to be a lot of pressure, on both the people that make servers and those of us who operate large IP networks, to do the network engineering so that all those bottlenecks get removed as well. So it's a bit of a ratcheting up in the industry.
So there's always going to be a weak link, and it's just going to shift? That's right.
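A rough way to see that point about shifting bottlenecks is to treat response time as access transfer time plus server time. The page size, server delay and access speeds in the sketch below are illustrative assumptions, not AT&T measurements.

    # Illustration only: rough end-to-end time to fetch a 1 MB page,
    # showing how the bottleneck moves from the access line to the server.
    PAGE_BITS = 1_000_000 * 8      # assumed 1 MB page
    SERVER_TIME_S = 0.8            # assumed un-cached server/back-end delay

    for access_mbps in (0.056, 1.5, 10.0, 25.0):   # dial-up through fast access
        transfer_s = PAGE_BITS / (access_mbps * 1_000_000)
        total_s = transfer_s + SERVER_TIME_S
        print(f"{access_mbps:>6} Mbit/s  line {transfer_s:6.2f} s  total {total_s:6.2f} s")
    # At 56 kbit/s the line dominates; at 10 or 25 Mbit/s almost all of the
    # wait is the 0.8 s of server time, so faster access alone barely helps.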
Do you think science can solve these issues? Oh, absolutely. You know, designing a new advanced network for the next century is very different from what we've done in this century. This century, networks have literally been just switching fabric for getting signals from one point to another. In a world of packet switching, you really have to worry about managing the content flows throughout the network. So in one sense, designing the network is becoming more complicated: you still have a switching fabric, but you also have a distributed computing fabric and a distributed information system that you have to overlay or design in, and that is really an integral part of the system.
What's the next step in speeding network performance? You locally cache or put data in the memory of a large high-speed server cluster of some sort right at the edge of the network, and the information gets to the ultimate user a lot faster. That's a fairly simple thing. The next step is to try to figure out how to do dynamic caching in a much more interesting way. But there really are problems beyond that. One of the differences in new networks is that we're going to have a much wider range of devices that are connected. All those things have different characteristics. So not only do you have to cache for purposes of getting information there, you have to proxy, another form of a cache-or vice versa, I guess-so that you can render the information appropriate to the device that you're talking to. It doesn't make any sense to send color, for example, to a device that's only black and white or doesn't have a screen.
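A minimal sketch of the two ideas in that answer, an edge cache plus a proxy that renders content to suit the device. The cache, the origin delay and the device profile here are hypothetical stand-ins, not a description of AT&T's systems.

    import time

    ORIGIN_DELAY_S = 0.5   # assumed round trip to a distant origin server
    edge_cache = {}        # stand-in for a high-speed cache cluster at the network edge

    def fetch(url):
        """Serve from the edge cache when possible; otherwise pay the origin delay."""
        if url not in edge_cache:
            time.sleep(ORIGIN_DELAY_S)        # simulated slow origin fetch
            edge_cache[url] = {"text": f"page at {url}", "color_image": "<jpeg bytes>"}
        return edge_cache[url]

    def render_for(device, content):
        """Proxy step: tailor content to the device, e.g. drop color for a mono screen."""
        if device == "mono_handheld":
            return {"text": content["text"]}  # no point sending color
        return content

    page = fetch("http://www.example.com/news")   # first call hits the origin; repeats hit the cache
    print(render_for("mono_handheld", page))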
So there is a pure bandwidth issue and a network intelligence issue? Is there a way to address both? As soon as everyone starts doing streaming video, wham! We're going to have another big bandwidth problem, so we'll have to figure out ways of making the access networks even faster. Frankly, I don't see an end in my professional lifetime to the bandwidth problem.
The bandwidth problem and the network management problem are inextricably linked in my view. You have to be able to manage the network so that it is extremely reliable. One big difference in data networks-which is, in effect, really the design point for all new next-generation networks-is that they're simply not as reliable as phone networks. And I think people have shown that they've been willing to relax their requirements for brief periods of time, but overall there's a pressure to make these networks more reliable, more available and more predictable. You can't separate the problem of bandwidth management on the one hand and network management and security and all those things on the other hand.
Will you be moving toward one solution? Or are you going to be ecumenical? The short answer is that we're going to be ecumenical, at any instant in time choosing the best way to implement the network. But I'll also say that our design point is IP.
Are you satisfied with the progress they're making on multiprotocol label switching? Yeah. I'm on an advisory committee for the federal government, for President Clinton, and we've spent the last couple of years looking at the state of the art and where the Internet goes and what the technology base is for getting it there and so on. Somewhat of a surprise to me was the sort of consensus on this committee. They're academics and competitors and so on-Vint Cerf is on it-and they believe that the industry does not have the technology that you'd like to have to move confidently into the next century. We think we know how to do it, but there's going to be a fair amount of invention over the next five or 10 years as all the carriers evolve their networks. What people constantly say is, "Oh, not a problem; we can do that very easily." A lot of the new entrants in the telecom business are acting as though everything that's needed to implement 21st-century networks is sort of sitting on the shelf and they just have to piece it all together in interesting ways and so forth. It's nonsense, frankly. We're confident we can do it, but there's a huge amount of work to be done.
Isn't it particularly tricky to give IP networks intelligent network and advanced intelligent network functionality? Well, customers demand it, particularly in the commercial sector. You know, people build businesses based on the assumption that their telephone networks behave in a certain way. And if you show up and say, "Hey, we've got this brand-new technology solution ... by the way, 20 percent of the features that we sold you last year are not going to be in that network," they'll say, "Well, I think I'll stay with the old solution for a while."
Tell us about the BT relationship. We may have different implementations-probably will have-in certain ways and in certain areas. But we will be architecturally consistent across all three companies, the global venture, AT&T and BT.
What specifics can you give us about the plans? Really, the architecture in a case like this is defined by what we call APIs, application program interfaces. It's sort of an analogy to APIs in the PC world. The PC model, by the way, is sort of inspiring a lot of the design for new networks. If I write a piece of application software for Windows, I consider Windows to be a platform. It's a stable development environment. If I put a subroutine in my application that calls for something that controls the Windows screen, the thing that the user interacts with, I can make the same call and it's going to do the same thing every time. So that's become a very powerful way to develop software efficiently-to have this stable set of subroutines, or APIs, that instruct the system to do something. Think of networks as having the same kind of characteristics, a set of subroutines or set of functions that application builders or service builders can call and the network will behave the same way. So a call in the network world might be to set up a secure virtual circuit from point A to point B for some specified duration of time or set of conditions or whatever, and the network will do that.
In our case, it's security functions, quality of service functions, signaling functions, proxy and cache functions, authentication functions and all those things we think have to be done in the network. Interesting stuff can still be done at the edges of the networks, but the things that are done on the edges will work much better if they can assume that the network behaves in a certain stable way.
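To make the API analogy concrete, here is a hypothetical sketch of what a network-level call such as "set up a secure virtual circuit from point A to point B for a specified duration" might look like to a service builder. The class and parameter names are invented for illustration; they are not AT&T's, BT's or the joint venture's actual interfaces.

    from dataclasses import dataclass

    @dataclass
    class VirtualCircuit:
        src: str
        dst: str
        secure: bool
        min_bandwidth_kbps: int
        duration_s: int

    class NetworkPlatform:
        """Hypothetical 'platform' a carrier might expose, by analogy with OS APIs."""
        def setup_circuit(self, src, dst, secure=True,
                          min_bandwidth_kbps=64, duration_s=3600):
            # A real network would do signaling, QoS reservation and
            # authentication here; the point is only that the same call
            # behaves the same way every time it is made.
            return VirtualCircuit(src, dst, secure, min_bandwidth_kbps, duration_s)

    net = NetworkPlatform()
    print(net.setup_circuit("pointA.example.net", "pointB.example.net", duration_s=600))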
So the network is sort of a host? I bought a PalmPilot, and I have to synchronize it. I have two offices-one in California and one here-so I have to synchronize the damn thing. So when my secretary changes my calendar, I have to somehow get that into the PalmPilot. Well, not only do I have two offices, I travel all the time. I just came back from Denver. I actually had my laptop with me, but I didn't have my synchronizing cradle, so I couldn't synchronize my PalmPilot with my PC even though I had both of them there. I don't want to synchronize my PalmPilot with my PC. I want to synchronize both of them to the network, because the network is everywhere, and I don't want to have to take all this junk with me in a bag that makes me look like the Fuller Brush salesman, you know, with all my little adapters and power units and cradles and all that stuff.
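The anecdote boils down to an architecture: keep one authoritative copy on the network and have every device reconcile against it, instead of pairing devices with each other. A toy sketch under that assumption (not an actual AT&T service):

    # Toy model: the network holds the master calendar; each device syncs
    # against the network copy rather than against another device.
    network_calendar = {"1999-06-01": "Board review"}

    def sync_to_network(device_calendar):
        """Merge both copies; on a conflicting date the device's entry wins here."""
        merged = {**network_calendar, **device_calendar}
        network_calendar.update(merged)
        device_calendar.clear()
        device_calendar.update(merged)

    palm_pilot = {"1999-06-02": "Flight to Denver"}
    laptop = {}
    sync_to_network(palm_pilot)   # the PalmPilot pushes its entry to the network...
    sync_to_network(laptop)       # ...and the laptop picks it up, no cradle required
    print(laptop)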
What about multichannel multipoint distribution services? MMDS was designed initially as an alternative to cable television, so there's a lot of work that needs to be done there. But the MMDS companies are not doing real well financially, and they're not doing a lot of that kind of investment. So we think in the near term at least, [Project] Angel is a better solution. But we're looking at all the wireless possibilities.
We had heard that one of the issues with Project Angel was that the cost per home was high. Well, $300 to $350 is the target for making it really profitable. That is a nice target. All of this is electronic stuff, so it's going to ride Moore's Law down the next couple of years. So I'm not too worried about the cost. Angel stuff is right in the hunt with HFC and xDSL and all the other things that it competes with. You know, $50 here, $50 there. Those differences will wash out, I think. The big issue is that we're designing our networks to assume there will be many kinds of access networks, physical networks.
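The Moore's Law argument is easy to put in rough numbers: if electronics costs halve about every two years, a per-home figure that starts above the $300-to-$350 target does not stay above it for long. The starting cost and halving period below are assumptions for illustration only.

    # Illustration: an assumed $600-per-home electronics cost, with costs
    # halving roughly every 24 months, crosses the $300-$350 target quickly.
    cost = 600.0
    halving_months = 24
    for year in range(1999, 2004):
        print(f"{year}: about ${cost:.0f} per home")
        cost /= 2 ** (12 / halving_months)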
Do you plan on using the switches you own through the acquisition of Teleport Communications Group to support your local residential telephony? We sure do. One of the reasons we bought TCG is because they not only have the local switches but they have fiber loops in a lot of the metropolitan areas. Those are also very useful for tying stuff together in a very high-speed, seamless way.
Carl Weinschenk is executive technology editor for tele.com. He can be reached at cweinsch@cmp.com.