To: Frank A. Coluccio who wrote (2485), 5/30/1999 1:42:00 PM
From: Probart
 
Long article......
May 28, 1999 (LTH - CMP via COMTEX) -- A visit to AT&T's corporate offices is a little discordant, maybe even misleading. What's actually taking place at the nation's largest service provider belies the understated atmosphere of its New Jersey campus about 50 miles from New York City. The bucolic site conjures up visions of the way things must have been during AT&T's salad days, before the company was forced to sell its local telephone arms. It evokes images of corporate tranquillity, stability and confidence that the future will look a lot like the past.

The pastoral motif, of course, doesn't jibe with the risky plans being executed in Basking Ridge these days. David Nagel, AT&T's chief technology officer and president of AT&T Labs, is all too aware of this. After all, he's largely responsible for making sure the company's huge gambles pay off.

C. Michael Armstrong, AT&T chairman and chief executive officer, is leading arguably one of the biggest corporate makeovers in history against a backdrop of an industry going through its own revolution, loosely called convergence. In the United States, AT&T's transformation centers on its $100 billion effort to reemerge as a power in local service by buying into the cable industry. AT&T has already nailed the second-largest cable company, Tele-Communications Inc. (TCI), and has a pending deal for the third-largest, MediaOne Group Inc. (Englewood, Colo.). Beyond this, it's working on a range of joint ventures to gain local access with other cable companies, including Time Warner Cable (Stamford, Conn.), and has even agreed to sell a 3 percent interest in itself to Microsoft Corp. Combined, these actions add up to an aggressive move into next-generation networks.

AT&T's makeover doesn't stop in the United States. Internationally, the carrier has established a partnership with BT so deep that both carriers will eventually rely on the platform used for their new joint venture as a model to build their own networks. The international effort also involves partnering with Nippon Telegraph and Telephone Corp. (NTT) and taking a 15 percent stake in Japan Telecom Co. Ltd. (Tokyo), as did its new best friend, BT.

Pulling off the deals is one thing. Getting them to pay off is another matter entirely. That's Nagel's job. Once the dust settles, Nagel and his people must integrate the disparate systems, a daunting task. Essentially, it requires mapping a path from circuit- to packet-switched networks, marrying various operations support systems (OSSs), pushing vendors to create gear to speed the core and nurturing the important home networking industry, all while advancing AT&T's push into the local market through fixed wireless technologies.

Nagel may just be the right man for the job. He's steeped in technology, thanks to his past positions as a senior vice president of Apple Computer Inc. (Cupertino, Calif.) and head of human factors research at NASA's Ames Research Center. And beyond his undergraduate and graduate degrees in engineering, he also holds a doctorate in experimental psychology. That can't hurt. "David comes to the job with a good technical background and the business discipline essential for a modern R&D laboratory operating in a fast-moving and highly competitive market," says Peter Cochrane, BT's chief technologist.


---
What are the technical challenges to translating the potential of all the homes you suddenly have access to into revenues? I think the biggest technical challenges are really more in the way of logistics, frankly. It's what we call around here the physics of getting equipment installed to convert, for example, a cable system designed originally for cable television to one designed for more advanced cable television, telephony and data services and a lot of other things beyond that. There is very little invention needed to make all that work.

How important is deep integration of OSSs? People don't call and say, "Gee, we'd like to have all the services today, so please come out and install everything." They call and say, "Gee, I'd like telephony." And then a week later, a month later, a year later, they call and say, "Gee, I heard about the Internet. I'd really like to get a computer and get high-speed data." So you end up rolling a lot of trucks and doing a lot of installation. We're working on a combination of systems and processes that would allow us to do that really efficiently. We're spending 1999 working on the whole business of installing, testing, making sure the services that we can deliver are up to the standards that AT&T represents.

I came from the computing industry. When I got here, I really underestimated the importance, and really underappreciated the complexity, of the whole back-office system and the systems supporting customer provisioning.

Does latency become a more serious challenge now that last-mile access is faster and people are implementing advanced services? Without question. Once you solve the access network bottleneck, the next big bottleneck is server performance. To put your question another way, when server performance is masked by a slow access link, you don't know that you're being slowed down by the server and a lack of caching at the edges of the network, so no one demands better. But as the access networks get ramped up to provide much higher speed, whether it's wireless or cable or whatever, people are going to say, "I have 10 megabits per second here or 25 or whatever the number is. How come it's as slow as it used to be?" I think there's going to be a lot of pressure, on both the people that make servers and those of us who operate large IP networks, to do the network engineering so that all those bottlenecks get removed as well. So it's a bit of a ratcheting up in the industry.
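
To make Nagel's point concrete, here is a minimal sketch of edge caching. The cache, names and latency figures are all invented for illustration; it only shows why a fast access link exposes origin-server delay:

import time

# Toy model: once the access link is fast, the round trip to a distant
# origin server dominates unless content is cached at the network edge.
# All names and timings below are assumptions, not measured values.

ORIGIN_LATENCY_S = 0.200   # assumed round trip to a distant origin server
EDGE_LATENCY_S = 0.010     # assumed round trip to a nearby edge cache

edge_cache = {}

def fetch(url):
    """Serve from the edge cache when possible; otherwise hit the origin."""
    if url in edge_cache:
        time.sleep(EDGE_LATENCY_S)
        return edge_cache[url]
    time.sleep(ORIGIN_LATENCY_S)        # the server-side bottleneck Nagel describes
    content = "<contents of %s>" % url  # stand-in for a real origin fetch
    edge_cache[url] = content           # populate the edge for the next request
    return content

fetch("http://example.com/page")  # cold: pays the full origin round trip
fetch("http://example.com/page")  # warm: served from the edge in a fraction of the time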

So there's always going to be a weak link, and it's just going to shift? That's right.

Do you think science can solve these issues? Oh, absolutely. You know, designing a new advanced network for the next century is very different from what we've done in this century. This century, networks have literally been just switching fabric for getting signals from one point to another. In a world of packet switching, you really have to worry about managing the content flows throughout the network. And so in one sense, designing the network is becoming more complicated because you have the switching fabric, but you also have a distributed computing fabric and a distributed information system that you have to overlay or design, and that is really an integral part of the system.

What's the next step in speeding network performance? You locally cache or put data in the memory of a large high-speed server cluster of some sort right at the edge of the network, and the information gets to the ultimate user a lot faster. That's a fairly simple thing. The next step is to try to figure out how to do dynamic caching in a much more interesting way. But there really are problems beyond that. One of the differences in new networks is that we're going to have a much wider range of devices that are connected. All those things have different characteristics. So not only do you have to cache for purposes of getting information there, you have to proxy, another form of a cache (or vice versa, I guess), so that you can render the information appropriate to the device that you're talking to. It doesn't make any sense to send color, for example, to a device that's only black and white or doesn't have a screen.
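
The proxy-and-render idea lends itself to a short sketch. This is a hypothetical illustration, not any actual AT&T system: the network keeps one canonical copy of the content, and a proxy tailors it to each requesting device's capabilities. The device profiles and content are invented:

# Invented canonical content and device profiles, for illustration only.
canonical_cache = {"weather/today": {"text": "Sunny, 72F", "image": "radar-map.png"}}

DEVICE_PROFILES = {
    "pc":          {"color": True,  "screen": True},
    "mono_pda":    {"color": False, "screen": True},
    "voice_phone": {"color": False, "screen": False},
}

def render_for(device, key):
    """Proxy the cached content into a form the requesting device can use."""
    content = canonical_cache[key]
    profile = DEVICE_PROFILES[device]
    if not profile["screen"]:
        return {"speech": content["text"]}   # no screen: deliver text for speech output
    if not profile["color"]:
        return {"text": content["text"]}     # monochrome device: drop the color image
    return content                           # capable device: full content

print(render_for("mono_pda", "weather/today"))   # {'text': 'Sunny, 72F'}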

So there is a pure bandwidth issue and a network intelligence issue? Is there a way to address both? As soon as everyone starts doing streaming video, wham! We're going to have another big bandwidth problem, so we'll have to figure out ways of making the access networks even faster. Frankly, I don't see an end in my professional lifetime to the bandwidth problem.

The bandwidth problem and the network management problem are inextricably linked, in my view. You have to be able to manage the network so that it is extremely reliable. One big difference in data networks, which are in effect really the design point for all new next-generation networks, is that they're simply not as reliable as phone networks. And I think people have shown that they've been willing to relax their requirements for brief periods of time, but overall there's pressure to make these networks more reliable, more available and more predictable. You can't separate the problem of bandwidth management on the one hand and network management and security and all those things on the other hand.

Will you be moving toward one solution? Or are you going to be ecumenical? The short answer is that we're going to be ecumenical, at any instant in time choosing the best way to implement the network. But I'll also say that our design point is IP.

Are you satisfied with the progress being made on multiprotocol label switching? Yeah. I'm on an advisory committee for the federal government, for President Clinton, and we've spent the last couple of years looking at the state of the art and where the Internet goes and what the technology base is for getting it there and so on. Somewhat of a surprise to me was the sort of consensus on this committee. They're academics and competitors and so on (Vint Cerf is on it), and they believe that the industry does not have the technology that you'd like to have to move confidently into the next century. We think we know how to do it, but there's going to be a fair amount of invention over the next five or 10 years as all the carriers evolve their networks. What people constantly say is, "Oh, not a problem; we can do that very easily." A lot of the new entrants in the telecom business are acting as though everything that's needed to implement 21st-century networks is sort of sitting on the shelf and they can just piece it all together in interesting ways and so forth. It's nonsense, frankly. We're confident we can do it, but there's a huge amount of work to be done.

Isn't it particularly tricky to give IP networks intelligent network and advanced intelligent network functionality? Well, customers demand it, particularly in the commercial sector. You know, people build businesses based on the assumption that their telephone networks behave in a certain way. And if you show up and say, "Hey, we've got this brand-new technology solution ... by the way, 20 percent of the features that we sold you last year are not going to be in that network," they'll say, "Well, I think I'll stay with the old solution for a while."

Tell us about the BT relationship. We may have different implementations-probably will have-in certain ways and in certain areas. But we will be architecturally consistent across all three companies, the global venture, AT&T and BT.

What specifics can you give us about the plans? Really, the architecture in a case like this is defined by what we call APIs, application program interfaces. It's sort of an analogy to APIs in the PC world. The PC model, by the way, is sort of inspiring a lot of the design for new networks. If I write a piece of application software for Windows, I consider Windows to be a platform. It's a stable development environment. If I put a subroutine in my application that calls for something that controls the Windows screen, the thing that the user interacts with, I can make the same call and it's going to do the same thing every time. So that's become a very powerful way to develop software efficiently-to have this stable set of subroutines, or APIs, that instruct the system to do something. Think of networks as having the same kind of characteristics, a set of subroutines or set of functions that application builders or service builders can call and the network will behave the same way. So a call in the network world might be to set up a secure virtual circuit from point A to point B for some specified duration of time or set of conditions or whatever, and the network will do that.

In our case, it's security functions, quality of service functions, signaling functions, proxy and cache functions, authentication functions and all those things we think have to be done in the network. Interesting stuff can still be done at the edges of the networks, but the things that are done on the edges will work much better if they can assume that the network behaves in a certain stable way.
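
Nagel's Windows analogy suggests what such a network API might look like. The following is a speculative sketch only; the class and method names are invented and do not reflect any real AT&T or BT interface. It models the call he describes: set up a secure virtual circuit between two points for a specified duration.

from dataclasses import dataclass

@dataclass
class VirtualCircuit:
    src: str
    dst: str
    duration_s: int
    encrypted: bool

class NetworkAPI:
    """A stand-in for the stable, platform-style interface Nagel describes."""

    def create_secure_circuit(self, src, dst, duration_s):
        # A real implementation would drive signaling, QoS reservation and
        # authentication inside the network; here it just returns a descriptor.
        return VirtualCircuit(src, dst, duration_s, encrypted=True)

net = NetworkAPI()
circuit = net.create_secure_circuit("point-A", "point-B", duration_s=3600)
print(circuit)  # the network-world equivalent of a Windows API call

The point of the analogy is stability: as long as the call behaves the same way every time, service builders can treat the network as a development platform rather than a moving target.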

So the network is sort of a host? I bought a PalmPilot, and I have to synchronize it. I have two offices, one in California and one here, so I have to synchronize the damn thing. So when my secretary changes my calendar, I have to somehow get that into the PalmPilot. Well, not only do I have two offices, I travel all the time. I just came back from Denver. I actually had my laptop with me, but I didn't have my synchronizing cradle, so I couldn't synchronize my PalmPilot with my PC even though I had both of them there. I don't want to synchronize my PalmPilot with my PC. I want to synchronize both of them to the network, because the network is everywhere, and I don't want to have to take all this junk with me in a bag that makes me look like the Fuller Brush salesman, you know, with all my little adapters and power units and cradles and all that stuff.
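
The anecdote implies a simple model: every device reconciles against one canonical copy held in the network. Here is a toy sketch of that idea using an invented last-writer-wins rule; real synchronization would need conflict handling well beyond this:

network_calendar = {}  # canonical copy in the network: event_id -> (timestamp, details)

def sync_device(device_calendar):
    """Merge a device's entries into the network copy (newest wins), then pull everything down."""
    for event_id, (ts, details) in device_calendar.items():
        if event_id not in network_calendar or ts > network_calendar[event_id][0]:
            network_calendar[event_id] = (ts, details)
    device_calendar.update(network_calendar)

pda = {"mtg-1": (100, "Staff meeting, 9 a.m.")}
laptop = {"mtg-1": (105, "Staff meeting, moved to 10 a.m.")}

sync_device(pda)     # network now holds the 9 a.m. entry
sync_device(laptop)  # the newer edit wins; the network holds the 10 a.m. version
sync_device(pda)     # the PDA picks up the change without ever touching the laptop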

What about multichannel multipoint distribution services? MMDS was designed initially as an alternative to cable television, so there's a lot of work that needs to be done there. But the MMDS companies are not doing real well financially, and they're not doing a lot of that kind of investment. So we think in the near term at least, [Project] Angel is a better solution. But we're looking at all the wireless possibilities.

We had heard that one of the issues with Project Angel was that the cost per home was high. Well, $300 to $350 is the target for making it really profitable. That is a nice target. All of this is electronic stuff, so it's going to ride Moore's Law down the next couple of years. So I'm not too worried about the cost. Angel stuff is right in the hunt with HFC and xDSL and all the other things that it competes with. You know, $50 here, $50 there. Those differences will wash out, I think. The big issue is that we're designing our networks to assume there will be many kinds of access networks, physical networks.
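
Nagel's "ride Moore's Law down" remark is easy to put in rough numbers. A back-of-the-envelope sketch, assuming per-home electronics cost halves every 18 months; the halving period is our assumption, not a figure from the interview:

def projected_cost(initial_cost, years, halving_period_years=1.5):
    # Assumed Moore's Law-style decline: cost halves each halving period.
    return initial_cost * 0.5 ** (years / halving_period_years)

for years in (0, 1, 2, 3):
    print("year %d: $%.0f per home" % (years, projected_cost(350, years)))
# year 0: $350, year 1: $220, year 2: $139, year 3: $88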

Do you plan on using the switches you own through the acquisition of Teleport Communications Group to support your local residential telephony? We sure do. One of the reasons we bought TCG is because they not only have the local switches but they have fiber loops in a lot of the metropolitan areas. Those are also very useful for tying stuff together in a very high-speed, seamless way.


Carl Weinschenk is executive technology editor for tele.com.
He can be reached at cweinsch@cmp.com.



To: Frank A. Coluccio who wrote (2485), 6/2/1999 12:13:00 AM
From: Mark Oliver
 
<This would ordinarily make sense, but I recall some flack over this recently, and I think that ATHM's posture (as an extension of T's, or vice versa) towards two-way LAN-extension services will probably show up on their published use policy guidelines as a no-no. Any thoughts on this? >

Frank, my thinking was simply that looking at the subscriber base of MediaOne and dividing the purchase price by the number of subscribers gives you an idea of the valuation on the deal. But I believe the subscriber base is open to large growth, as the range of services T is proposing will win back many customers who switched off cable to DSS, and there will be a large number of customers wanting Internet/telecom service who are not traditional TV customers.
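
To make the arithmetic explicit, a quick sketch with placeholder figures; these are assumptions for illustration, not the actual MediaOne deal terms:

# Hypothetical numbers for illustration only.
purchase_price = 60e9   # assumed deal price, dollars
subscribers = 5e6       # assumed current subscriber count

print("$%.0f per subscriber" % (purchase_price / subscribers))          # $12000

# Mark's point: if the base grows, the effective price per subscriber falls.
future_subscribers = 7e6  # assumed growth case
print("$%.0f per subscriber" % (purchase_price / future_subscribers))   # $8571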

So, your question is: will the Internet user, especially a SOHO or even a larger business, have too great a bandwidth demand for the cable-based network? I don't know. I would think it might be very true in the short term, but in the long term the whole viability of this plan relies on having a very powerful two-way network.

I think the upstream link of HFC systems will be enhanced greatly by equipment that is just now being installed. There are metro DWDM systems designed specifically to sit in the field and send eight wavelengths of light back to the headend. The infrastructure of these systems is in great need of build-out, and that will be a major focus of spending for the next several years. It will be done market by market, and it will have to be done in stages. Clearly, T is not there today, but we are all buying the stock on the idea that it will happen in the next several years and that this combination of services will be very compelling.

So, yes, I believe high-speed Internet access will be offered to businesses, starting with SOHO and working up.

Lest we forget, T also has a great wireless Internet service coming down the road for its PCS phone customers. This, again, is very interesting.

Regards, Mark

May 31, 1999, Issue: 1162
Section: Communications
---
Wireless data transfers get a boost
Mark LaPedus

Silicon Valley - Two cell-phone chip rivals, DSP Communications Inc. (DSPC) and Qualcomm Inc., are taking different approaches to increase the speed of wireless data transfers: The companies have separately announced products that enable CDMA handsets to access the Internet, e-mail, and related services over higher-speed wireless networks.

Today, wireless data services are expensive and thus prohibitive for most users, and data-transfer rates for wireless handsets range from a mere 9.6 to 14.4 Kbits/s.

The next-generation wireless standard, called third-generation (3G), aims to boost data rates initially to 384 Kbits/s, and later to 2 Mbits/s. 3G also promises to unify the various digital cellular standards under one umbrella.

At present, two 3G technologies exist: cdma2000 and Wideband-CDMA (W-CDMA). Qualcomm is leading the charge in the cdma2000 camp, while Japan's DoCoMo and others are pushing W-CDMA. DSPC, meanwhile, does not endorse one technology over the other; in theory, its chips should support both cdma2000 and W-CDMA.

Hoping to bridge the gap between today's wireless data services and 3G, San Diego-based Qualcomm has begun shipping its new high-speed packet data (HSPD) software products to CDMA wireless carriers in Japan and Korea.

The HSPD software enables CDMA handsets to receive wireless data at speeds ranging from 64 to 86.4 Kbits/s, according to Johan Lodenius, vice president of marketing for Qualcomm's CDMA Technologies Division, the semiconductor and software arm of the cellular-communications giant.

The HSPD software works in conjunction with handsets built around Qualcomm's CDMA chipset, the MSM3000, Lodenius said.

Not to be outdone, Cupertino, Calif.-based DSPC has announced the D5431, a CDMA chipset designed to support wireless data-transfer rates of up to 115 Kbits/s. "The biggest demand for wireless data is in Japan and Korea now-the U.S. is a little bit behind," said Arnon Kohavi, senior vice president of strategic relations at DSPC.

The new D5431, based on an ARM7 RISC chip core, features advanced voice-recognition capabilities and acoustic echo-cancellation functions. It operates at 2.5 V and provides up to 350 hours of standby time. The chip will begin sampling this quarter, with production slated for the fourth quarter, Kohavi said.

In the face of competing products from LSI Logic, PrairieComm, and VLSI Technology, DSPC and Qualcomm continue to gain momentum in the CDMA-chipset business. In fact, they are the only two CDMA-chipset vendors currently shipping in volume; the other vendors are struggling to deliver their products, sources said.

Samsung, Sony, and other CDMA-handset OEMs have announced aggressive plans to build their own chipsets, thereby lessening their dependence on Qualcomm, their key supplier and handset competitor.

But Qualcomm isn't losing any market share, according to Lodenius. "If anything, we're gaining market share," he said. "Many companies are trying to develop their own chipsets, but they have generally failed. I don't think our competitors can keep up with us in terms of our chipset technology."

Wireless data services are expected to become a big business in the next few years: Allied Business Intelligence Inc., Oyster Bay, N.Y., projects growth from 25.3 million worldwide subscribers in 2000 to 88.6 million by 2006.
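
Those figures imply a steep compound growth rate. A quick check of the arithmetic on the Allied Business Intelligence projection:

start, end, years = 25.3e6, 88.6e6, 6   # subscribers, 2000 through 2006

cagr = (end / start) ** (1.0 / years) - 1
print("implied growth: %.1f%% per year" % (cagr * 100))   # about 23.2% per year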