Technology Stocks : SDL, Inc. [Nasdaq: SDLI]

To: kalicokatt who wrote (514), 11/8/1999 4:25:00 AM
From: pat mudge
 
I haven't written a summary of NGN, but I have transcribed the finest keynote of the week, given by James Crowe, CEO of Level 3 Communications.

When you hear what he has to say, you'll know why SDLI and JDSU are such fantastic investments.

<<<

John mentioned I'd spent time as chairman of WorldCom and I thought we'd just stop and take a minute --- take a poll to see how many ex-chairmen were in the audience. Maybe get my bearings.

John's quite right: I'm going to spend the next 30 to 45 minutes talking about a combination of two subjects. I know you're interested in the technology, the pace of change, the acceleration of the pace of change, and the various protocols we all deal with. I'm going to make an attempt, an unnatural attempt, to combine it with economics, a subject most of us avoided in college. I believe you cannot understand where we're going, in the short term, without a sense of the economics that are bundled inextricably with the technical trends we're experiencing.

I'm going to make these points in four ways. First, the nature of the network, and why I firmly believe not only that we are going to see abundance, but that the term abundance has no meaning without talking about demand at the same time. From my perspective the two keys to understanding what's going on are the rate of change in price and cost, and the resulting increase in demand. Those are the two sides of the coin. How can you use the term abundance or shortage without giving some sense of the interrelationship between price and demand? Price elasticity, as you'll remember from Economics 101. And I contend that bandwidth is strongly price elastic, so there is a natural balance between supply and demand. As long as prices drop, demand will go up more quickly, and I'll have something to say about that in the context of what John called silicon economics.

From my perspective silicon economics is about two things. Yes, it's about the pace of cost reduction --- the rate at which unit costs drop --- but it's also about the rate at which demand increases. So it's a combination of those two. Finally I'm going to talk about voice and the development of the next-generation network, because voice plays an unusual role in networking: after 100 years of monopoly, an outsized percentage of the revenues available in networking still comes from that one service. Perhaps it shouldn't be that way, but it is, and we all need to deal with it. And we'll talk about how we think it needs to be dealt with.

Okay, what do you need when you build a network? Next generation, traditional, or whatever. Well, there's a bunch of characteristics. I won't go down each one but suffice it to say we have to address information and get it where we want with the right kind of quality, the right kind of consistency, right kind of delays, and if there's a problem we have to be able to react to it. How have we dealt with that? At least today --- and I think there are very few exceptions --- most organizations that would hire a group of network engineers, give them the right amount of capital, say get to it, I think would build the kind of network you see on the screen. Up at the top we have IP. You can argue about the religious significance of other protocols, you can argue about ATM as a substitute, but from my perspective the argument's over, it is the market-based standard. Period. It's what all the devices at the periphery speak, it's what the hardware and software manufacturers agree is the appropriate protocol of the future so the debate is over. There is too much intellectual and financial capital flowing into this area to think otherwise. Doesn't mean there isn't a possibility of an alternative, but the chances are very slim.

Next down is an ATM layer, not because we want it there but because today, at least, if you want quality of service you have to carve out virtual circuits in ATM and assign certain QoS --- wish we didn't have to, and maybe MPLS will eliminate that in the short term --- but today at least ATM's the way you get it done. Next down is SONET. Why? Because if you want millisecond restoration you don't have any other choice. With all-optical networks, with optical cross-connects, with switches with the right kind of technical characteristics, maybe that changes, but not today. Finally, we've got WDM for capacity and of course the physical layer. If you look at those layers and at the reasons I said they're there, you find an interesting economic problem. Each of those layers was designed differently. Each came out of a different world. ATM and SONET came largely out of the telecom world. WDM came from the marketplace, from a start-up called Ciena, and took Lucent and Nortel by surprise. Now they've reacted. Fact is, it wasn't driven by the ITU or some central standards body; it came out of the marketplace. And, because none of those standards was developed with the others in mind, there's a large amount of overlapping functionality. For example, ATM was intended as an entire protocol stack, everything from addressing all the way down to protection. So you pay for a lot of functionality you never use, a lot of overlap.

Now, let's look at the relative improvement rates in each of those layers. These are actual, measurable improvement rates per year, per port. Each year, for instance, IP, for each dollar spent, gave 52% more performance. Basically Moore's Law. It's not a coincidence; it's largely a computer-driven protocol. ATM and SONET --- I believe largely because there's a lot of ITU, centrally planned protocol development rather than market-based development --- are moving quickly, but not nearly as quickly as IP. Roughly doubling, as you can see in that second column, every 40 months in the performance you buy per dollar. SONET is about every 30. Burning up the track, by the way, is WDM. Doubling in price performance --- depending . . . you have to check your watch to see the right answer --- I think today it's closer to every 6 months, but when I put the slides together it was every 10 months. It's accelerating. I think John said Gerry Butters talked about that, and it's certainly true --- an incredible technology, perhaps the most rapidly improving technology in industrial history.
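
A rough sketch putting those figures on a common annual basis. The 52% IP figure and the doubling periods are the ones quoted above; converting a doubling time of m months to an annual rate is just compounding, 2^(12/m) - 1.

# Converting the doubling periods quoted above into annual price-performance gains.
# IP is quoted directly as 52% per year; the other layers are quoted as doubling times.
layers = {
    "IP": {"annual_gain": 0.52},
    "ATM": {"doubling_months": 40},
    "SONET": {"doubling_months": 30},
    "WDM (10-month doubling)": {"doubling_months": 10},
    "WDM (6-month doubling)": {"doubling_months": 6},
}

for name, spec in layers.items():
    gain = spec.get("annual_gain")
    if gain is None:
        # Doubling every m months compounds to a factor of 2**(12/m) per year.
        gain = 2 ** (12 / spec["doubling_months"]) - 1
    print(f"{name}: ~{gain:.0%} more performance per dollar, per year")

That works out to roughly 23% a year for ATM, 32% for SONET, and 130% to 300% for WDM, which is the gap Crowe is pointing at.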

What does that mean? Well, functionality tends to be fungible, meaning it can move around. We can do protection switching in SONET; there's no reason you can't do it in WDM. You can do protection switching in ATM, get quality of service in ATM, but perhaps MPLS or some other protocol --- my bet's on MPLS --- comes along, and we'll get QoS in IP in many of the same ways we get it in ATM. But economically, those layers that improve the most quickly tend to subsume the functionality of, and eliminate, the other layers. So I think it's likely you're going to see an IP layer, a WDM layer, and in between a very thin SONET layer for interface. It's going to be a long time before we stop interfacing at OC-3, OC-12, OC-48. We have to have some sort of thin interface layer. But a lot of the functionality is going to drop down to WDM. With one big caveat that you all know: all that I've said is subject to unexpected disruption, which will occur. It occurred with WDM, it'll occur again and again and again. And much of what I say won't make any sense 5 years from now. Some smart set of entrepreneurs will find an alternative.

Okay, what's that all mean? Well, trend-line it means we're headed towards an IP over WDM network. You can do the math and get a feel for the kind of unit cost drops we can expect. That's what customers can expect over the next couple of years, at least trend-line. ***In our network over the next ten years we're going to spend about a third of our capital on IP routing kinds of equipment and about two-thirds on the optical layer.*** So if you improvement-weight those two categories of cost, there is no reason, trend-line, why we're not headed to a rate of improvement in the price of bandwidth in the 80% range. Now, we're nowhere near that. You as customers know that. I'd simply say we've been locked in a monopoly, rate-based regulatory environment too long. We should have seen this a long time ago. We haven't, but it's coming. It's coming. We haven't because, to date, carriers, one, have assumed asset lives in their networks that are much longer than they really are. The average telco writes its assets off every 14 years. That's GAAP; the accountants can change that to something less with a pen. What they can't change with a pen, however, is the fact that they rebuild their networks on average every 14 years, and that's a whole other matter: procurement, design, demand projection, and the very way in which you deploy and provision the network. That's going to take a lot of change.
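
For what it's worth, one way to read that "improvement-weighting" is to weight each layer's annual price-performance gain by its share of capital. The one-third/two-thirds split and the 52% IP figure are from the talk; assuming the optical layer's price performance roughly doubles each year (conservative against the 6-to-10-month WDM doubling quoted earlier), the blend lands right around the 80% range he mentions.

# Improvement-weighting the capital mix: ~1/3 IP routing, ~2/3 optical layer.
# The 52%/year IP gain is quoted in the talk; the 100%/year optical gain is an
# assumption (conservative versus WDM doubling every 6-10 months).
capex_share = {"IP routing": 1 / 3, "optical layer": 2 / 3}
annual_gain = {"IP routing": 0.52, "optical layer": 1.00}

weighted = sum(capex_share[k] * annual_gain[k] for k in capex_share)
print(f"improvement-weighted gain: ~{weighted:.0%} per year")  # ~84%, i.e. the "80% range"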

Today our system is largely voice-oriented: divide the network up into 64-kilobit chunks and multiples thereof. Makes no sense. Circuit switching makes no sense. And yet there is such an investment in that technology --- not necessarily in the boxes themselves, but in the entire operations support system --- that it's going to be hard to change. And networks, most importantly, have been built statically. There's an assumption about demand, the network is engineered to meet that demand, and years later you're out of capacity. They have not been built with the notion of continuous change and continuous upgradability. To give you some sense of the implications of this kind of continuous change, let's take one network element to illustrate the point: fiber. If you were to ask AT&T, MCI, Sprint, well, what about continuous upgradability and change, they'd say no problem, don't know what the concern is: I can swap out the electronics that occur in my single-mode network about every 40 kilometers, add more electronics, get more capacity, and handle not only my own needs for the foreseeable future but everyone else's. Which would be true, but off the point. The point is: at what unit cost?

You're looking at a chart we put together with Corning and Nortel. The vertical axis, on a log scale, is the cost to move information, in gigabits per mile per second. The bottom, the horizontal axis, is ten years of actual data and ten years of projection. The assumptions are there on the screen. If you don't like them, you can come up with your own, but they're sort of middle-of-the-road assumptions. The big one is that historically the fiber rate of improvement --- and I'll define improvement in a moment --- is about 30% a year. I expect that'll greatly accelerate, but let's just assume it continues. What do I mean by rate of improvement? I mean the budget you get with fiber that you can allocate to spreading your equipment further apart, running more wavelengths of light, higher bit rates, or all three. Each generation of fiber gives you a bigger budget. How much? 30%, in accordance with these assumptions. In blue is single-mode fiber; as you can see there's been a rapid improvement. Just as the dominant carriers have said, there's been a big improvement rate for that first generation. All would be well and good if you had a monopoly, except in '96 non-zero dispersion-shifted fiber was introduced. It got you a bigger budget. Qwest and IXC would tell you they locate their equipment about every 80 kilometers --- hundreds of millions of dollars of difference every time you upgrade electronics, which you do regularly as you upgrade speed.

Big difference in economics. So at any point along the curve, as you can see, the second generation has a unit cost advantage. In '99 came large effective area fiber --- we happen to be using a Corning fiber called LEAF; Lucent of course has a competitive product --- and we'll continue to see a new generation of fiber about every 18 to 24 months or so. It isn't that the old fiber generations no longer work. It isn't that those old generations of fiber can't accommodate more traffic. It's just that they won't be cost competitive. Why? Simply take a look at the economics. Let's say five years from now you're operating on single-mode fiber and competing with next-generation fiber, which according to the assumptions will be introduced about that time. You're going to end up with 8X the unit cost while operating on that first generation of fiber. It's that kind of difference. So what do you do? Well, put multiple conduits in so you can pull fiber when that's the right choice. Actually we think you only need about a dozen fibers before the next generation is going to be out. You don't need 96. And you don't need an infinite number of conduits. You just need enough so you can pull the oldest generation out and move the traffic over to the newest one when it's the right economic choice in the cycle.
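
The 8X number is consistent with simply compounding that 30%-a-year fiber budget over the age gap between generations. The 30% rate is from the talk; the year gaps below, and the reading of "five years from now" as roughly an eight-year design gap between old and new plant, are assumptions.

# If each year of fiber development buys ~30% more "budget" (amplifier spacing,
# wavelength count, bit rate), fiber that is n years behind the newest generation
# carries roughly 1.3**n times its unit cost.
annual_fiber_gain = 0.30  # from the talk
for years_behind in (2, 4, 6, 8):
    penalty = (1 + annual_fiber_gain) ** years_behind
    print(f"{years_behind} years behind the newest fiber -> ~{penalty:.1f}x the unit cost")
# ~8x at roughly eight years behind, about where mid-1990s vintage single-mode
# plant would sit relative to a generation introduced around 2004.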

I talk about conduits and fiber because it's a way to illustrate the necessity for an upgradeable network. It's by no means the only or even the most important factor. What kind of fibers need to be put in? What's the equipment spacing? The optimal next-generation spacing won't be the same as the first; how do you accommodate that? You build a network that can accommodate it. Do you have options on real estate? How do you procure? One of our largest technology providers, for instance, told me that many of the telcos have longer procurement cycles than it has development cycles. That's a problem if you want to build an upgradeable network.

If you buy, or at least stipulate for the moment, that in a properly designed network you can drop unit costs rapidly, what about demand? I don't have to tell this audience that the great glut debate is just smoke and mirrors. It's really about economics. If we drop the price we'll certainly see an increase in demand. Why? Because it's a price-elastic service. For every one percent you drop price, you're going to get a greater than one percent increase in demand. That's not a theory; we've seen it happen before. The poster child, of course, is the market for MIPS. Over the last decade, decade and a half, for every one percent drop in price we saw a 2.4 percent increase in demand. Hyper-elasticity. And it's not enough to have the potential to drop costs because the technology has improved; you also have to have high price elasticity to see that silicon economic model. For instance, believe it or not, electricity is highly price elastic. Trouble is, it took 78 years for the prices to drop any significant amount. At the other end of the spectrum, hard disks deliver about 40% more capacity each year for each dollar spent. Trouble is, demand only went up 40%. It's been a tough market as a result. It hasn't been a beneficiary of silicon economics. Long distance has been slowly improving in price and doesn't have high elasticity. But bandwidth and microprocessors --- I tell you, bandwidth, routing, all of those are hyper-elastic and have hyper price-performance improvement. And that's magic.
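
A toy illustration of why that elasticity figure is the whole ballgame: with constant-elasticity demand, quantity scales as price raised to the elasticity, so a 1% price cut lifts quantity by roughly the elasticity in percent, and anything beyond -1 means price cuts grow total spending. The -2.4 MIPS elasticity is from the talk; the other elasticities and the 30% price cut are made-up inputs for comparison.

# Constant-elasticity demand: quantity ~ price**elasticity. With elasticity
# beyond -1 ("hyper-elastic"), cutting price grows total spending.
def spend_multiplier(price_cut, elasticity):
    """Relative total spend (price * quantity) after a fractional price cut."""
    new_price = 1.0 - price_cut
    new_quantity = new_price ** elasticity
    return new_price * new_quantity

cases = [
    ("MIPS, elasticity -2.4 (from the talk)", -2.4),
    ("unit-elastic demand, -1.0 (assumed)", -1.0),
    ("inelastic demand, -0.5 (assumed)", -0.5),
]
for name, e in cases:
    change = spend_multiplier(0.30, e) - 1  # a 30% price cut, for illustration
    print(f"{name}: total spend changes by {change:+.0%}")

With the -2.4 elasticity, a 30% price cut grows total spending by roughly two-thirds; with inelastic demand, the same cut shrinks it.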

Why am I so convinced it's price elastic? Because communications is all about information movement. And there are billion-dollar markets --- trillion-dollar markets --- today that are simply about information distribution. The markets for software distribution, for entertainment, for news, for video distribution are all still served largely by trucks. Let's face it, all of those are moving over to the Internet. We don't have to wonder where the demand is going to come from in the short term. For the longer term I get even more excited. To give you some sense of how early we are in the process of even beginning to meet the demand for bandwidth, think about this. I got in a car, got in an airplane, got in another car, spent money on a hotel room, drove over here, because of the importance of physical interaction with people. We're visual animals. That's how we gather 99 point something percent of the information we process. What if we had a reasonably priced service where I could have the kind of quality interaction with John that I can only get by visiting him? It'd be a great product. Well, at one point I went out and began to find an answer to the question: what's the bandwidth to the optic nerve? Anybody know? I never could figure it out, and I'm told it's not characterized yet. So I took the problem from a different point of view and asked how much information you need to present to someone to begin to approximate reality. Assume your head's in the middle of a sphere; assume you don't need the back half because you can't see it; assume you could paint a picture on that hemisphere that was so accurate --- and we'll just use today's technologies, not high definition or anything --- accurate enough that it started to approach physical presence. What would you need? You can use your own assumptions. I'm assuming 24-bit color, paint a new color every 30th of a second --- 30 frames per second --- 2400 dots per inch. Just do the math on that: that is 15 terabits, 15 trillion bits per second, for two-way interaction. I mentioned we were only putting one cable in the conduits we have. We have twelve of them. Let's assume we fill all twelve of those with the maximum fiber available in a single cable, 432 fibers per cable --- 5000-plus fibers, more than the aggregate in the industry --- and we light every one of those fibers up with 32 wavelengths, 32 lambdas, and each laser is flashed off and on ten billion times a second. We could support six of these interactions, and we'd have to charge about 3 billion bucks a month apiece for each one. I have a sales contract in my pocket for anyone who wants to order the service. The point, of course, is that as long as prices drop, we're not even close to satisfying the demand that is inherent in the way we communicate.
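
His "approximate reality" number does check out with back-of-the-envelope arithmetic. The 24-bit color, 30 frames a second, and 2400 dots per inch are his figures; the viewing radius isn't stated in the talk, so the roughly two-foot hemisphere below is an assumption chosen to show how ~15 terabits falls out (the answer scales with the square of that radius).

import math

# Painting a hemisphere around the viewer: 24-bit color, 30 frames/second,
# 2400 dots per inch (all from the talk). The viewing radius is assumed.
radius_in = 24.0        # assumed viewing radius, inches
dpi = 2400              # dots per inch
bits_per_dot = 24       # 24-bit color
frames_per_sec = 30

area_sq_in = 2 * math.pi * radius_in ** 2          # hemisphere surface area
bits_per_sec = area_sq_in * dpi ** 2 * bits_per_dot * frames_per_sec
print(f"~{bits_per_sec / 1e12:.0f} Tbps per direction")  # ~15 Tbps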

What happens when you see that kind of dynamic --- rapidly dropping prices, rapidly exploding demand, in a market-based standards development environment? Well, again, I think we look to the start of silicon economics in the computing industry and ask what happened. At the beginning of that process, if you'd wanted to start a company --- if you'd wanted to be a next-generation computing company --- and you talked to the pundits in the industry, they'd say: simple, look at IBM. You have to be vertically integrated, you have to bend the metal, build the processors, build the operating systems, build the applications, develop all the peripherals, glue it together, with tens of thousands of sales professionals to hold the hands of the customers, because that's what they want. It's apparent. And of course we know today that would have been the worst kind of advice. Under the influence of the dynamics in the industry there is a right model: pick a spot, pick a horizontal market, and be the very best at it. Seek to get 60, 70, 80% of that market. It's impossible to be good at all that's out there. Customers didn't stop wanting one-stop shopping, but they get it from Dell and Gateway and Compaq and IBM. You don't buy a processor and memory and operating system and put them together; you get one-stop shopping on a single bill. But Gateway, Dell, Compaq --- they don't try to own all the assets, own the factories; it would be unthinkable. And yet today you still hear carriers say the right model is a single vertical model: own the trenches, own the conduit, own the fiber and all the applications and the operating systems, own the interface to the customers. And we can already see the alternative developing in the marketplace.

What are the portal companies, the AOLs, the Yahoos, but the early stages of disaggregating the customer interface from the balance of the network? When I was at WorldCom, AOL got out of the little network business they were in --- on purpose --- because they wanted to focus on what they do well. We can already see the local loop disaggregate from the balance of the network via DSL, cable modem, wireless. We can see content and data and applications disaggregate, provided by e-commerce providers, application and service providers, web sites. And that is clearly the direction of the industry. Why do I say clearly? Look where the money flows. Look at the aggregate market caps of the horizontal model versus the vertical; look at the capital flows. Where does the money go? It's going to the disaggregated, horizontal model. Period. It isn't that customers don't want one-stop shopping, which is what's always bandied about. The simple fact is they don't care whether one company owns all the assets or not; they just want the service. And we're going to get that service from large groups of companies. No company is good enough to be everything to everyone everywhere in today's environment. Oh, by the way, I told John I wouldn't have many commercials, but that little red box happens to be what my favorite company is focusing on: switching and transmission of high-bandwidth infrastructure. We don't want 3 or 4% of that market; we want 70, 80, 90%. Somebody's going to get it. It might as well be us.

Okay, but. Here's capital B, capital U, capital T. While I strongly believe the model I've just described to you is accurate, and while I strongly believe the future belongs to multimedia, IP-based multimedia, there is a fact which overrides much of what I've said so far, at least today. While data represents over half the bit flow on global networks, voice is still 92% of the revenues collected by all carriers, ISPs, dominant, next generation --- 92 cents of every dollar is still voice. Why is that? Because voice bits are priced at 14 or 15 times data bits. That's the great arbitrage opportunity the whole industry's focused on. It's not going to go away overnight. It is still gerrymandered by 100-year-old regulatory processes. It's still gerrymandered by the claim that universal service is necessary, without anyone really knowing what it means, without anyone even trying to track the subsidy flows. And that's not going to go away overnight. It will go away, because this process of change I think is irresistible. It's going to take a little time.

Well, how do you deal with that? Our answer, and I think a lot of organizations' answer, is something called softswitch architecture. Now, what do I mean by that? You take the kind of functionality you only get in circuit switches, port it over to a general-purpose computer running a general-purpose operating system --- NT, Linux, Unix --- and get circuit switching onto the kind of performance improvement rates we've seen in the data world. And why hasn't that happened? Well, ask yourself the question, and your answer is as legitimate as mine, because the data world thought that dealing with all the rigamarole in the circuit-switched world is something akin to dealing with Satan himself . . . [To be continued]
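
The 14-or-15-times figure follows directly from the revenue and traffic shares he cites. The 92% voice revenue share is his number; he only says data is "over half the bit flow," so the traffic splits below are assumptions to show where the ratio lands.

# Implied per-bit price ratio, given voice's revenue share and an assumed bit share.
voice_revenue_share = 0.92  # from the talk

for voice_bit_share in (0.50, 0.45, 0.40):  # assumed splits; data is "over half" the bits
    voice_price_per_bit = voice_revenue_share / voice_bit_share
    data_price_per_bit = (1 - voice_revenue_share) / (1 - voice_bit_share)
    ratio = voice_price_per_bit / data_price_per_bit
    print(f"voice = {voice_bit_share:.0%} of bits -> voice bits ~{ratio:.0f}x the price of data bits")

With voice at roughly 45% of the bits and 92% of the revenue, the per-bit price ratio comes out right around 14x.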