Technology Stocks : Internap Network Services Corporation

To: Alohal who wrote (282), 11/2/2000 9:05:01 PM
From: neverenough
 
Saving the Public Internet
by Scott Mace
Boardwatch Magazine

The history of networking, and computing itself, is rife with tension between faster, cheaper, better (pick any two). On the microprocessor level, I’ll go out on a limb here and speculate that our chips have gotten much faster and cheaper than we need them to be. But just because a CPU is running at 1 GHz doesn’t mean we have a better experience, especially on the Internet.

Similarly, it now appears that we already have enough raw, long-haul fiber-optic network capacity in the ground to last us a few years (which hasn’t stopped what appears to be a long-term assault on our streets as even more trenches are dug for more fiber). And the speeds at which we can transit data along those fibers are mind-boggling.

So if we have all the CPU power we could use, and all the fiber we could use, why are public Internet service, throughput, reliability and latency so terrible?

For more than a year, answers have been sought by the members of the Quality of Service Forum, managed by my company, Stardust.com. Through a series of impressive interoperability tests, the QoS Forum (www.qosforum.com) has put technological answers proposed by the IETF through their paces at our iBAND events. But here, too, technology hasn’t proven to be enough — so far. Just because we can prioritize public Internet packets doesn’t mean it’s being done in the public Internet. (Packet prioritization over private networks and bandwidth-starved WANs has been a smashing success story, however.)
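
To make "prioritize packets" concrete, here is a minimal sketch (in Python, with a placeholder address and port) of how an application can ask for DiffServ treatment by marking its traffic with a DSCP value. Whether any network along the path honors the marking is exactly the open question discussed above.

```python
# Minimal sketch: marking outbound UDP packets with a DiffServ code point (DSCP),
# the kind of per-packet prioritization the IETF protocols discussed here rely on.
# Whether the marking is honored depends entirely on the networks in the path.
import socket

DSCP_EF = 46                 # "Expedited Forwarding" class, typically used for voice/video
TOS_VALUE = DSCP_EF << 2     # DSCP occupies the upper six bits of the old IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# IP_TOS is exposed on Linux and BSD; other platforms may ignore or reject it.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))   # placeholder destination
```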

The problem with the public Internet is the way it’s put together. Readers of this publication probably need no primer on the notion of peering points, places where one ISP’s traffic is dumped (often unceremoniously) onto the destination network.

To call this a "best effort" Internet is a misnomer. It’s often "worst effort," with no guarantee that the traffic will be treated well once it is dumped. Why? Because no money changes hands between the two ISPs. There is a financial disincentive to treat those dumped packets well. It’s amazing to me that they get delivered at all. And sometimes, as with this spring’s denial-of-service attacks, they don’t (more on that later).

PEERING INTO THE DARKNESS

These peering points, dozens of them between the Sprints, the UUNETs, the AT&Ts and a variety of lesser lights, represent roadblocks to progress. No one knows exactly how these peering points work, because the contracts behind them are heavily guarded by nondisclosure agreements. But it’s clear what effect they have: They are the primary element preventing widespread public adoption of the protocols championed by the QoS Forum and others, which would make the Internet a network of services instead of merely a network of networks.

So the Internet is continually looking at an approaching fork in the road. One direction it can take is to evolve further away from a public resource to a series of private domains — the "walled gardens" spoken of so often in these pages.

Today, we see a proliferation of new private networks designed to move packets with loving care from one edge of the Internet to another. Everyone from start-ups, such as Akamai, to traditional ISPs sees dollars — and captive customers — in such strategies.

As long as your long-haul traffic jumps directly from uncongested LANs onto, say, Akamai’s network and rides it all the way to the customer’s Akamaized ISP, life is good. Certainly it has been good for Akamai. More ominously, a private network raises disturbing questions about whether almost all content will remain reachable from all networks, which is the greatest strength of the Internet today.

COMING INTO VIEW

Of course, it wouldn’t be capitalism if a rival to Akamai didn’t appear, and one has: Content Bridge, backed by Adero, Inktomi and customers such as America Online. As of this writing, Cisco hasn’t really taken one side or the other, but I would have to say the lines of the walled gardens are becoming more distinct.

Clearly, the Akamais of the world have filled a need the traditional best-effort Internet wasn’t filling, and they’ve found more than enough content providers to pay them big bucks to give special treatment (including caching) to their content.

But mass content delivery is only one aspect of the future Net. What about messaging, videoconferencing and the dozens of other applications that don’t fit into the Akamai-style bucket? Are we headed back to a CompuServe-type private system for those problems? Will the public Internet split apart or become irrelevant?

Not if Tony Naughtin has anything to say about it.

BREAKING DOWN THE WALLS

Tony Naughtin has built a truly remarkable business called InterNAP (www.internap.com), which, for the first time, uses microprocessing power to try to untangle the peering problem and evolve the public Internet. And so far, it seems to be paying off for all parties involved.

In case you think the peering problem will solve itself, ask yourself: Why is it that, according to Naughtin, not one of the top 12 or 14 major global Internet service providers is making money in that business? Fortunately, most of them have other businesses from which to generate revenue.

But how long would you want to stay in a money-losing line of business? Similarly, if the answer is simply to light the 93 percent of the world’s fiber that’s sitting dark, why does it remain dark? Could it be that simply flooding the market with bandwidth isn’t enough, that ISPs need to be able to make money and not simply watch bandwidth prices drop through the floor? You know the answer to that one.

I don’t believe Naughtin’s company will be the only one to solve the peering problem, but it is the first. It has built the first Internet backbone with no long-haul fiber optics. It’s a set of private network access points that connect 11 (at this writing) Tier 1 backbones carrying IP traffic.

Obviously, those 11 backbone providers, and their customers (many of you), are already doing all they can to increase long-haul bandwidth capacity, and building last-mile capacity out to customers. A variety of other service providers are working with both ISPs and ASPs to optimize server and router performance.

All this is well and good. But InterNAP observed that, as backbone providers tried to peer with each other, an interesting thing happened: There were diminishing returns when these providers tried to solve their peering performance problems by simply adding more peering interfaces. Chalk it up to the way route advertisements work. According to Naughtin, "You could have 20 interfaces between Sprint and UUNET and the fact is you’ll see that about six of them handle about 80 percent of the traffic."
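
As a toy illustration of why that happens (my own back-of-the-envelope model, not a description of InterNAP’s measurements): BGP installs a single best path per prefix, and traffic per destination is heavily skewed, so a few interfaces end up carrying most of the bytes no matter how many are added.

```python
# Toy illustration, not InterNAP's method: why adding peering interfaces shows
# diminishing returns. Each prefix gets exactly one best exit (no load balancing
# by default), and per-destination traffic volume is heavy-tailed, so a handful
# of interfaces carry most of the traffic. All numbers here are made up.
import random
import zlib
from collections import Counter

random.seed(1)
interfaces = [f"interface-{i}" for i in range(20)]                # 20 peering interfaces
prefixes = {f"prefix-{i}": random.paretovariate(1.2) for i in range(500)}  # skewed volumes

# Stand-in for deterministic BGP tie-breaking: one exit per prefix.
best_exit = {p: interfaces[zlib.crc32(p.encode()) % len(interfaces)] for p in prefixes}

load = Counter()
for prefix, volume in prefixes.items():
    load[best_exit[prefix]] += volume

total = sum(load.values())
top_six = sum(v for _, v in load.most_common(6))
print(f"6 of {len(interfaces)} interfaces carry {top_six / total:.0%} of the traffic")
```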

InterNAP improves service between networks through peering avoidance. As of June, the company had connections to all 11 major Internet backbones at OC-3 or multiple DS-3 speeds, with service level agreements and, more importantly, full global route advertisements through each of these IP transit connections.

There are two key aspects to the intelligence InterNAP adds. First, it uses advanced routing techniques such as inbound BGP local preference, which manipulates route advertisements within the connected backbones to keep traffic on those backbones all the way to the destination PNAP.
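
For readers who want the general mechanism spelled out, here is a minimal sketch of how local preference steers BGP path selection. The routes, local-pref values and next hops are made up for illustration, and this is not InterNAP’s actual configuration (AS 1239 is Sprint’s, AS 701 is UUNET’s).

```python
# Minimal sketch of the general BGP mechanism: local preference is compared
# before AS-path length, so tagging routes learned directly from a backbone
# with a higher local-pref keeps traffic on that backbone toward the destination.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    next_hop: str
    local_pref: int      # higher wins
    as_path: tuple       # shorter wins only if local_pref ties

def best_path(candidates):
    # Simplified BGP decision process: highest local-pref, then shortest AS path.
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

routes = [
    Route("198.51.100.0/24", "sprint-direct",      local_pref=200, as_path=("1239",)),
    Route("198.51.100.0/24", "via-public-peering", local_pref=100, as_path=("701", "1239")),
]
print(best_path(routes).next_hop)   # "sprint-direct": traffic stays on the connected backbone
```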

The other half of the intelligence is how InterNAP keeps those route advertisements current. InterNAP’s Red Hat Linux-based software, known as the Assimilator, takes continual snapshots of all global routes advertised to InterNAP by each of the connected networks. Assimilator then determines where traffic needs to go on an automated, dynamic basis (InterNAP can also broadcast updates to all the routing tables inside each of its PNAPs every few milliseconds).
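
In highly simplified form, the idea looks something like this (a sketch of the concept, not InterNAP’s software): keep a fresh snapshot of which prefixes each connected backbone advertises, and hand traffic for a prefix only to a backbone that is paid to deliver it.

```python
# Highly simplified sketch of the snapshot-and-select idea, not Assimilator itself.
from typing import Dict, Set

def build_forwarding_map(advertisements: Dict[str, Set[str]]) -> Dict[str, str]:
    """advertisements maps backbone name -> prefixes it currently advertises to us."""
    forwarding: Dict[str, str] = {}
    for backbone, prefixes in advertisements.items():
        for prefix in prefixes:
            # Real selection would also weigh policy and measured performance;
            # here we simply record one advertising backbone per prefix.
            forwarding.setdefault(prefix, backbone)
    return forwarding

snapshot = {
    "Sprint": {"198.51.100.0/24", "203.0.113.0/24"},
    "UUNET":  {"192.0.2.0/24"},
}
fib = build_forwarding_map(snapshot)
print(fib["192.0.2.0/24"])   # traffic for this prefix is handed to UUNET
```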

InterNAP’s network customers represent 95 percent of all destinations in the world, Naughtin says. The company pushes at a given network, such as Sprint, only the traffic that is intended for that network’s own paying customers. "Guess what Sprint gets for the first time — it gets economic fulfillment, and manageable, predictable economics," he claims.

More importantly for customers, geographically local Internet traffic stays local. A significant and growing portion of all Internet traffic is destined for local addresses. I’ve always been mystified to learn that local traffic often heads all the way across the country and back. Latency and response time in interactive applications aren’t helped when this happens. InterNAP is the first company to give all application services — not just mass content delivery — a way around this "feature" of the Internet.

Since InterNAP provides a virtual Internet backbone, Naughtin calls it a virtual service provider, which, to me, is just another way of saying it provides service, not merely connectivity. Part of Naughtin’s definition of a VSP is that it provides some proprietary technology.

I’m not certain this is a bad thing, as long as InterNAP doesn’t abuse its position to prevent some large ISPs from participating in its network. It seems like the Switzerland of VSPs to me. It’s not trying to sell voice services to me (like some Tier 1 providers) or TV services, or any of the other things that color the Internet offerings of many providers. To me, InterNAP is the purest IP play today, and making plenty of money at it.

RECOVERING FROM DENIAL

There’s one more quite timely, compelling reason for an InterNAP to exist. When the denial-of-service attacks hit big time this past spring, InterNAP customers, such as Amazon.com, were some of the quickest to recover.

Naughtin explains: "From our NOC, we immediately saw an extraordinarily big jump in ICMP traffic going to Amazon’s destination, which is connected through us in two locations nationally. Within a couple of minutes, we generated a table that identified over 400 servers out there on the public Internet that were attacking Amazon. Then we engaged in multi-layer filtering on all those routes on the connectivity Amazon has into [our] PNAPs. Then we took a step that no peering-based network can take. We called up all 11 of those network providers and said, ‘We just mailed to you a routing table. We want you to filter out all those server routes in that table immediately.’ We stamped out Amazon’s DoS attack in 19 minutes, not four and a half hours like Yahoo! was offline or two-plus hours like E*Trade was offline."
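
The detection step he describes can be sketched in a few lines (a rough illustration with made-up thresholds and flow data, not InterNAP’s NOC tooling): count ICMP packets per source toward the protected destination and emit a filter list that can be applied locally or handed to upstream networks.

```python
# Rough sketch, not InterNAP's tooling: flag sources sending an abnormal volume
# of ICMP toward a protected destination and produce a filter list. Threshold
# and flow records below are invented for illustration.
from collections import Counter

ICMP_PACKETS_PER_SOURCE = 1_000     # made-up threshold

def suspected_attackers(flow_records, victim):
    """flow_records: iterable of (src_ip, dst_ip, protocol, packet_count) tuples."""
    per_source = Counter()
    for src, dst, proto, packets in flow_records:
        if dst == victim and proto == "icmp":
            per_source[src] += packets
    return sorted(src for src, packets in per_source.items()
                  if packets > ICMP_PACKETS_PER_SOURCE)

sample_flows = [
    ("198.51.100.7", "203.0.113.50", "icmp", 5_000),   # abnormal ICMP volume
    ("192.0.2.9",    "203.0.113.50", "tcp",    120),   # ordinary traffic
]
print(suspected_attackers(sample_flows, victim="203.0.113.50"))
```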

That kind of ability is gaining InterNAP many Amazon-class customers, as well as CDN providers such as Akamai and cutting-edge Web infrastructure builders such as Loudcloud. How much would it take to evolve such a VSP into providing the truly world-class quality of service for which the QoS Forum and others have been striving?

If InterNAP wanted to introduce IETF-standard services (such as QoS, or multicast) on top of its existing services, it might just take off, since, in some sense, the "core" of the Internet would now be InterNAP, not several dozen dysfunctional peering points.

I really hope InterNAP’s patent-protected story is replicated by others who come up with equally clever ways to route traffic and assure quality of service. Maybe it would be technologically confusing to have two InterNAPs out there. I don’t know. For now, it’s good to know there is at least one. It might not forestall the danger walled gardens pose, but what’s clear to me is that without such answers, the public Internet as we know it is toast.

ispworld.com