Technology Stocks : Cisco Systems, Inc. (CSCO)

To: David B. Logan who wrote (22931) - 2/21/1999 9:56:00 PM
From: Frank A. Coluccio
 
All: This is OT and long. Brainstorming, mostly. You've been warned.
=================================================

David,

Thanks for taking the time to respond, and for the finer points you made in response to my 'stupid' observations. Heck, that sounds downright self-deprecating...

Let me preface by saying that I only interject topics like this here in the CSCO thread on weekends. During the trading week I've noticed that you folks have enough on your hands with clutter and certain annoyances.

As you've doubtless surmised by now, I was attempting to stimulate discussion on a couple of topics that are blurring my vision. I don't profess to be a seer, or have any of the answers to these questions. Only some observations and opinions, and a lot of questions of my own.

As we speak, there are qualities and properties evolving about the Internet that are emerging in both a "real sense," guided by hard-and-fast networking principles and parameters, and in another sense, they are being reflected as a change in the lexicon of the discipline. These two movements are not necessarily synchronized, however, as I think Curtis indirectly was suggesting in his message.

Likewise, where there has up until recently been a fairly consistent fabric that comprised the Internet for many different purposes, today we are seeing divergence in multiple ways, and the emergence of multiple distinct dimensions of the 'net, not the least of which is defined by closed user groups, or communities of interest (CoI). All the while, the least common denominator for these continues to be its basic protocol, IP.

Consider that there are now corporates who access the core directly, some by way of VPN, accessing partitions on routers and lines through dedicated SONET ports. There is a Next Gen Internet in the works, as there is an Internet2 for research and education. Note that the Next Gen Internet and Internet2 are not one and the same. End users are now jumping on, and staying on, through "always on" capabilities made possible by DSL and cable modem, while their peers in less fortunate circumstances are still using dial-up networking, DUN-ing it.

Overall, the net has been changing in these ways at such a pace, both in size and feature attributes, and now in strata determined by CoI, that there has been more delta in the past three or four years, constantly accompanied by massive growth, than there was during the entirety of the preceding 25 years. The comparison may be even starker than that. So much for setting the stage.
-------------

You note, quite correctly,

>> the biggest problem that all of these startups have is not related to technology, but install base and account control. Next would be market fragmentation by so many companies chasing the same dream...<<

I agree that this is a major hurdle for them, and we'll eventually see consolidation resulting from it. But this doesn't minimize or detract from what I was saying regarding the changing landscape - that is, the collapsing core, where some backbone providers and larger ISPs are concerned. These are, after all, the candidates who will be seeking out terabits first. In many ways this represents a new playing field, if it actually comes about, and the chance for startup router vendors to show their stuff in a make-or-break way. It is in this light that I meant that Juniper could make some hay if they have the goods.

>> Are you implying that Juniper's router, et al. are making network edge-to-edge path forwarding decisions, and not relying on upstream/border routers to aggregate paths?<<

I'm saying that they can be made to work in an optimal fashion this way, as you've described. If the tendency is to reduce hop count through collapsing the core, I don't think that they will have a choice to do otherwise. The big word here is 'if,' and that is going to be at the discretion of the individual larger SPs.

It was only recently that NAPs were constrained in both size and number (five or six?) and throughput capacity was on the order of a few T3s and multiple T1s in each of those NAPs, used for forming the mesh between them all.

The only way to get onto the backbone was through a series of sub-to-POP-to-border-to-core hops, and back out again in reverse order. There was no question at that time (as is the case today still, to a very large extent) that this was the only option. A number of factors have changed this, however. Today there are ever increasing numbers of meet points, both public and private, and bandwidth has reached surreal proportions, comparatively speaking - with no need to look back in time before 1997 for such a comparison:

Two years ago MCI and Sprint came out with a set of releases that they were upgrading their core backbone links from OC-3 to OC-12, and the other Tier Ones followed suit directly on their heels. Today, in contrast, an OC-12 is considered but a mere virtual tributary on the much larger OC-48s and OC-192s (particularly on QWST's facilities) with multiple streams that occupy sometimes dozens of discrete, yet coupled-at-the-box, lambdas.
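To put rough numbers on those upgrades (my own back-of-envelope arithmetic, not from any carrier release): SONET line rates scale linearly at 51.84 Mbit/s per STS-1, so the jump from OC-3 to OC-192 is a 64-fold increase per wavelength, before you even count multiple lambdas.

```python
# SONET line rates scale linearly with the OC level, at 51.84 Mbit/s per
# STS-1. These are the standard SONET figures; the function just does the
# multiplication for the levels mentioned above.
STS1_MBPS = 51.84

def oc_rate_mbps(n: int) -> float:
    """Gross line rate of an OC-n SONET signal, in Mbit/s."""
    return STS1_MBPS * n

for n in (3, 12, 48, 192):
    print(f"OC-{n:>3}: {oc_rate_mbps(n):9.2f} Mbit/s")
```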

[This all bodes very well for the larger SPs. But countering this increase of bandwidth, where the smaller players are concerned, is the fact that the Tier Ones appear to be getting less permissive about entry, and more demanding in terms of settlements. More on the smaller players in a bit.]

Today there are ever-increasing numbers of private NAPs, neutral meet points, colo hotels, and other peering locations of varying profiles. Some real, some virtual. These make it easier for an ISP or backbone provider with presence in multiple cities and countries to bring their customers directly onto their own dedicated backbone links, cross-country, or across the globe, without crossing over to others' facilities due to constraints on routes. When they have traffic whose end-point pairs mimic the locations of theirs and their peering partners' in global terms, why would they want to hand off to anyone else? In this sense, I see a great deal of merit to the dumb shotgun approach, provided my stated caveats above (size and ubiquity) are met.

In this following statement of yours, I sense that we may be, on some level, on the same track:

>>Granted the "edge" here is not really an edge, given where these products are installed.<<

Again, assuming that we are discussing Tier One ISPs and the largest of the backbone providers, this trend will continue as increasing numbers of fiber-optic lambdas are fired up each day, and as the number of colo hotels outfitted for private peering activities increases, IMO.

However, only a portion of any given provider's customer traffic flows can in reality take place this way, intrinsically, unless they have achieved total, not near, ubiquity. Otherwise, some percentage of their flows will always be diverted to others' lines and routers. I've really oversimplified this last point. I hope that I'm getting my point across, even if I may be off the mark by some measure.
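As a toy illustration of that caveat (cities and footprint invented for the example): only flows with both endpoints inside a provider's own footprint can stay entirely on-net; everything else has to cross someone else's facilities.

```python
from itertools import product

# Hypothetical city sets, invented purely for illustration.
all_cities = {"NYC", "CHI", "SF", "LON", "TOK", "FRA"}
footprint  = {"NYC", "CHI", "SF", "LON"}   # where the provider has presence

# All ordered (src, dst) pairs with distinct endpoints.
pairs = [(a, b) for a, b in product(all_cities, repeat=2) if a != b]

# A flow can be kept on-net only when BOTH endpoints are in-footprint.
on_net = [p for p in pairs if p[0] in footprint and p[1] in footprint]

print(f"{len(on_net)} of {len(pairs)} flows stay on-net "
      f"({100 * len(on_net) / len(pairs):.0f}%)")
```

Even a footprint covering four of six cities leaves a majority of flows dependent on others' lines and routers, which is the point about ubiquity.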

As I stated or implied in the post to which you replied, there is increased attention being paid to stupid networks and the enhanced role of the edge. David Isenberg has popularized this theme to the point that the terms are now identified with him, as a result of two of his papers which treated the subject directly.

These papers can be found at isen.com :

"The Dawn of the Stupid Network"
isen.com , and

"Rise of the Stupid Network"
isen.com

Since those writings, he's been publishing a monthly series of articles in America's Networks Magazine at:

americasnetwork.com

While the term "stupid networking" connotes, on the surface, a gross simplification of networking activities, in reality it is being introduced at a time in the 'net's history when circumstances are becoming more complex, if anything, than they have ever been before. I attribute this to a plethora of causes, but to keep the focus on technical parameters, I assign it to the increased attention being paid to differentiated classes of service (CoS), and the need to guarantee service levels through quality of service (QoS) gradients. Multicasting and new directory services RFCs also play into this, as do security, PSTN convergence, and so on, as others have pointed out here as well.
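For the curious, a minimal sketch of the DiffServ side of this: the 6-bit DSCP value sits in the upper bits of the IP TOS byte, and per-hop behaviors such as EF and AF are keyed off it. The code-point values below are the standard ones from the DiffServ RFCs; the tiny mapping table itself is just illustrative.

```python
# Standard DSCP code points (RFC 2474 / RFC 2597); the selection here is
# illustrative, not an exhaustive table.
DSCP = {
    "EF":   0b101110,  # Expedited Forwarding - low-latency traffic (voice)
    "AF41": 0b100010,  # Assured Forwarding class 4, low drop precedence
    "BE":   0b000000,  # Best Effort - the default
}

def tos_byte(dscp: int) -> int:
    """Place a 6-bit DSCP value into the 8-bit TOS/Traffic Class byte."""
    return (dscp & 0x3F) << 2

print(hex(tos_byte(DSCP["EF"])))   # EF -> 0xb8
```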

Cumulatively, these are in response, to an overwhelming degree, to greater expectations from users for the delivery of time-dependent services, such as voice and real-time video, and guaranteed sub-second response times for mission-critical applications.

The way I see it (and this is still up for interpretation as far as I'm concerned, until the working groups legitimize some of these conventions of the future through acceptance and stating so), "stupid" here implies the "shunting across distances, from one edge to another edge," thus obviating many of the internal hop points that might otherwise be encountered en route, including the elimination of many of the constructs we've heretofore considered the "intelligent" core. But I agree with Curtis that intelligence doesn't go away entirely. The last example of stupidity I used here can only apply to "express" traffic. Right? Or not?

The shunt may alternately include a number of dumb hop points - a dumb ole core, if you will - similarly equipped with dumb forwarding capabilities, with specific instructions and addresses resolved ultimately in the edge devices and in the end points. The exceptions that I see here will have to do with multicasting-like attributes, and certain newer services whose endpoints are non-determined by any particular provider's capabilities.
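Here's a toy model of the contrast I'm drawing (entirely my own construction, nothing standardized about it): in the traditional path every flow climbs up through POP and border routers into the core and back down, whereas with the shunt the ingress edge resolves the egress edge directly and the interior is just a dumb span.

```python
# Toy comparison of hop counts; node names are hypothetical.
def traditional_path(src: str, dst: str) -> list:
    # sub -> POP -> border -> core -> border -> POP -> sub
    return [src, "POP-A", "border-A", "core", "border-B", "POP-B", dst]

def shunt_path(src: str, dst: str) -> list:
    # The ingress edge resolves the egress edge directly; the span between
    # them is a single dumb forwarding hop.
    return [src, "edge-A", "edge-B", dst]

trad = traditional_path("alice", "bob")
fast = shunt_path("alice", "bob")
print(f"{len(trad) - 2} interior hops traditionally, "
      f"{len(fast) - 2} with the shunt")
```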

Again, these imagined examples of mine are my take on what may become the norm over time for Top Tier operators. The smaller guys lack the underpinnings to effect these capabilities on their own, and will probably get squeezed out or acquired as a result. Routers are not the only things that will be exposed to consolidation in this space. This implementation of stupidity on the part of the larger players, IMO, will present the biggest Darwinian criterion facing the smaller ISPs in the next two years, if other (financial) burdens don't get to them first.

It is this shunting that creates a new kind of core, in effect, that routers such as Juniper's "core routers" can exploit, if adapted, even though the transiting path ignores many of the traditional interior routines and machinations which are commonly associated with yesterday's and today's core. If we don't want to call this new generic path a core, then "span" will suffice equally well.

>>But, the core of Internet still heavily relies on distributed routing and intelligence<<

I'll grant you that it still "does," but will it to the same extent in the future, and for the same proportion of flows that it does today?

Recall my caveat above, which stated that some portion of the heavy flows would always be diverted when the end-point pairs didn't mimic the cities where the provider or its partners were located.

When they employ the shunt, all of the next gen tera-bitters will use lower-layer protocols, with the optimal ones porting IP directly over sub-Layer One (or via very thin Layers One and Two, before mapping to a lower sublayer of Layer One). These will include SONET and ATM for an interim period, and perhaps remain that way in those situations where they become embedded or legacized. With evolution, the multi-terabit shunts will leapfrog these older protocols and go directly to some dimension at or below/between lambda/s.

>>What specifically are you referring to? DiffServ or MPLS for QoS? RSVP? RTP? <<

I don't know about RSVP anymore in the larger context (maybe on LANs), due to the many setbacks and delays it's experienced on the larger net, but yes... I'm referring to RTP/RTCP, diff-serv/int-serv, iptel, multicasting, etc., while they are needed on the backbones, in any event. Will they be a requirement if the shunt model becomes ubiquitous for any given player's fabric or topology? I don't know. Will the stupid net dictate that many of these functions be simply reflected back at strategically selected edge devices, backhauled, so to speak, to their ultimate destinations like a ball in a pinball machine? Don't know that either.

Maybe you or someone else here would be kind enough to set me straight on some of these issues. It's been fun, in any event.

Best Regards, Frank Coluccio