Scott, this is in reply to your posts numbered 690 and 697.
That was some truly enjoyable and inspiring reading you put us on to. I take my hat off to those who've dedicated the time and effort to formulate such an architecture (and I use the term 'architecture,' here, in its more global and painstakingly comprehensive meaning, and not simply the placement of boxes on a block diagram).
When I posted my original statements concerning edge and core dynamics, I made them using notions and precepts drawn from heuristics that only a long list of experiences could explain, but I didn't state what those notions were. In response to your posts, and for the sake of _someone_ playing the devil's part in this discussion (now that I've had the benefit of reading the information you provided), permit me to explain where I see some potential glitches on the screen, at least with regard to the use of this model on the Public Internet.
The NLANR Global Hierarchical Caching model follows the rules established by the old Chinese principle of wu-wei, as one might apply it to the trickling of water in streams, the gravitational effects of pebbles rolling down a mound, rocks falling from a mountaintop, and the slippage of tectonic plates. In other words, it takes into account natural laws that dictate the effects of proximity and spatial relationships. Another way of looking at wu-wei is as "spontaneous" or reflexive action, which translates loosely into "doing nothing, yet accomplishing everything." Hmm.
The hierarchical model shown in the links doesn't merely resemble traditional networking schemes; it is identical to them.
Consider two models: first the caching model, and then that of an ATM network.
First, the NLANR Caching architecture:
Int'l Caching / Nat'l Caching/ Reg'l Caching / Org'l Caching/ user;
then, for the ATM network:
Int'l Core/ National Core / Edge / Enterprise / User.
Each represents an identical ordering; only the labels change.
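To make the parallel concrete, here is a minimal sketch of how a request walks such a tiered hierarchy: a miss at one stratum is referred to the tier above, and the answer is cached on the way back down. The tier names follow the caching column above; the toy dict-based store and the caching-on-return behavior are my own illustrative assumptions, not part of the NLANR specification.

```python
# Toy model of tiered cache lookup: a miss is referred up the
# Org'l -> Reg'l -> Nat'l -> Int'l chain, and hits are copied
# back down so later requests are served closer to the user.

class CacheTier:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # next tier up, or None at the top
        self.store = {}        # URL -> object (toy in-memory store)

    def get(self, url):
        """Return (object, name of tier that answered), or (None, None)."""
        if url in self.store:
            return self.store[url], self.name
        if self.parent is not None:
            obj, who = self.parent.get(url)
            if obj is not None:
                self.store[url] = obj  # cache on the way back down
            return obj, who
        return None, None  # missed at every tier: fetch from the origin

# Build the four-tier hierarchy from the comparison above.
intl = CacheTier("Int'l")
natl = CacheTier("Nat'l", parent=intl)
regl = CacheTier("Reg'l", parent=natl)
orgl = CacheTier("Org'l", parent=regl)

natl.store["http://example.org/page"] = "<html>...</html>"
obj, who = orgl.get("http://example.org/page")
print(who)  # the national tier answers; the org tier now holds a copy
```

Note that this sketch is where the "logical" hierarchy lives; everything I argue below is about whether the "physical" and economic relationships underneath it hold up.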
As I toured the links I was reminded in many ways of numerous similar staggered messaging systems, some of which were store-and-forward, granted, but similar in principle nonetheless. They initially depended on PDP-8 systems to deliver Teletype messages going back to the mid-to-late Sixties; then the IBM Series/1; later, the DEC platforms used in email messaging systems; and the video-on-demand (VOD) architectures that have been contemplated since the mid-Eighties, right up to the current day. Granted, nCube uses a slightly different topological approach, but it still respects the same basic parameters. Some hierarchical memory storage models and some storage area networks (SANs) also come to mind.
Each of these followed, and continues to follow, in one way or another, a logical order as dictated by time and space, or distance. These rules also apply to the NLANR model to a great degree, as far as I can tell. And logic dictates that it too should work, provided that certain constructs are in place and honored with the application of due balance and harmony, held to some level of precision.
But those constructs are increasingly _not_ in place today in all corners of the public Internet. Larger ISPs with ubiquitous coverage may enjoy this kind of evolutionary migration, but mid-sized and smaller ones, the majority of all ISPs, may be left out in the cold, as I explain below.
When speaking in terms of 'nailed-up' nodal relationships, such as the other highly deterministic models I've offered above, there is a constant that can be depended on: the "physical" relationships, which are characterized by tightly controlled time and distance metrics between the various strata of those hierarchies. [Time in this sense is, at least in part, a function of bandwidth, since the fatter the pipe, the less time it takes to complete a task.]
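The bracketed note reduces to simple arithmetic: fetch time is roughly propagation delay plus object size over link bandwidth. The figures below are purely illustrative (a 1 MB object, a T1 versus a DS3, a 50 ms round trip), chosen only to show how much of the "time" metric the pipe's width controls.

```python
# Rough transfer-time arithmetic: time to move an object is
# approximately round-trip delay plus size divided by bandwidth.
# All figures are illustrative, not measurements.

def transfer_time(size_bytes, bandwidth_bps, rtt_s):
    """Seconds to fetch an object over a link, ignoring protocol overhead."""
    return rtt_s + (size_bytes * 8) / bandwidth_bps

# Same 1 MB object over a 1.544 Mb/s T1 vs. a 45 Mb/s DS3, 50 ms RTT:
t1  = transfer_time(1_000_000, 1_544_000, 0.050)   # ~5.2 s
ds3 = transfer_time(1_000_000, 45_000_000, 0.050)  # ~0.23 s
print(round(t1, 2), round(ds3, 2))
```

Distance fixes the delay term; bandwidth dominates everything else, which is why the physical placement of the tiers matters so much.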
The Internet poses new problems here, since 'virtuality' rears its head: while it preserves the same kind of "logical" relationships, in many instances it violates the rules of the "physical" relationships needed to make this work in an economical manner.
This would not present a problem if the Public Internet were still the truly egalitarian and sharing platform it once was, since the more participants contribute to the shared hierarchy, the better it is for the overall model. That would eliminate the differences I am about to cite.
The public Internet that existed in the days of educational and research computing over the 'net is no longer with us, and ISPs are increasingly making their resources proprietary as they rush to differentiate themselves in the marketplace with respect to CoS/QoS and, yes, eventually caching, too. Such differentiation allows them to bundle more services and, in many ways, "lock in" customers. Why would they want to share these competitive advantages for free? They don't.
The upshot is that they make themselves increasingly _un_available to tier-downs without onerous settlement fees, which, of course, the smaller ISPs cannot afford in the first place; otherwise they would already be doing it themselves. Where does that leave the end user of a mid-sized or smaller ISP with regard to access to tiered, or hierarchical, caching architectures?
What I am questioning here is _not_ the ingenuity that goes into constructing such a hierarchical architecture; rather, it is whether it will find universal relevance in the face of some 6,500 ISPs of every shape, size, geographic distribution (some with non-contiguous centers of subscribers), and financial means.
In light of what I've stated here, I stand by my previous arguments as well, as they relate to the economic impact on the many ISPs who would need to deploy "their own" hierarchical caching architectures. A $10,000 server is one thing, but staffing it, along with the other indirect costs that go into an ISP's caching model (as I think you are aware), not the least of which are colo rentals and line costs, adds up to considerable annual sums, times _n_, especially for smaller players.
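The "times _n_" point can be put on the back of an envelope. Every figure below is hypothetical, invented only to show the shape of the sum: the recurring indirect costs dwarf the one-time server price, and they scale with node count.

```python
# Back-of-the-envelope annual cost of an n-node caching deployment.
# All dollar figures are hypothetical placeholders, not quotes.

def annual_cache_cost(nodes, server_price=10_000, server_life_years=3,
                      staff_per_year=60_000, colo_per_year=12_000,
                      lines_per_year=18_000):
    per_node = (server_price / server_life_years  # amortized hardware
                + staff_per_year                  # staffing it
                + colo_per_year                   # colo rental
                + lines_per_year)                 # line costs
    return nodes * per_node

print(annual_cache_cost(1))  # one node: well beyond the $10,000 sticker
print(annual_cache_cost(5))  # times n, the sums become considerable
```

Whatever the real numbers are for a given ISP, the structure of the calculation is the point: the server price is the smallest term in it.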
Therefore, it is not surprising that the NLANR model is being fashioned in a parallel effort with the vBNS under the auspices and support of the NSF. In this instance, since there is a high level of sovereignty in their proposed operating environment, there exists an ideal set of circumstances wherein law and order can be preserved, free of the commercially motivated outside influences that would differentiate it to the point where it resembles an ordinary, run-of-the-mill assemblage of interexchange carriers, with their attendant pricing structures.
I don't 'want' to be correct here with regard to the architectural realities I've cited, but I think that I am. The business case? Well, that's another matter entirely. Or is it?
I look forward to hearing your reply, demonstrating how the caching model will work in a pluralistic environment such as we now have in place. Or perhaps you will reply that the Internet has indeed gone the way of full commercialization, relinquishing the model of peering and cooperation that got it this far, and that it will be purely up to Darwinian factors from here on out. It's certainly looking that way more and more to me. And maybe that ain't all bad.
Best Regards, Frank Coluccio