re: cache flows
G Howe, good to see you here.
I don't know if it is entirely as true as I have made it out to be in many cases. I need to explore some of the finer points before becoming convinced one way or the other, and to discern more clearly the options available now and down the road. But it certainly is clear to me that for every x percent of improved response-time efficiency, some multiple of x in bandwidth demand is created on the greater network.
If the amount of data being retrieved by users from the edges is sufficient, then it will have been worth it, and the greater benefit will be realized by all of the 'net's users, since overall backbone congestion will prove to be less. If the edges don't receive the draw, on the other hand, then... One needs to examine this and determine where the inflection point takes place. The type of application and its delivery mandates will make a difference, as well.
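To put a rough frame around that inflection point, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption of mine, not a measurement: backbone load with caching is roughly the cost of filling every edge site with the catalog, offset by whatever delivery traffic those caches absorb.

```python
# Back-of-the-envelope model of the caching inflection point.
# All figures are illustrative assumptions, not measurements.

def backbone_traffic_gb(catalog_gb, edge_sites, requests, avg_object_gb, hit_ratio):
    """Rough backbone load with edge caches in place."""
    fill_traffic = catalog_gb * edge_sites                      # pushing the catalog to every edge site
    miss_traffic = requests * avg_object_gb * (1 - hit_ratio)   # requests still served across the backbone
    return fill_traffic + miss_traffic

def backbone_traffic_no_cache_gb(requests, avg_object_gb):
    """Backbone load if every request traverses the core."""
    return requests * avg_object_gb

if __name__ == "__main__":
    catalog, sites, obj = 50.0, 1_000, 0.005   # 50 GB catalog, 1,000 edge sites, 5 MB objects (assumed)
    for requests in (1_000_000, 10_000_000, 100_000_000):
        cached = backbone_traffic_gb(catalog, sites, requests, obj, hit_ratio=0.9)
        direct = backbone_traffic_no_cache_gb(requests, obj)
        print(f"{requests:>11,} requests: cached {cached:>10,.0f} GB vs direct {direct:>10,.0f} GB")
```

With these made-up figures the crossover only arrives once retrieval volume swamps the replication cost, which is the "draw" question in a nutshell.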
If the application and its payload type are constant (or relatively constant, like a Britannica, say), then I see a case for pre-loading, preparing servers once, and only adding and delta'ing information over the longer term. FedEx'ing CDs [don't laugh..] to server sites could further mitigate the bandwidth burden caused by updates in many cases, if the volumes and file sizes warranted it. The delta form of loading (wherein only changes are added over time) would have the effect of further minimizing traffic contribution, in comparison to other forms of data uploads which might require an entire refresh for each transaction or each presentation.
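A minimal sketch of why the delta form of loading matters, again with assumed figures only: compare re-sending the whole payload to every server on each refresh against sending one initial load and then only the changed fraction.

```python
# Compare full-refresh distribution against delta-only distribution.
# Figures are assumptions for illustration, not measured values.

def full_refresh_gb(payload_gb, servers, refreshes):
    """Every refresh re-sends the entire payload to every server."""
    return payload_gb * servers * refreshes

def delta_refresh_gb(payload_gb, servers, refreshes, change_fraction):
    """One initial full load, then only the changed portion thereafter."""
    initial_load = payload_gb * servers
    updates = payload_gb * change_fraction * servers * refreshes
    return initial_load + updates

if __name__ == "__main__":
    payload, servers, refreshes = 4.0, 500, 52   # 4 GB reference work, 500 sites, weekly updates for a year
    print("full refresh :", full_refresh_gb(payload, servers, refreshes), "GB")
    print("delta (2%/wk):", delta_refresh_gb(payload, servers, refreshes, 0.02), "GB")
```

Under these assumptions the delta approach moves a small fraction of what the full-refresh approach does, which is the traffic contribution I had in mind.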
Near-real-time canned multimedia, on the other hand, such as one-time presentations, analysts' CCs and VCs, sporting events and concerts, distance learning, infomercials, Fortune 100 shareholder meetings, etc., would have a profound effect on total backbone consumption under this model, if it extends to hundreds or thousands of data centers which are first proprietary to the service providers, and then to the exponentially larger number of enterprise servers situated at users' own facilities.
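To give a rough sense of scale (figures assumed purely for illustration): pre-positioning a single canned event consumes backbone capacity in proportion to the replica count before a single viewer ever requests it.

```python
# Backbone consumption for pre-positioning one canned event.
# All figures are illustrative assumptions, not measurements.
event_gb = 0.7               # assumed size of one encoded event
provider_centers = 1_000     # assumed service-provider data centers
enterprise_servers = 50_000  # assumed enterprise servers at user facilities

print(f"provider tier only  : {event_gb * provider_centers:,.0f} GB per event")
print(f"plus enterprise tier: {event_gb * (provider_centers + enterprise_servers):,.0f} GB per event")
```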
At some point the bypass routes, the ones taking advantage of the lulls in traffic, run into congestion problems of their own, which necessitates the creation of private links (private lines once again) and takes us further away from an all-IP concept. I'm not saying this is all bad, but it does add cost and additional points of failure, along with other architectural concerns, since we now get away from the underlying and unifying qualities of IP. I can see where this could lead, over time, to stranded assets which are not fully leveraged.
The idea of synthesizing payloads at the edge, after templates are already in place, also portends some impact, but hardly as much as the canned events which are played back on demand.
There is no doubt in my mind that the Internet can withstand this at the present time. Refreshes, in fact, need not take place at actual application speeds; they could instead take place at trickle rates, taking advantage of available holes in traffic profiles (between the peaks).
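A minimal sketch of that trickle idea, with the quiet window, the rate cap, and the send_chunk transport all assumed for illustration: the refresh is chunked and dribbled out only during off-peak hours, rather than at full application speed.

```python
# Trickle-rate refresh sketch: send update chunks only inside an assumed
# off-peak window, paced to a modest rate cap.
import time
from datetime import datetime

OFF_PEAK_HOURS = range(1, 5)   # 01:00-04:59 local time, assumed quiet period
TRICKLE_RATE_KBPS = 128        # deliberately modest cap, assumed

def in_off_peak(now=None):
    now = now or datetime.now()
    return now.hour in OFF_PEAK_HOURS

def trickle_send(chunks, send_chunk):
    """Dribble byte chunks out at the capped rate, pausing outside the window."""
    for chunk in chunks:
        while not in_off_peak():
            time.sleep(300)                  # wait for the next quiet window
        send_chunk(chunk)                    # caller-supplied transport (hypothetical)
        time.sleep(len(chunk) * 8 / (TRICKLE_RATE_KBPS * 1000))  # pace to the cap
```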
My point, however, was this: How well does this model scale when it is adopted by more than a handful of large content providers? Say, when it reaches status quo proportions? More food for thought. Another question that occurs to me is, When new vistas of bandwidth make themselves available through breakthroughs in lambda liberation, does it really pay to convolute flows when more immediate and affordable routes are available? This is not the case yet, but at some point one must wonder where that point of inflection takes place.
Regards, Frank Coluccio