

To: Whitebeard who wrote (20097)3/8/2007 7:03:07 PM
From: Frank A. Coluccio
 
Hi stillholding,

I'm not sure that I've ever stated, recently or otherwise, that there is no bandwidth scarcity. What I have often discussed is the phenomenon of shifting bottlenecks that vacillate, pendulum-like and almost predictably, among the last mile, the second mile, the core, and service providers' intentions as manifested by their gatekeeping functions.

These often give the outward appearance of intrinsic bandwidth deficiencies, but their root causes can be, and often are, something else entirely. Sometimes they result from business decisions, such as throttling, or from the offering of differentiated services that deprioritize best-effort flows. Sometimes they are due to the inability of routers, switches, servers, and caches to adequately queue and forward flows. And so on.
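For the technically inclined, here's a toy sketch of that last point (in Python, with an assumed per-tick link budget and made-up traffic classes, not any vendor's actual scheduler): strict-priority forwarding alone can starve best-effort traffic on a link whose raw capacity never changes.

```python
import heapq

LINK_CAPACITY = 10  # packets forwarded per tick (assumed unit)

def forward(ticks):
    """Strict-priority drain: lower priority number is always served first."""
    queue = []
    delivered = {"premium": 0, "best_effort": 0}
    for _ in range(ticks):
        # Arrivals each tick: heavy premium load plus steady best-effort demand.
        for _ in range(9):
            heapq.heappush(queue, (0, "premium"))      # priority 0: served first
        for _ in range(9):
            heapq.heappush(queue, (1, "best_effort"))  # priority 1: served last
        # The link forwards a fixed budget per tick, highest priority first.
        for _ in range(LINK_CAPACITY):
            if not queue:
                break
            _, pkt_class = heapq.heappop(queue)
            delivered[pkt_class] += 1
    return delivered

print(forward(ticks=100))
# -> {'premium': 900, 'best_effort': 100}
# Best-effort throughput collapses although the pipe itself never changed:
# the apparent "scarcity" is a product of scheduling policy, not capacity.
```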

"you say delivery of hi-def is easy via satellite, but not so easy via cable. What about the web? where are the bottlenecks? Download times?"

The Web is a logical abstraction that has nothing to do with bandwidth shortages or abundances. This may seem like a nit, but it's a critical distinction if one wishes to assign weight to the factors that actually make a difference. The underlying Internet, which in turn handles many of the Web's flows, is made up of a multiplicity of independent service providers that are loosely connected, usually in non-application-specific ways.

By the latter I mean that, the degree of connectivity (the amount of bandwidth) that one could assume exists between any two providers in the chain, or within any domain made up of coopering/peering providers, is random. That is, it may be more a function of the business model, or the business health, of the provider than their intentions to build to specific applications' needs. On top of that, there is no one orchestrating the constellations of service providers that make up the Internet that could leverage, or synergize, all of its CDNs (content distribution networks) in an optimal fashion. So, as Akamai may be lying fallow one evening, Limelight might be getting the crap kicked out of it.
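A toy illustration of that stranded-capacity effect (all numbers and names here are illustrative, not measurements): requests go to whichever CDN happens to hold the content, not to whichever has spare capacity, so one network can sit idle while another is overwhelmed.

```python
import random

random.seed(7)
CAPACITY = 1000  # requests per interval each CDN can absorb (assumed)

def one_evening(intervals=5):
    for t in range(intervals):
        # Tonight's popular release happens to live on "limelight".
        load = {"akamai": random.randint(50, 200),
                "limelight": random.randint(1500, 2500)}
        for cdn, reqs in load.items():
            dropped = max(0, reqs - CAPACITY)
            print(f"t={t} {cdn:9s} load={reqs:5d} dropped={dropped}")

one_evening()
# No broker exists to shift the overflow onto Akamai's idle machines;
# each provider's headroom is stranded inside its own business boundary.
```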

Contrast the architectures of the MSOs, RBOCs, and even the satellite providers, which are optimized for application-specific delivery, with those of the general-purpose ISPs that comprise the Internet: structure, marked by predictability, versus chaos, marked by randomness. Each has had its place, each sprouting benefits and disadvantages, and the two will continue to coexist, albeit under continuously changing conditions, into the future. Stated differently, as parts of the Internet assume the roles previously held by the SPs, those parts will be reshaped in ways that are antithetical to the canon of Internet design.

This has been happening for a while now, and it will continue to define the evolutionary path of the 'Net for some time to come. We've seen it most recently, and perhaps most pronouncedly, in the area of VoIP, which began as an application (iptel) that could leverage the Internet, and has turned out instead to be a modification of many parts of the Internet itself, through the addition of new directories, media gateway controllers (MGCs), session border controllers (SBCs), and so on. The same could be said of content distribution networks such as Akamai, through the addition to the Internet of caches and reflectors scattered about the globe.
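To show what those scattered caches buy you, here's a minimal sketch of the edge-caching idea (hypothetical names and sizes; not Akamai's actual design): a small cache placed near users answers repeat requests locally and only falls back to the distant origin on a miss.

```python
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> content, oldest entry first

    def fetch(self, url, origin):
        if url in self.store:
            self.store.move_to_end(url)      # refresh LRU position
            return self.store[url], "HIT"    # served from the edge
        content = origin(url)                # long haul back to the origin
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used
        return content, "MISS"

def origin(url):
    return f"<content of {url}>"

cache = EdgeCache()
for u in ["/news", "/news", "/video", "/news"]:
    _, status = cache.fetch(u, origin)
    print(u, status)
# -> MISS, HIT, MISS, HIT: repeat traffic never crosses the core,
# which is precisely how a CDN reshapes the network it sits on.
```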

FAC