To: Thomas who wrote (4680)  12/19/2001 5:01:22 PM
From: Frank A. Coluccio
 
Hello Thomas, I'm glad you brought up the congestion factor, since I didn't want to be the one "again" to douse anyone's flame ;) You may have read something about a glut of capacity in the core. Well, tapping that core bandwidth "glut" still represents a very high "monthly recurring" cost component for ISPs, especially those smaller, more aggressive ISPs who'd be considered greenfielders in the context of wireless or fttx frontiering, if they wish to deliver on the promise of providing access to abundant amounts of bandwidth across the net. Soon they will find that a virtual bottleneck shifts back to the WAN, and not the access network itself, once they've alleviated that problem with rooftop or fiber-to-the-home alternatives. Only this time the virtual bottleneck re-emerges for reasons having to do with bandwidth unit costs to the cloud, as opposed to the mere availability of facilities in the air or street. That's why I've termed it a "virtual" bottleneck, and not a physical one, per se.

[edit: a late thought: Already, we've seen one FTTH provider in Sacramento that is capable of delivering (and does in fact deliver, as an aggregate level for all bundled services) 1 Gb/s to each home served. Yet they limit end users of the "consumer grade" Internet access service to 1 Mb/s, both ways, thus reducing the potential load that would otherwise overwhelm their back-end T1s to the cloud, where they very likely have reserved a minimum level of just-in-time bandwidth to accommodate as-needed capacity.]
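
To make that concrete, here's a rough back-of-the-envelope sketch in Python. The subscriber count, concurrency factor, and rates are my own assumptions for illustration, not the Sacramento provider's actual figures; the point is only that the 1 Mb/s cap bounds what has to be reserved toward the cloud.

import math

T1_MBPS = 1.544        # DS1 payload rate
subscribers = 200      # assumed number of homes served
cap_mbps = 1.0         # per-subscriber Internet cap, both ways
concurrency = 0.05     # assumed fraction of subs at full cap at once

peak_demand = subscribers * cap_mbps * concurrency   # Mb/s toward the cloud
t1s_needed = math.ceil(peak_demand / T1_MBPS)
print(f"Peak Internet demand: {peak_demand:.1f} Mb/s -> {t1s_needed} T1s")
# Without the cap, even a few subscribers bursting toward the full
# 1 Gb/s of access bandwidth would swamp any sane number of T1s.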

And thanks for sharing your earlier experiences with wireless net access. A number of thoughts surfaced when I read your post, mostly having to do with the economic tradeoffs an ISP must face when deciding how much of their users' demand for upstream bandwidth they can economically live with and still turn a profit.

Say, for example, there were two thousand subs in the ISP's serving area. If they were all doing bursty email or occasional file transfers, several to a half dozen T1s would suffice, maybe even a fractional T3 or a whole T3. If, on the other hand, even as few as a handful of them (e.g., forty (40), or a mere 2% of the entire field) were pulling down music files or near-VoD, or performing ftp's or other large file transfers all at the same time, then the ISP would require a minimum of two T3s, each rated at 44.7 Mb/s <$$$>, just to support the small handful of users [40, in this case] whose demands were greatest. See msg# 4310 in this forum if you've not read this tale of grief before.
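
If it helps, here's the same arithmetic as a small Python sketch. The per-user rates are my assumptions (the heavy users at roughly a near-VoD 2 Mb/s apiece, the bursty email crowd at a few kb/s on average); with those figures you land on a half dozen T1s for the quiet case, two T3s for the forty heavy hitters, and the Pareto-style split I describe below falls right out.

import math

T1_MBPS = 1.544    # DS1
T3_MBPS = 44.7     # DS3, as cited above

subscribers = 2000
heavy_users = 40               # 2% of the field
heavy_rate_mbps = 2.0          # assumed near-VoD / bulk-transfer rate per user
light_rate_mbps = 0.0045       # assumed average for bursty email/web users

heavy_demand = heavy_users * heavy_rate_mbps
light_demand = (subscribers - heavy_users) * light_rate_mbps

print(f"Heavy users: {heavy_demand:.0f} Mb/s -> "
      f"{math.ceil(heavy_demand / T3_MBPS)} T3s")
print(f"Everyone else: {light_demand:.1f} Mb/s -> "
      f"{math.ceil(light_demand / T1_MBPS)} T1s")

share = heavy_demand / (heavy_demand + light_demand)
print(f"The 2% account for {share:.0%} of the upstream supply")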

An extreme example of the Pareto effect begins to take shape here, as you can see, where the demands of 2% of all users might account for greater than 90 percent of the supply of upstream bandwidth to the cloud.

It's very often that "back end" of the ISP's provisioning, to the cloud, that is the limiting factor, and it's what gets them in the pocketbook in the end if they attempt to assure max throughput to all users on an even keel.

Of course, with TCP being the fair-play arbiter that it is, this doesn't actually happen in most cases where the ISP stays with a limited upstream supply. Instead, everyone's realized throughput - including that of the other 1,960 users - comes down accordingly. This is one reason why I feel that the incumbent access providers who now support cable modem and dsl don't really have any incentive (indeed, they have a disincentive) to permit higher throughputs at this time, because it would bankrupt them to do so unless they started to charge by the megabit and by class of service in order to assure some semblance of quality, or QoS.
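
As a minimal illustration of that fair-share squeeze (idealized: it assumes TCP divides a fixed upstream pipe evenly across active users, ignoring RTT differences and multiple flows per user, and the pipe size is my assumption):

upstream_mbps = 2 * 1.544          # assume the ISP's back end is two T1s
for active_users in (5, 50, 500):
    per_user_kbps = upstream_mbps * 1000 / active_users
    print(f"{active_users:3d} simultaneously active users -> "
          f"~{per_user_kbps:.0f} kb/s apiece")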

This latter usage- and CoS-based approach is already beginning to re-surface (in earlier times it was for a while thought to be the way to go, until the all-you-can-eat model became fashionable) for various grades of services, as the recent AT&T takeover of their own cable modem population has begun to suggest.

Other factors center on "who" the upstream provider supporting the primary ISP is, and whether those upstream providers themselves have adequate bandwidth to support what is taking place in the last mile. The technical makeup of the primary ISP's infrastructure plays into the formula as well - whether they have ample processing power (routers, dns and email servers, etc.), which dictates efficiencies within their own routing domains. Also, does the primary employ any form of caching, or does all of the web-based traffic generated by their end users come directly from target sites on the cloud?
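
On the caching question, a quick hedged sketch of why it matters to the back end. The aggregate web demand and hit ratio below are assumptions for illustration only, not measurements from any particular ISP.

web_demand_mbps = 30.0      # assumed aggregate web traffic pulled by end users
cache_hit_ratio = 0.35      # assumed fraction served from a local cache

to_the_cloud = web_demand_mbps * (1 - cache_hit_ratio)
print(f"No cache:   {web_demand_mbps:.1f} Mb/s fetched from the cloud")
print(f"With cache: {to_the_cloud:.1f} Mb/s fetched from the cloud "
      f"({cache_hit_ratio:.0%} answered locally)")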

Not to be overlooked are the relationships that the ISP has with other entities, and their philosophical leanings that may affect how they are regarded by other ISPs. For more on this last point see my next post, which contains several comments from Gordon Cook on this point, in response to my sending him the urls of this discussion going back a couple of days.

FAC