
Technology Stocks : George Gilder - Forbes ASAP


To: Bill Fischofer who wrote (867), 12/5/1998 7:19:00 PM
From: Frank A. Coluccio
 
>>predicting that flat-rate net access will soon end<<

When you get right down to it, it will not be the "access" itself that begins to cost more; rather, it will be the use of the "core," and the newly created, core-dictated, QoS-dependent services, that carry the higher price.

Access, however, is where we traditionally attach pricing metrics, so that is where the reference gets made. But make no mistake: once access speeds in the user community have been increased sufficiently, the new burdens on the carriers and the ISPs will fall at the edge and deeper into the cloud, on all four fronts: bandwidth, software, hardware and policy.

Higher speeds in the access networks are quickly coming about. These will cause the Internet's core, and to a great extent the edge as well, to undergo a form of implosion. That is, the sucking sounds that used to be heard in the T3 and OC-3 pipes of the core [the core doing the sucking, from the edges and the access nets] are about to reverse direction. Added to this, the internal mechanisms of the cloud will begin to work overtime, straining to keep up, until relief in the form of upgrades can be brought about.

Here, I am not referring to the top-tier ISPs, whose intention is to procure terabit routers at some point, once they prove in. Think instead of the thousands of ISPs whose meshes often come into play, who cannot afford that luxury, and who will continue to use what they now have in place for some time to come.

Which all means that the throughput blockages that once sat in the last mile are about to shift, in large part, to the backbones: the bottleneck moves from the last mile to the cloud.

This won't happen all at once, of course, since many users will continue to be deprived of higher-speed access. But a great many of those who now receive, or soon will receive, new cable modem and DSL access will make a profound difference to the core, and to the localized edge environments where these services are suddenly made available. Not to mention to the individual service providers [many of them mom-and-pops who can ill afford this] who must then go out and secure higher-speed T3 [45 Mb/s] and OC-3 [155 Mb/s] pipes to their next tier up toward the core.
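To put rough numbers on that burden, here's a back-of-the-envelope sketch [the subscriber count and activity ratio below are assumptions of mine, not anyone's actual figures; the access rate is a typical ADSL downstream of the day]:

```python
# Back-of-the-envelope uplink math for a small ISP. The T3 and OC-3
# rates are the pipes named above; subscribers and activity are assumed.
T3_MBPS, OC3_MBPS = 45.0, 155.0

DSL_MBPS = 1.5                  # typical ADSL downstream
subscribers = 1000              # assumed subscriber base
active_fraction = 0.10          # assumed share pulling data at peak

demand_mbps = subscribers * active_fraction * DSL_MBPS
print(f"Peak demand: {demand_mbps:.0f} Mb/s")
print(f"T3 uplink loaded at {demand_mbps / T3_MBPS:.1f}x capacity")
print(f"OC-3 uplink loaded at {demand_mbps / OC3_MBPS:.2f}x capacity")
# ~150 Mb/s: a mere thousand DSL subscribers at 10% activity
# saturates a T3 three times over and nearly fills an OC-3.
```

Even generous statistical multiplexing doesn't rescue the old dial-up arithmetic once each subscriber's tap is opened roughly fifty times wider than a 28.8 kb/s modem.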

Not because there won't be enough intrinsic "bandwidth" in the core; forsooth, there's already plenty of that, and much more on the way. Rather, because the other network resources in the core will lag in catching up to the demands made by the new access speeds, and by the newly enabled multimedia applications they will soon support.

For the past six to eight years, or however long it's been since the 'net was popularized, the core has waited on the edge and the user distribution networks, so the need to speed things up in the core was less pronounced than where the obvious bottlenecks historically resided, i.e., the last mile.

The core resources I'm referring to are, in the main, the routers and their port speeds, especially at the Tier Three and below ISPs, which will not be up to par to support the added burdens of higher traffic levels and decision times. And the directory architectures, which are predicated on lookup methods that will soon present their own forms of bottleneck, where they haven't already.

Again, it's not the intrinsic bandwidth that's lacking; it's the ports, and the administrative methods used to partition and groom that bandwidth to individual users, along with their software-related dependencies.
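To make the lookup bottleneck concrete, here's a minimal sketch [the three-entry table is hypothetical; real backbone tables of the day ran to tens of thousands of prefixes] of the longest-prefix match a router must perform on every single packet:

```python
import ipaddress

# Toy forwarding table: (prefix, next hop). Entries are hypothetical.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"),      "upstream-A"),  # default route
    (ipaddress.ip_network("192.168.0.0/16"), "customer-1"),
    (ipaddress.ip_network("192.168.4.0/24"), "customer-2"),
]

def longest_prefix_match(dst: str) -> str:
    """Naive scan: check every prefix, keep the most specific match.
    The cost grows with the table size -- and it is paid per packet."""
    addr = ipaddress.ip_address(dst)
    best_net, best_hop = None, None
    for net, hop in ROUTES:
        if addr in net and (best_net is None or net.prefixlen > best_net.prefixlen):
            best_net, best_hop = net, hop
    return best_hop

print(longest_prefix_match("192.168.4.7"))  # customer-2: the /24 wins
print(longest_prefix_match("10.1.2.3"))     # upstream-A: default route
```

Real routers use trie-based and hardware-assisted lookups rather than a linear scan, of course, but the per-packet nature of the work is the same, which is why port speeds alone don't tell the whole story.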

The thing that must be decided at some point is: when is it time to begin cultivating a new paradigm of media technology, as opposed to perpetuating and prolonging the older models [which George G. used to characterize as the switched DS-Zero Cage syndrome]?
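For anyone who hasn't lived inside that cage, the arithmetic below restates the standard T-carrier/SONET hierarchy it refers to [the channel counts are raw-rate divisions; exact payloads after framing run slightly lower, e.g. a T3 actually carries 672 DS-0s]:

```python
# The TDM multiplexing hierarchy behind the "DS-Zero Cage":
DS0 = 64_000           # bits/s: one voice-grade channel
T1  = 1_544_000        # 24 DS-0s plus framing
T3  = 44_736_000       # 28 T1s plus mux overhead [the "45 Mb/s" pipe]
OC3 = 155_520_000      # SONET OC-3 [the "155 Mb/s" pipe]

for name, rate in [("T1", T1), ("T3", T3), ("OC-3", OC3)]:
    print(f"{name}: ~{rate // DS0} DS-0 slots")
# T1: ~24, T3: ~699, OC-3: ~2430 -- no matter how fast the pipe,
# the switched model still thinks in fixed 64 kb/s increments.
```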

It's beginning to appear (to me, at least) that the DS-0, as fewer and fewer folks even use it anymore, will soon be supplanted in this dubious role by the older routing framework of the original Internet.

This would have been heresy, I know, 18 months ago, but heck... this is, after all, WebTime we're in.

The latter model, the router framework I speak of, simply gets bigger and more entrenched every day, without regard to the newer bandwidth-multiplying measures taking place in the access networks, and the correspondingly higher user speeds they enable.

Ultimately, the older model becomes more difficult to modify and improve, or dare I even suggest it, to change at all, as time goes by. The Internet is entrenching itself for the long term, preparing itself for the role of a newly nominated anachronism of the future.

Lest we forget, IP as we know it today, best-effort and all [or primarily because of it], has its problems too. Otherwise the IETF would not be working as hard as it is to give it more deterministic attributes, as newer services are assessed and force-fitted into it where none were previously thought necessary.
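To see what "more deterministic attributes" amounts to in practice, consider the Differentiated Services marking discussed below: a code point stamped into the IP header's old TOS byte, which routers may [or may not] use to queue traffic by class. A minimal sketch, assuming a platform that exposes the IP_TOS socket option; the EF code point and the addresses are illustrative:

```python
import socket

# DiffServ marking sketch: stamp a DSCP into outgoing packets.
# EF (Expedited Forwarding) is DSCP 46; the DSCP occupies the upper
# six bits of the TOS byte, hence the shift by two.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Every datagram this socket emits now carries the EF mark. Whether
# any router along the path honors it is pure policy -- which is
# exactly the "deterministic attributes" problem.
sock.sendto(b"hello", ("192.0.2.1", 9))  # 192.0.2.0/24: documentation range
```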

And perhaps the revolt I'm speaking of is already taking place at the IETF level, from within. But even the Integrated Services [IntServ] and Differentiated Services [DiffServ] initiatives are still nowhere near what an all-optical breakaway would mean in terms of a truly revolutionary move. At least not yet. Stay tuned...