Technology Stocks : LAST MILE TECHNOLOGIES - Let's Discuss Them Here


To: DenverTechie who wrote (2479) | 12/7/1998 7:33:00 PM
From: RocketMan
 
I'm not sure it was a typo as much as a Freudian SLIP :-)
I suppose my question was rhetorical, and I am in agreement with your response. I tend to think that last mile technologies, although they may provide an order of magnitude improvement to the typical user under ideal circumstances, will be held hostage to backbone improvements for the next couple of years.



To: DenverTechie who wrote (2479) | 12/7/1998 9:04:00 PM
From: Frank A. Coluccio
 
Denver, Rocket,

Some great discussion here. IMO, answers to these kinds of questions are only as good as one's ability to capture all of the particulars of any given link or set of links, and characterize those links along with all of their dependencies (inhibitors and enablers) up and down tiers, as would be defined at any point in time for any random TCP/IP or similar session. At least, that would be the case for an individual instance.

There are no generalizations that will remain true here, IMHO, but for the sake of discussion, higher-speed access lines will alternately (1) benefit the core in some circumstances, and (2) threaten its solvency over time by presenting it with far more traffic than it has ever experienced, or been prepared to handle, before.

If the core adapts to the new demands for downloads, well, there will be less of a problem than if it doesn't.

In short, these questions do not lend themselves to a one-word or even a one-paragraph answer, again, in my opinion.

RocketMan, you questioned:

>>"But I don't understand how that gateway relieves the congestion at the national access points or other bottlenecks."<<

Often they don't, since that is primarily the job of the core network designer, of which, ironically, there is none. Or there are far too many... whatever. These designers try to follow the rules set by various consortia and the IETF, but there is no overall orchestration on the 'net, since that would be antithetical to its precepts.

At least that has been the historical party line of ISPs, stemming from the original edicts and cornerstone understandings of the Internet Society and its founders. This could be one of the Internet's biggest problems currently, IMO, while at the same time, and in some perverse and inexplicable way, one of its greatest attributes, thus far.

Chaos was once a good and enabling thing on the Internet.

--
Sometimes the higher-speed gateways and access lines actually do help to alleviate congestion in the core, when collateral conditions allow. In those scenarios, queues in server farms, switches, and routers will flush more quickly when given a high-speed receiver downstream to act as an outlet for release.

In contrast, if pent-up downloads in servers and routers must wait on the parameters afforded by low-speed access lines, congestion will mount and reside internal to the cloud for far longer periods of time, leading to timeouts and the escalating syndrome of effects that this involves: namely, retransmissions of the same downloads multiple times, which only exacerbates the situation further.
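To put rough numbers on that queue-drain argument, here is a minimal Python sketch (the arrival rate, line speeds, and buffer size are hypothetical, chosen purely for illustration). It tracks a router queue that is fed from the core and drained by the subscriber's access line:

  # Minimal sketch: queue buildup behind a slow vs. fast access line.
  # All rates and buffer sizes are hypothetical, for illustration only.

  def simulate(arrival_kbps, access_kbps, buffer_kbits, seconds):
      """Track a queue fed at arrival_kbps and drained at access_kbps."""
      queue = 0.0          # kbits currently buffered in the cloud
      dropped = 0.0        # kbits discarded once the buffer overflows
      for _ in range(seconds):
          queue += arrival_kbps              # traffic arriving from the core
          queue -= min(queue, access_kbps)   # the access line is the outlet
          if queue > buffer_kbits:           # overflow: drops, then timeouts
              dropped += queue - buffer_kbits
              queue = buffer_kbits
      return queue, dropped

  # 400 kbps of downloads headed toward one subscriber, 10-second burst:
  print(simulate(arrival_kbps=400, access_kbps=56,   buffer_kbits=500, seconds=10))
  print(simulate(arrival_kbps=400, access_kbps=1500, buffer_kbits=500, seconds=10))

With the 56 kbps drain the queue saturates and starts dropping traffic, inviting the retransmissions described above; with the 1.5 Mbps drain the queue flushes every interval and nothing resides in the cloud.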

Accordingly, such accumulation will work to the detriment of the overall network when download speeds, as a function of access-line speed, are too slow and restrictive; more so than if end users were capable of receiving the downloads all at once, making more room in the core, as it were.

But even where higher-speed access lines are good in this respect for the moment, they will present a growing chore for the backbone once they begin to multiply, and raise the potential for the core to undergo a form of implosion.
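A back-of-the-envelope calculation shows why the multiplication matters (all figures below are invented for illustration, not anyone's actual deployment):

  # Hypothetical arithmetic: multiplying high-speed access lines vs. a fixed core trunk.

  subscribers = 50_000      # DSL/cable lines turned up in a region (assumed)
  access_mbps = 1.5         # rate each line can burst to (assumed)
  activity    = 0.05        # fraction of lines pulling data at once (a guess)

  offered_load_mbps = subscribers * access_mbps * activity
  core_trunk_mbps   = 2 * 155   # say, a pair of OC-3 backbone links

  print(offered_load_mbps, core_trunk_mbps)   # 3750.0 vs 310

Even at a modest 5% simultaneous-use assumption, the offered load is roughly twelve times the trunk capacity; the "implosion" is simply arithmetic unless the core is built out in step.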

All of this points to the fact that as traffic levels increase due to any of the improvements we've been discussing in this thread (at any stage of a link's existence), there must be a balanced set of conditions in place, which requires a constant effort to "tune" the 'net and all of its constituent parts. IOW, there needs to be someone in charge of the design work... even if it is only done dynamically, through a distributed set of modeling and provisioning tools. In a meteorological tracking sense, of course. That would help immensely. Don't you think? We're not that far from such an eventuality, IMO... since the weather stations are now in place to begin tracking the storms. How to re-act or pro-act to them? Well, that's another story.
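As a sketch of what those "weather stations" might amount to in practice (the link names, capacities, and thresholds below are invented for illustration), even something as simple as polling per-link byte counters and flagging hot spots would give a provisioning tool its storm warnings:

  # Sketch of the "weather station" idea: poll per-link byte counters,
  # compute utilization, and flag links that need tuning or more capacity.
  # Link names, capacities, and the 80% threshold are hypothetical.

  LINKS = {                       # link name -> capacity in bits per second
      "chi-nyc-oc3": 155_000_000,
      "sfo-chi-oc12": 622_000_000,
  }

  def utilization(bytes_then, bytes_now, interval_s, capacity_bps):
      """Fraction of capacity used over the polling interval."""
      return (bytes_now - bytes_then) * 8 / (interval_s * capacity_bps)

  def storm_warnings(samples, interval_s=300, threshold=0.8):
      """samples: {link: (earlier_byte_count, later_byte_count)}"""
      hot = []
      for link, (then, now) in samples.items():
          u = utilization(then, now, interval_s, LINKS[link])
          if u >= threshold:
              hot.append((link, round(u, 2)))
      return hot   # the "storms" a provisioning tool would react to

  print(storm_warnings({"chi-nyc-oc3": (0, 4_900_000_000),
                        "sfo-chi-oc12": (0, 9_000_000_000)}))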

---
No single line or mux will have the effect on the core that's been discussed here; rather, it would be a case of "the sum effect" from all lines and access muxes, in accordance with the following explanation from Denver:

>>"If the backbone networks are not set up to support that bandwidth from multiple users and the DSLAMs are not engineered correctly, then all the bandwidth at the user interface won't do a thing for you."<<

... which represents the other side of the coin that I began to discuss in my opening paragraphs above.
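Denver's DSLAM point can be illustrated with hypothetical numbers (the port count, line rate, and uplink size below are assumptions, not anyone's actual engineering):

  # Hypothetical DSLAM oversubscription arithmetic.

  ports_per_dslam = 500     # subscriber lines terminated on one DSLAM (assumed)
  line_rate_mbps  = 1.5     # downstream rate sold to each subscriber (assumed)
  uplink_mbps     = 45.0    # a single DS-3 from the DSLAM toward the backbone (assumed)

  oversubscription = (ports_per_dslam * line_rate_mbps) / uplink_mbps
  print(round(oversubscription), ": 1")   # roughly 17 : 1
  # If too many of those 500 users download at once, the uplink (and the
  # backbone behind it), not the copper loop, sets the speed they actually see.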

But I must reiterate my point about traffic awareness and the cognizance of orchestration indicators. Without overall purview and a set of controls by "someone," or some intelligent (even automated) entity, and the benefit of the right hand knowing what the left hand is doing, the only solution that will continue to work to any degree is for everyone to agree to overbuild their capacity in multiple ways, just to ensure that they will be prepared to handle the unknown.

Not exactly a prescription borne of engineering purity, but the only thing that'll work under the present set of rules... unless some providers decide to go it alone, which is apparently what some larger players are now considering as a solution. This, once again, leaves open the question of the smaller operators: what will they do without the benefit of the larger players making their backbones available to them? Anyone care to relate recent developments in this regard and who the affected parties are? I seem to recall a couple of articles lately concerning this, citing some of the Tier 1 ISPs threatening to close out some of the smaller ones.

---
BTW, what do you folks think about all of this talk about throughput on the 'net doubling every 100 days? Say what? Do you go for that line? What do you think?
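For what it's worth, here is what that claim implies as plain arithmetic:

  # What "traffic doubling every 100 days" implies, as simple arithmetic.
  growth_per_year = 2 ** (365 / 100)
  print(round(growth_per_year, 1))    # ~12.6x every year
  # Sustained for three years, that would be roughly 2000x, which is why
  # the claim invites the skepticism voiced above.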



To: DenverTechie who wrote (2479) | 12/8/1998 11:38:00 AM
From: lml
 
Denver-T:

I believe the latest push is good, notwithstanding other bottlenecks that exist w/i the network. Any effort to extend broader bandwidth to the home is going to benefit the Internet overall. With an increased focus on E-commerce, larger revenue projections will inspire further improvements to the overall network, particularly at the other bottlenecks you speak of.

I do think improvement to the network will follow a lock-step approach: first "this," then "that." Rome was not built in a day.