Technology Stocks : Frank Coluccio Technology Forum - ASAP


To: ftth who wrote (767)12/19/1999 2:48:00 AM
From: Frank A. Coluccio
 
True, I did state:

"I say this because I've yet to see anyone do this both successfully and consistently, in over thirty some odd years..."

What I meant by this was simply that organizations consistently under-provision themselves when it comes to meeting evolving traffic demands. If they don't do this in the wide-area bandwidth sense, then they usually do it in other resource-starved areas. When those "other areas" are starved, it creates the illusion that there is sufficient bandwidth on the larger network to satisfy current needs, yet users often wind up suffering from poor response times due to other bottlenecks in servers, LAN segments, riser backbone capacity, or elsewhere. Very often, and I can't stress this too much, the major point of contention and the greatest cause of latency is the enterprise routers themselves, and the means by which those routers are (mis-)administered, whether between geographically separated routers or between in-building, hierarchically tiered ones.
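To make the point above concrete, here is a small illustrative sketch (my own, not from the post): the throughput a user actually sees is capped by the slowest element along the path, so ample wide-area bandwidth can mask starvation elsewhere. All the capacities and element names below are hypothetical figures in Mbps.

```python
def effective_throughput(segments):
    """Return the (name, mbps) of the bottleneck along a path."""
    return min(segments, key=lambda s: s[1])

# A hypothetical enterprise path; the WAN link is generous, but the
# mis-administered enterprise router is the real choke point.
path = [
    ("server NIC", 100.0),
    ("LAN segment", 10.0),       # shared 10 Mbps segment
    ("riser backbone", 155.0),   # OC-3
    ("enterprise router", 8.0),  # the actual bottleneck
    ("WAN access", 45.0),        # DS-3
]

bottleneck, rate = effective_throughput(path)
print(f"Users are limited to {rate} Mbps by the {bottleneck}")
```

Upgrading the WAN link in this example would change nothing for the end user, which is exactly the illusion described above.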

But in general, what is missing is the overall tuning process across disparate workgroups, workgroups that are usually found working across in-house political boundaries. This lack of inter-departmental communication is very often responsible for inadequate resource provisioning, whether of bandwidth or something else.

Now, if you could fathom this happening amongst individuals working for the same enterprise, then you should have no problem extrapolating what this means when the issue is the harmonization of needs assessments between competing ISPs, IXCs, Cablecos and incumbent exchange carriers.

"suppose we had physically separate global internets for each currently identified traffic class, could it be solved then? I think the answer is "not entirely" because that's just a different partitioning of the same QoS solution. Or is it? It's spatially separated as opposed to temporal separation, but do the net flow statistics improve?"

Your hypothetical network division is interesting. In a way, this is precisely how VoIP is evolving, or at least being implemented for the moment. Of course, Chambers, Sidgemore and Cerf say that in the future this won't be necessary because there will be plenty of room for measly ol' voice to navigate unscathed over the public 'net like cockroaches, but the ITSPs are doing precisely this now.

The Internet Telephony Service Providers, by and large, are able to provide high-quality voice over IP precisely because of this approach: they send their voice traffic over private IP backbones instead of over the open Internet. Other examples can be found in the semi-VPNs that now serve as the backbones for B2B ASPs and colo-bandwidth outfits. These, likewise, use PVCs over ATM or other virtual-path technologies to channel critical users around the globe, instead of using the public 'net and subjecting them to the vagaries of the wild.

WebTV, ATHM, and others do likewise. They are using, for lack of a better term, VPNs. And in so doing, they avoid the incompatibility issues which plague pluralistic venues, so that they are then able to provide a homogeneous set of solutions which permit QoS and other service enhancing characteristics without the headaches associated with multi-vendor/multi-provider conflicts.

But how does a separate network differentiate itself from the main network? It can be done, as you suggest, spatially, or temporally, or virtually, too. And, yes, the net flow statistics do improve. Is price/performance improved proportionately to the net flow stats to make it worthwhile? That would depend on the particulars, and on some value calls which could even turn out to be subjective ones.
----

"You eliminate the interference "between classes" but still have the interference "amongst a given class." The statistics within a given class would now be easier to bound, since they don't have any interference from the non-statistically-related 'other' classes anymore."

Yes, and we can borrow again from the VoIP example above to confirm this point. Note that if you are still having contention issues within the given class, then some tradeoffs must be assessed. You could either tier the service, making higher quality available only through premium charging and charging less for those who forgo it, or you could add bandwidth to give everyone equal quality, in the process raising costs to all by the same margin.
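The "tiered service" option above amounts to strict-priority scheduling within a single traffic class. Here is a minimal sketch of that idea, with hypothetical tier labels and packet names of my own choosing; premium subscribers' packets drain ahead of best-effort ones, at the cost of best-effort latency under load.

```python
import heapq

# Tier 0 (premium) sorts ahead of tier 1 (best effort); the sequence
# number preserves arrival order within a tier.
PREMIUM, BEST_EFFORT = 0, 1

def serve(queue):
    """Drain the queue in (tier, arrival) order and return packet names."""
    order = []
    while queue:
        _tier, _seq, pkt = heapq.heappop(queue)
        order.append(pkt)
    return order

q = []
arrivals = [
    (BEST_EFFORT, "be-1"),
    (PREMIUM, "prem-1"),
    (BEST_EFFORT, "be-2"),
    (PREMIUM, "prem-2"),
]
for seq, (tier, pkt) in enumerate(arrivals):
    heapq.heappush(q, (tier, seq, pkt))

order = serve(q)
print(order)  # premium packets come out first
```

The alternative in the text, adding bandwidth for everyone, would simply make the queue drain fast enough that the tier ordering rarely matters.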

ITXC has an interesting way of solving this problem, one that is not unlike the solution we came up with three years ago for another VoIP network. They use three different types of facilities. One is the Public Internet. The next is a VPN, or private IP network. And the third is the PSTN for backup and overflow. Through constant monitoring and surveillance of these circuit groups they are able to route calls over the least expensive routes first, with increasing dependency on the more expensive routes as congestion and calling volumes dictate. There is a good graphic at their web site showing how this works. In our case we specified an additional layer of PRI routes (which are ISDN T1s On Demand) for an additional fallback and overflow, if and when needed.
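The overflow logic described above can be sketched in a few lines. This is my own reconstruction of the general idea, not ITXC's actual system; the route names follow the post, but the capacities and ordering-by-cost are hypothetical. Each call is placed on the cheapest facility with spare capacity, spilling to the next tier as circuits fill.

```python
# Facility tiers, listed cheapest first, per the scheme described above.
routes = [
    {"name": "public Internet", "capacity": 2},
    {"name": "private IP VPN",  "capacity": 1},
    {"name": "PSTN",            "capacity": 3},
    {"name": "PRI on demand",   "capacity": 2},  # the extra fallback layer
]

def route_call(routes):
    """Place one call on the cheapest route with spare capacity."""
    for r in routes:
        if r["capacity"] > 0:
            r["capacity"] -= 1
            return r["name"]
    return None  # all trunks busy

placements = [route_call(routes) for _ in range(7)]
print(placements)
```

A real system would also release capacity as calls end and would re-evaluate route quality continuously, which is the "constant monitoring and surveillance" mentioned above.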
----

The simultaneity issue won't be solved. Darwinian phenomena don't work that way, and neither do networks, hypothetical or otherwise. As you note, the system couldn't take the shock.

As for which way best to begin administering lambdas? We should only have those problems today. I've thought about this before, and short of an ITU or IETF working group on the topic, I don't know where to start. Theological leanings will play into this big time, to be sure. At the present time, TDM and stat-muxing costs are coming down to dirt levels. Pushing silicon deeper into the network, even into the residence for this purpose, is not a far-fetched proposition. But given the nature of optical, do we really want to start doing TDM or stat muxing on the line itself? Or should we be looking for the transparency attribute of optical in its purest form? Muxing and multiple o-e/e-o conversions make optical opaque.

And as things stand today, if we depended on a fully transparent form of optical, then we would need to begin administering localized lambdas. Take a cable operator's serving area. How many different lambdas are now possible over a given strand of single mode? 64? 128? Did I read somewhere that NT now has the count up to 160? Lambda re-use across multiple groups of users, like frequency re-use in wireless, would be required. There are probably an infinite number of ways to achieve this. I've not come across any yet that have been proposed for widespread acceptance, much less for simultaneity.
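One trivially simple form of the re-use idea above can be sketched as follows. This is a hypothetical scheme of my own: each serving group sits on its own fiber strand, so the same pool of wavelengths (160 here, the NT figure mentioned above) can be assigned afresh per group, much as cellular systems re-use frequencies across cells. Group names and user counts are invented for illustration.

```python
POOL_SIZE = 160  # assumed lambdas available per strand of single mode

def assign_lambdas(groups):
    """Map each (group_name, user_count) to a list of wavelength indices,
    restarting from lambda 0 on every strand (i.e., re-using the pool)."""
    plan = {}
    for group, users in groups:
        if users > POOL_SIZE:
            raise ValueError(f"{group}: {users} users exceed one strand")
        plan[group] = list(range(users))
    return plan

plan = assign_lambdas([("node-A", 120), ("node-B", 160)])
print(len(plan["node-A"]), len(plan["node-B"]))
```

Note that lambda 0 appears in both groups' assignments, which is the whole point: spatial separation onto different strands is what makes the re-use safe, just as cell geometry does for frequencies.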

But let us assume for a moment that in the end each residence had its own lambda or set of lambdas. What happens then to the demand placed on the upstream carriers and ISPs?

Then, to your question:

"I'm not implying this eliminates the need for QoS...just that it makes it an 'easier to bound' problem because you're sorting and policing groups of like-things rather that blobs that can contain anything. So this is just a different "hybrid" possibility: the greater BW of fiber and DWDM, and more easily bounded QoS."

A major sticking point before we get this far is the limitation on the number of linewidths, or lambdas, that can be extracted economically from a strand of singlemode. Assuming that some optical chip is perfected that can raise the current number of affordably-produced lambdas from a hundred or so to several thousand, and then some wavelength-specific filter is placed at each user location, then it might become possible to assign each user their own lambda with continuity of same all the way back to the head end or central office. Once again, your simultaneity issue arises, and the need to go through conversions. Unless, of course, your particular lambda hits an optical device first such as an optical cross-connect or optical router, and is forwarded in its native form from there.
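A quick back-of-the-envelope calculation shows why the lambda count is the sticking point. The serving-area size here is a hypothetical figure of my own; the per-strand counts are the ones discussed above, plus the "several thousand" case.

```python
import math

HOMES = 2000  # hypothetical homes in one serving area

# Strands required if every home gets its own dedicated lambda
# back to the head end, at various lambda counts per strand.
results = {}
for lambdas_per_strand in (64, 128, 160, 4000):
    strands = math.ceil(HOMES / lambdas_per_strand)
    results[lambdas_per_strand] = strands
    print(f"{lambdas_per_strand:5d} lambdas/strand -> {strands} strands")
```

At a hundred-odd lambdas per strand, one-lambda-per-home means tens of strands per serving area; only at the several-thousand level does a single strand suffice, which is why everything hinges on that optical chip being perfected.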

Frank