To: Frank A. Coluccio who wrote (751), 12/17/1999 1:48:00 PM
From: Jay Lowe
 
>> the general citing of "quality of service" attributes,
>> purely on a subjective assessment basis, often becomes
>> confused with the specific QoS standards in ATMspeak
>> (particularly as relates to constant bit rate services, or CBR)

I take this to mean, among other things, that clients and servers can emit QoS bids all day long, but if the intervening nodes on the path ignore the RSVP frames, then the QoS will not be forthcoming.

This is in fact currently the case on the heterogeneous internet.

Currently, the QoS specifications provide a framework within which subnet administrators may implement real (subjective) QoS. But QoS as an interoperating property across any/all nodes in an arbitrary web is not a reality.
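To make that concrete, here's a toy sketch (illustrative only, not real RSVP code; the node names, capacities, and requested rate are all invented) of why an end-to-end reservation only holds if every hop on the path both honors the signaling and admits the request:

```python
# Toy model: a reservation succeeds only if *every* hop on the path
# both processes RSVP-style requests and has capacity to admit them.
from dataclasses import dataclass

@dataclass
class Hop:
    name: str
    honors_rsvp: bool   # does this node process reservation requests at all?
    free_kbps: int      # capacity it could still admit

def reserve_path(path, requested_kbps):
    """Return True only if every hop supports and admits the reservation."""
    for hop in path:
        if not hop.honors_rsvp:
            print(f"{hop.name}: ignores RSVP, flow falls back to best-effort")
            return False
        if hop.free_kbps < requested_kbps:
            print(f"{hop.name}: admission control refuses {requested_kbps} kbps")
            return False
    return True

# Hypothetical A-to-B path with one non-participating transit network.
path = [
    Hop("client-DSLAM", honors_rsvp=True,  free_kbps=1500),
    Hop("ISP-edge",     honors_rsvp=True,  free_kbps=4000),
    Hop("transit-AS",   honors_rsvp=False, free_kbps=999999),  # the weak link
    Hop("server-LAN",   honors_rsvp=True,  free_kbps=10000),
]

print("End-to-end QoS granted?", reserve_path(path, requested_kbps=768))
```

One transit hop that ignores the signaling is enough to knock the whole flow back to best-effort, which is exactly the state of the heterogeneous internet today.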

>> Beyond this confusion is the philosophical tensions which exist within IETF circles and the established ITU following as to how best to satisfy the subjective sense. One way is through the use of controls as spelled out in the RFCs and ATM standards, and the other is to simply supply more bandwidth.

Is this the quality vs. quantity argument? I.e., why spend resources attempting to manage a marginal resource? Just get more of the resource.

Toward the core, this may be valid.

Toward the edge, this is obviously invalid. Edge providers (DSL, cable, wireless) are not, and will not anytime soon be, in a position to supply virtually free capacity. If we arrange resource providers on a scale of capacity cost, we might see:

Low cost
- server providers
- backbone links
- inner edge providers (routers, switches)
- outer edge providers (DSL, cable, wireless)
High cost

For a given high-demand application (streaming video, etc.), it's very cheap to provide more server capacity but extremely expensive to provide the last mile.

This is currently true, has always been true, and is not likely to change ... because the cost to supply the user-end connection is a multiple of the number of users ... the more inward you go, the lower the number-of-connections multiple.
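A back-of-the-envelope sketch of that multiple (the subscriber count and aggregation ratios below are invented for illustration, not real economics):

```python
# How many connections each tier must provision for the same subscriber base.
# All figures are hypothetical.
subscribers = 1_000_000

# (tier, subscribers sharing one provisioned connection at that tier)
tiers = [
    ("outer edge (DSL/cable/wireless drop)", 1),
    ("inner edge (router/switch port)",      500),
    ("backbone link",                        50_000),
    ("server farm pipe",                     250_000),
]

for name, subs_per_link in tiers:
    links = subscribers // subs_per_link
    print(f"{name:38s} ~{links:>9,d} connections")
```

The outer edge has to build one connection per subscriber, while the core amortizes the same population over a handful of big pipes. That is why "just add bandwidth" is plausible toward the core and hopeless at the edge.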

My point is that getting a QoS'd path from A to B means that all the intervening players have to be willing to guarantee it. So since QoS is required at the edge to REALLY deliver on the broadband app promise, QoS will also be required seamlessly all the way to the server.
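The same point from the weakest-link angle (numbers purely illustrative): the rate you can actually promise from A to B is the minimum any player on the path will guarantee, and a best-effort hop guarantees nothing.

```python
# kbps each party on a hypothetical A-to-B path will contractually guarantee
guaranteed_kbps = {
    "server farm": 10_000,
    "backbone":     5_000,
    "inner edge":   2_000,
    "outer edge":     768,
    "transit AS":       0,   # best-effort only: no guarantee offered
}

print("Deliverable end-to-end guarantee:",
      min(guaranteed_kbps.values()), "kbps")
```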

>> they will have an extreme dulling effect on the experimentation and innovation

Oh, I dunno ... give them a slice of the pie and let them encompass it within their navel-gazing micro-optimization world view. I don't see how their "requirement" for aggressive QoS population actually affects the rest of the world. Does it?