Technology Stocks : The New Qualcomm - a S&P500 company


To: Maurice Winn who wrote (3258) 11/14/1999 10:30:00 PM
From: Wyätt Gwyön
 
Eventually, perhaps what will happen is along the lines of what telcos do--available bit rate, guaranteed bit rate, variable bit rate, capping out at 2.5 Mbps or whatever the interface will support. That might be more applicable for commercial users, or fixed users. For consumers, I'd think they will just have variable bit rate over a wide span until the application market becomes more defined. My cable modem is all over the place--sometimes 1.5 Mbps, sometimes 200 kbps. I only pay one rate, though. It works, because it's so much better than a dial-up connection that consumers don't care about the variability. Also, most Internet stuff is still oriented toward slower connections, so even 200 or 300 kbps can seem pretty zippy if you're coming from 43 kbps. Why would we expect more from an air interface? I think dials to switch your bit-rate plan on the fly are probably not going to be included in "Version 1". Just a flat rate, for "best effort" service. The selling points are:
* A helluva lot faster than your POTS line, for the same price
* You can be mobile
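
(A rough way to picture those rate classes is a token bucket: the fill rate sets the sustained bit rate for a tier, and the bucket depth sets how far a burst may exceed it before traffic gets throttled back to best effort. The Python sketch below is purely illustrative -- the tier names and numbers are invented for the example, not anyone's actual service plan.)

import time

class TokenBucket:
    """Toy token-bucket rate limiter: fill_rate_bps is the sustained bit rate,
    capacity_bits is the burst allowance."""
    def __init__(self, fill_rate_bps, capacity_bits):
        self.fill_rate = fill_rate_bps
        self.capacity = capacity_bits
        self.tokens = capacity_bits
        self.last = time.monotonic()

    def allow(self, packet_bits):
        now = time.monotonic()
        # Refill tokens at the sustained rate, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # send now
        return False      # defer (best effort) or drop

# Hypothetical tiers, loosely mirroring the ABR/GBR/VBR idea above:
tiers = {
    "best_effort":   TokenBucket(fill_rate_bps=300_000,   capacity_bits=1_500_000),
    "variable_rate": TokenBucket(fill_rate_bps=1_500_000, capacity_bits=2_500_000),
}

if tiers["best_effort"].allow(12_000):   # a 1500-byte packet
    print("packet sent within the best-effort envelope")

(A flat-rate "best effort" plan then amounts to a single low tier with no user-visible dial, which is what the post above expects "Version 1" to look like.)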



To: Maurice Winn who wrote (3258) 11/15/1999 12:36:00 AM
From: Clarksterh
 
Maurice - Some random comments in the interest of exciting conversation:

First, I think you are missing one of the main points when it comes to data and delay. Believe it or not, the proximate issue isn't what delay you, as a human, can put up with. It's what the existing apps can deal with. For instance, long delays often imply large variation, and many protocols, such as video and fax, aren't very good at dealing with such variation.
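
(To put a number on "variation": real-time receivers commonly size their playout buffer from a smoothed jitter estimate in the RTP style, J += (|D| - J)/16, so a path with a fine average delay but wild swings still forces either a big buffer or dropped frames. A toy version, just to illustrate the idea; the sample delay values are made up.)

def interarrival_jitter(delays_ms):
    """Smoothed jitter estimate: J += (|D| - J) / 16, where D is the change
    in one-way delay between consecutive packets."""
    jitter = 0.0
    for prev, cur in zip(delays_ms, delays_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0
    return jitter

# Roughly the same average delay, very different usefulness to a video decoder:
steady = [100, 101, 99, 100, 100, 101]
bursty = [20, 250, 30, 240, 25, 245]
print(interarrival_jitter(steady))   # small -> tiny playout buffer needed
print(interarrival_jitter(bursty))   # large -> big buffer or dropped frames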

Second, while it is certainly true that many modern packet systems work on a system of priorities, you shouldn't expect a small price differential for Variable Bit Rate - non-real-time (VBR-nrt) vs. real-time (VBR-rt). This is true for a variety of reasons, but among them are:

1) Typically rt users also want tightly controlled variation in delay.

2) The combination of small absolute delays and low variability often means that the transmission medium must go with some unused space so that when all of those VBR-rt users happen to burst at once, ... . Now you might think that you could just load up the resulting empty spaces with low-priority traffic, but then what do you do with them when everything is being used? Queue it up? ... but this then results in weird network control issues (ever watched traffic on a freeway - why does it always move in clots? - imagine the complexity in managing a freeway system where the high-priority drivers can co-opt the whole freeway). See the sketch just below for a toy illustration.
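
(A toy strict-priority scheduler, my own sketch rather than any real switch's algorithm: high-priority packets always transmit first, so the link has to keep headroom for their bursts, and the lower-priority traffic behind them backs up and then arrives in clumps, much like the freeway analogy.)

import heapq

def schedule(packets, link_slots):
    """Strict-priority scheduling over a fixed number of transmit slots.
    packets: list of (priority, arrival_order, name); lower priority number wins.
    Returns (sent, still_queued)."""
    queue = list(packets)
    heapq.heapify(queue)   # ordered by (priority, arrival_order)
    sent = [heapq.heappop(queue)[2] for _ in range(min(link_slots, len(queue)))]
    return sent, [name for _, _, name in sorted(queue)]

# A burst of real-time (priority 0) traffic arrives alongside bulk (priority 2) traffic:
packets = [(0, i, f"rt-{i}") for i in range(4)] + [(2, i, f"bulk-{i}") for i in range(4)]
sent, queued = schedule(packets, link_slots=5)
print("sent:  ", sent)     # all four rt packets, then a single bulk packet
print("queued:", queued)   # the remaining bulk traffic waits and later arrives in a clot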

Bottom line - it is certainly reasonable to expect a pricing differential based on priority, but don't expect it to be a small one. I'd guess a factor of maybe 4 or 5 (a WAG) from the lowest to the highest priority. At that differential, I'd prefer to wait the 1/2 second unless the app just couldn't deal with it.

Clark

PS I know just enough about this at this point to be dangerous. So take the above with a grain of salt.



To: Maurice Winn who wrote (3258) 11/15/1999 9:37:00 AM
From: engineer
 
Kind of like I could just have a little meter on my car which allowed me to pay $50 a mile and turn my speed up to 160 mph? Then all the people would have to get off the highway while I sped along using up the whole thing.

This analogy is EXACTLY the same for this service. By making a few system assumptions, you allow all users to go faster with a little delay. I claim you cannot tell the difference between a 100 ms delay in your server accessing disk and a 100 ms delay in the network delivering your packet. The service you describe would have people pushing the limit up all the way without understanding what it does to the system, and it would destroy the system.
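
(Engineer's point can be shown with a couple of lines: if the priority dial only changes the order of service and everyone turns it to the same maximum, the ordering collapses back to first-come-first-served while the offered load goes up, so nobody finishes any sooner. The user names and priority values below are hypothetical.)

def completion_order(requests):
    """Serve requests in (priority, arrival) order; lower priority number goes first."""
    return [name for _, _, name in sorted(requests)]

# Mixed priorities: the single priority-1 user jumps the queue.
mixed = [(3, t, f"user{t}") for t in range(5)] + [(1, 5, "impatient")]
print(completion_order(mixed))

# Everyone dials in priority 1: the order is exactly what FIFO would have given,
# so the knob has bought nothing and the extra load just degrades the system.
all_max = [(1, t, f"user{t}") for t in range(6)]
print(completion_order(all_max))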

I used to run a network where I could dial in priority for users based on the project importance. After a while, everyone was trying to run at priority 1 and the whole thing didn't work at all.