To: Frank A. Coluccio who wrote (3668) 5/12/1999 6:35:00 AM From: E. Davies
Frank,

Your efforts to clear up inaccuracies are appreciated. However, I think you may be trying just a tad too hard and taking things a bit too literally.

"... what are the chances that all 125 subs are on at the same time, and receiving information simultaneously? These kinds of exercises have a place in engineering for their theoretical qualities, in order to characterize the steady state of a system, for benchmarking purposes, but they are not realistic and have little place when assessing real world contention issues."

Whew. It was a rhetorical question! Hiram had just calculated the absolute worst case and said "how likely is that anyway?" (A quick back-of-the-envelope sketch of just how unlikely it is appears at the end of this post.)

"I believe this to be so, in part because they assume that users are actually sharing a line at some point, or that all users are sharing the same line at the same time, in a synchronized manner."

Of course the lines are being shared! It's a matter of time scale. What you are doing is the same as looking at a TV picture and saying that the picture is not really there because it's actually a single electron beam scanning over the screen.

"At best, doubling the fundamental bit rate might yield a 60% or 70% increase in overall realized throughput gain, and less, as the distances increase."

Fortunately this is not true in the cases where it counts the most: large numbers of users using streaming media or downloading large files. There the scaling is nearly linear with bit rate and independent of distance (see the second sketch below).

"Does your QAM rate doubling take into account the fact that the continued slower request times on the upstream path will indirectly slow down the overall times for each transaction as well?"

Even doubling the upstream path rate will do very little to help either large downloads or web page access times. Far more significant than the time for a request to cover the "first mile" is the time it takes to get all the way to the server on the other end of the country, have the request processed, and get the response back across the country again. This needs to be done for the initial handshake and for each object on the page. HTTP is stunningly inefficient in how it handles the download of web pages; a single page can easily take a dozen separate transactions (third sketch below).
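Here is a minimal sketch of the "all 125 at once" arithmetic, assuming independent users and a 10% activity factor. The 10% is my illustrative assumption, not a figure from Hiram's or Frank's posts:

import math

N = 125       # subscribers sharing the segment
p = 0.10      # assumed fraction of time any one sub is actively receiving

def prob_at_least(k, n=N, q=p):
    # P(at least k of n independent subscribers are active at once)
    return sum(math.comb(n, i) * q**i * (1 - q)**(n - i)
               for i in range(k, n + 1))

print("expected simultaneous users:", N * p)        # 12.5
print("P(25 or more active):", prob_at_least(25))   # a small fraction of a percent
print("P(all 125 active):", p ** N)                 # 1e-125

The expected load is about a dozen simultaneous users, and the "all 125" case is so far out in the tail that it is not a number anyone would engineer to.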
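And a rough transfer-time model for the bit-rate scaling point. The round-trip cost and the line rates are illustrative assumptions, and the model ignores TCP slow start, but it shows why a big download scales almost linearly with line rate while a small object barely moves:

def transfer_time(size_bits, rate_bps, rtt_s, setup_rtts=2.0):
    # fixed setup round trips plus serialization at the access-link rate
    return setup_rtts * rtt_s + size_bits / rate_bps

rtt = 0.070  # assumed coast-to-coast round trip, about 70 ms
for size_mbytes in (0.01, 10.0):     # a small web object vs. a large file
    for rate in (1.5e6, 3.0e6):      # a base rate, and that rate doubled
        secs = transfer_time(size_mbytes * 8e6, rate, rtt)
        print(f"{size_mbytes:>6} MB at {rate / 1e6:.1f} Mb/s: {secs:7.2f} s")

Doubling the rate takes the 10 MB file from about 53 s to about 27 s, independent of the round-trip distance, while the 10 KB object improves by only a few hundredths of a second because its time is almost all latency.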
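Finally, the web-page point. Under HTTP/1.0-style behavior each object needs its own TCP handshake plus a request/response exchange, i.e., roughly two cross-country round trips per object, no matter how fast the first mile is. The object count and RTT below are illustrative assumptions, per the "dozen transactions" figure above:

def page_latency(objects, rtt_s, rtts_per_object=2):
    # handshake (1 RTT) + request/response (1 RTT) for every object
    return objects * rtts_per_object * rtt_s

rtt = 0.070                      # assumed cross-country round trip
print(page_latency(12, rtt))     # about 1.7 s of pure latency

None of that 1.7 seconds shrinks when you double the modem's bit rate, upstream or down.

Eric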