Hiram, I'm in a writative mood this evening, so don't take the following run-on of thoughts as anything other than a sounding-board exercise. And hopefully, an attempt to answer your questions, if you can get past the bull. Also, this post has a secondary purpose associated with some friends' needs on another board. I'm looking to go through the gauntlet on this one, for the learning experience. Denver Techie, you hearing this?
Warning, there are no guarantees here, so read on at your own peril ;-) ----
Your question interested me, since I'm only now getting back into the cable technologies for the first time since they were something else entirely, just a couple of years ago. I've been following Denver's lead on many of the hard issues, along with many of Dave H's on the soft issues (hey guys, nothing personal there), and I'm only recently beginning to put some back into this thing again. I guess the ILECs were not the only ones shaken by all of the recent news surrounding the progress being made by the MSOs and their cable deployments. ----
Let me address your points out of sequence, if I may. You asked a question that makes some common assumptions, but IMO those assumptions are based on some principles that exist purely in the realm of theoretical, statistical depiction. For example:
"... what are the chances that all 125 subs are on at the same time, and receiving information simultaneously?
These kinds of exercises have a place in engineering for their theoretical qualities, in order to characterize the steady state of a system for benchmarking purposes, but they are not realistic and have little place when assessing real-world contention issues.
I believe this to be so, in part, because they assume that users are actually sharing a line at some point, or that all users are sharing the same line at the same time in a synchronized manner. That, in itself, would be a myth, even if it were unsynchronized, because it's a physical impossibility the way things are currently engineered.
Explanation: On today's HFC systems, head-end buffers are the medium used for governing flows and sharing, not the line itself, aside from the capacity limit the line presents. The channels on most downstream system paths, unless multiple 6 MHz channels are already being used (and even then they would be considered separate paths), can only manage the transport of one user's payload at a time. Multiple users, in turn, are managed in a serial fashion, or on windowed interrupts, but serially nonetheless. That is, payloads travel one at a time.
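For the code-inclined, here's a toy sketch in Python of what I mean by serial transport out of the head-end buffer. The line rate, frame size and round-robin service discipline are my own invented stand-ins, not figures from any real cable spec:

```python
from collections import deque

# Toy model of a head-end buffer bleeding frames onto one downstream
# channel: strictly one frame on the line at a time, users served in
# round-robin turns. Rates and sizes are invented, not DOCSIS figures.

LINE_RATE_BPS = 27_000_000  # raw 27 Mb/s downstream channel
FRAME_BYTES = 1500          # one illustrative frame per turn

def drain(buffer: deque) -> None:
    """Serve queued (user, frames_left) entries strictly in series."""
    clock = 0.0
    while buffer:
        user, frames_left = buffer.popleft()
        clock += FRAME_BYTES * 8 / LINE_RATE_BPS  # one frame's slot
        if frames_left > 1:
            buffer.append((user, frames_left - 1))  # back of the queue
        else:
            print(f"user {user} finished at t = {clock * 1000:.2f} ms")

# Two users' payloads queued behind one another in the head end:
drain(deque([("A", 4), ("B", 4)]))
```

The point of the toy is simply that the "sharing" happens in the queue, not on the wire.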
Perhaps in the upstream direction, over separate sub-channels, the line could be said to be shared, but even there the users are separated in frequency (separate channels are used). More often than not, several channels share the burden in the upstream direction, although they occupy less spectrum and operate at lower line speeds than the 6 MHz downstream channel running at 10, 27 or 38 Mb/s, or higher.
It's interesting to note here that in the downstream direction, which is what you are actually referring to, only one message can be traversing the line at any one point in time. This is in contrast to the very common notion that collisions actually take place on the HFC line itself, between the residence and the head end. Not so, much to the chagrin of xDSL pundits when they learn of it. Perhaps in earlier-vintage systems, which used strictly IEEE Ethernet architectures, this was true. But not in the majority of today's HFC systems, which use a modified version of Ethernet specifically tailored for cable modem operation. ----
Let's examine what happens with two different user groups using two drastically different application profiles.
One user group of 20 professionals is concerned with telecommuting, or SOHO, activities. They are pulling down fairly large database files. The other group consists of 100 couch muffins who are surfing the web.
The 20 users send requests for database files, each of which is rated at 200 MB when fully downloaded, and the requests are sent more or less at the same time. They will cause a 4 GB rush of data to be pulled down from remote servers, and then queued in the head end buffers initially, to be bled out onto the line to the twenty residences.
This 4 GB load would be a larger burden to the overall system than if the 100 web users requested simple mail or bulletin board downloads, each weighing in at 100 kB, or a cumulative load of 10 MB.
So, we have a situation here where 20 users represent four hundred (400) times more of a burden than the much larger group of 100 users. And in both scenarios we viewed all members of these groups hitting the Enter key, more or less, at the same time.
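If anyone wants to check my arithmetic, it's a few lines (sizes straight from the example above):

```python
# 20 pros pulling 200 MB each vs. 100 surfers pulling 100 kB each
pro_load = 20 * 200 * 10**6      # 4 GB
surfer_load = 100 * 100 * 10**3  # 10 MB
print(pro_load / surfer_load)    # -> 400.0, i.e. 400x the burden
```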
These are actually lousy examples, because real loading approximations would take into account a lot more information, like the frequency of requests, distributions of file sizes and holding times, average number of hops to remote targets, upstream line speeds to peers and NAPs, variability of segment population on line, etc. It's a very long list.
In the case of the professionals, on a system rated at 27 Mb/s, would anyone care to guess how long the last user had to wait in order to receive the last slice of their download?
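I'll spoil my own riddle with some napkin math, under deliberately rosy assumptions: no protocol overhead, no retransmissions, and the full 4 GB draining serially at the raw channel rate:

```python
# 4 GB queued, drained serially at a raw 27 Mb/s, zero overhead:
total_bits = 20 * 200 * 10**6 * 8   # 32,000,000,000 bits
seconds = total_bits / 27_000_000
print(f"{seconds:.0f} s, or about {seconds / 60:.0f} minutes")
```

Call it twenty minutes, best case, before the last professional sees the tail end of their file. Real overhead only makes it worse. ---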
Like you say, all users don't hit the Enter key at the same time. But at some point, when buffers are already significantly backed up with data, the differences in times of execution don't make all that much difference, anyway. Everything being pulled down from the server, or from local cache, goes directly to buffer first, where it is queued for dumping out onto the HFC.
If a moderate number of small payloads is all there is, no one notices a delay at all, and we get a lot of pep talk on the stock boards about how well the new toy works. As payload sizes increase over time, along with the overall number of subscribers on line, so will the waiting times, and the inevitable degree of displeasure expressed.
Conceivably, it could take only one or two users to dominate the system entirely, bringing it to its knees, as it were, for everyone else, unless traffic monitoring and throttling controls are put into effect.
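By throttling controls I mean something in the spirit of this token-bucket toy. The class, the rates and the burst allowance below are all mine, purely for illustration; no operator's actual policing scheme is implied:

```python
import time

class TokenBucket:
    """Minimal token-bucket throttle: the sort of per-subscriber
    rate cap that keeps one heavy user from monopolizing the channel."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # sustained allowance, bits/second
        self.capacity = burst_bits  # how big a burst we forgive
        self.tokens = burst_bits
        self.last = time.monotonic()

    def allow(self, frame_bits: int) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True
        return False  # over budget: delay or drop the frame

# Cap one subscriber at ~1 Mb/s sustained with a 100 kilobit burst:
bucket = TokenBucket(rate_bps=1_000_000, burst_bits=100_000)
print(bucket.allow(12_000))  # one 1500-byte frame -> True while tokens last
```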
That potential problem highlights some of the perceived dilemmas associated with ATHM's waffling over its use policies, and its philosophy on whether or not to allow SOHO activities and streaming multimedia downloads. ----
Going back to your original bit-rate doubling question, where you've increased the QAM rate and the resulting bit rate from 19 to 38 Mb/s, we'll ignore the obvious breakage where traffic overhead and management penalties are concerned. Beyond that, one cannot extrapolate the kinds of results you're looking for purely on the basis of a linear function. It doesn't work that way, due to other inescapable factors which are nonlinear in nature, and which account for a good portion of cable-system bottlenecking.
These factors exist at the head end, in the pipeline, at the target server locations, and at the user locations, and are affected by some not-so-obvious conditions at times, not the least of which may stem from the user's internal machine resources, or lack thereof. Not to mention the countless variabilities associated with the TCP/IP protocol suite itself, whose randomly dynamic qualities often defy characterization.
Propagation times in each direction (and the number of times transmissions change direction) of a transaction profile must also be taken into account, as well as back-off and retransmission times caused by errors detected during flows. ----
At best, doubling the fundamental bit rate might yield a 60% or 70% increase in overall realized throughput gain, and less, as the distances increase. Conversely, the gain may be more, as the distances are shortened. You could probably approach the theoretical results you are seeking if you made both sides, up and down, symmetrical, and placed everything on a laboratory table, I suppose. But not in the real world.
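A crude way to see why the gain is sub-linear: every transfer pays fixed costs (propagation, server turnaround, protocol chatter) that don't shrink when the line speeds up. The 5 MB transfer size and the one second of fixed overhead below are invented for illustration:

```python
# Why doubling the raw rate doesn't double realized throughput:
# the fixed costs per transfer stay put while only the wire time halves.

def effective_mbps(line_mbps: float, transfer_mb: float,
                   fixed_overhead_s: float) -> float:
    serialize_s = transfer_mb * 8 / line_mbps  # time on the wire
    return transfer_mb * 8 / (serialize_s + fixed_overhead_s)

for rate in (19, 38):  # the 19 -> 38 Mb/s doubling in question
    print(rate, "Mb/s raw ->",
          round(effective_mbps(rate, 5.0, 1.0), 1), "Mb/s realized")
# 19 -> ~12.9, 38 -> ~19.5: roughly a 50% gain, not 100%
```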
Another factor that is often overlooked is the effect of the much slower upstream path. Lest we forget, cable modems use asymmetrical speeds almost exclusively (excepting certain modes used by CMS), just like ADSL. Most web transactions require multiple iterations of acknowledgments and other duplex interactions, making overall transaction times almost impossible to predict in a live environment.
Does your QAM-rate doubling take into account the fact that the continued slower request times on the upstream path will indirectly slow down the overall time of each transaction as well? I suppose another question would be: does the bit-rate doubling you're referring to actually work in both directions, or is it only in the downstream?
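Here's one crude way to model the upstream drag, for whatever it's worth. With a finite receive window, a TCP-ish flow can't move faster than window divided by round-trip time, and a pokey upstream stretches the round trip because the acknowledgments crawl back. The window size, delays and 128 kb/s upstream below are all assumptions of mine:

```python
# Crude TCP-flavored ceiling: a flow can't exceed window / RTT,
# and a slow upstream stretches the RTT because ACKs crawl back.

WINDOW_BITS = 64 * 1024 * 8  # a classic 64 kB receive window

def capped_mbps(down_mbps: float, up_kbps: float,
                prop_delay_s: float = 0.05) -> float:
    ack_bits = 40 * 8  # a bare ACK segment
    rtt = 2 * prop_delay_s + ack_bits / (up_kbps * 1000)
    return min(down_mbps, WINDOW_BITS / rtt / 1e6)

for down in (19, 38):
    print(down, "Mb/s line ->",
          round(capped_mbps(down, 128), 1), "Mb/s ceiling per flow")
# Both print ~5.1: the doubling buys a single flow nothing here.
```

Both line rates hit the same ceiling in that toy: until the window opens up or the round trip comes down, the doubling buys a single flow nothing at all.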
These are some of the simpler, plain-to-see, real-world considerations that must be accounted for. Perhaps one of our resident cable practitioners can give us a hand here. All comments and corrections welcome, as always.
Regards, Frank Coluccio