Unless I am misreading you and the others here who have responded to my views on program delivery, you seem to be implying that this "problem" is merely a speed bump that could be smoothed over by some quick fix in system economics. I think the problem runs much deeper than that.
The problem, where it exists, stems from a particular cableco's architecture, and from the gamble that a collision-domain approach to delivering ever-increasing volumes of high-speed data will suffice over the long term. And I use the term "gamble" here after due consideration.
The real issues are an adequate assessment of what "high speed" actually means, and what volume of bytes we're talking about in the aggregate. It gets to the point where we need to ask what limits the cableco is designing around, and rest assured that every architecture has limits. What are the providers' expectations [both the facilities-based operators' and those of the ISPs who elect to use them] in the space they intend to fill?
If this Co or any other ISP that intends to deliver content over cable sets its parameters to reflect traditional Internet throughput using common, everyday "surfing" metrics [as defined by 33 or 56 k modems, or even 50 to 100 times that amount] in dense, closely shared topologies, it will be doomed to reach a premature limit, just as dial-up modems have today, and Morse telegraphy before them.
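To put rough numbers on that doom, here is a back-of-envelope sketch. The per-user multiple comes from the 50-to-100x figure above; the segment size and concurrency are purely my own illustrative assumptions, not any operator's published specs:

    # Back-of-envelope: aggregate burst demand on a shared segment under
    # "surfing era" design assumptions. Segment size and concurrency are
    # assumed figures, chosen only to illustrate the shape of the problem.
    DIALUP_BPS = 56_000               # the old 56k baseline
    SURF_MULTIPLE = 100               # "50 to 100 times that amount"
    HOMES_PER_SEGMENT = 500           # assumption: homes sharing one segment
    CONCURRENCY = 0.10                # assumption: fraction active at once

    per_user_bps = DIALUP_BPS * SURF_MULTIPLE        # 5.6 Mbps bursts
    active_users = HOMES_PER_SEGMENT * CONCURRENCY   # 50 simultaneous users
    aggregate_bps = per_user_bps * active_users      # 280 Mbps of demand

    print(f"Per-user design target: {per_user_bps / 1e6:.1f} Mbps")
    print(f"Aggregate burst demand: {aggregate_bps / 1e6:.0f} Mbps")
    # 280 Mbps of burst demand on one shared segment is the premature
    # limit in question; no single shared channel of the day carries it.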
ISPs must ensure that their facilities-based providers have a migration strategy that allows for order-of-magnitude increases in throughput in the future. Otherwise, if they set their sights too low, they will at some point be left delivering yesterday's URLs.
Invariably, the choke points are still the same old mundane issues: processing speeds in the field nodes and head ends (their modulators, routers, and switches), along with the equally important matter of basic topology design, i.e., shared bus/segment vs. dedicated media, and the protocols chosen to ride atop the raw facilities.
Many users recently dismissed this whole argument as "long in the tooth," elated as they were by their first-time experiences of blisteringly rapid downloads of large files. But those happy surfers were, for the time being, alone on their segments, and only for as long as that remains true will the experience hold.
The irony here for cable modem is that the draw which results from this benefit is its own worst enemy in most of its current deployments: as soon as uptake reaches sizable numbers, with "other" hungry users seeking the same benefits, the reverse experience becomes more likely. I see this following the path that T1s and T3s have taken in corporate LANs. At first they made a big difference (and still do to a large extent, compared to the alternatives), but their relative effect has been eroded by the greater and more pervasive use of the Internet each day, both public and VPN. Many firms are now upgrading their initial T1 pipes to fractional or full T3s, and some go far beyond a single T3.
Getting back on point with respect to cable modems: user uptake on these systems and the ensuing per-user throughput are inversely proportional, as the sketch below illustrates.
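A minimal sketch of that inverse relationship, assuming a single shared downstream channel of roughly 27 Mbps (on the order of one 6 MHz 64-QAM channel; your plant may differ) and illustrative user counts:

    # Per-user throughput on a shared cable segment falls as uptake rises.
    # The 27 Mbps figure approximates one 6 MHz 64-QAM downstream channel;
    # the user counts are purely illustrative assumptions.
    SHARED_CHANNEL_BPS = 27_000_000

    for active_users in (1, 10, 50, 200):
        share = SHARED_CHANNEL_BPS / active_users
        print(f"{active_users:4d} active users -> {share / 1e6:6.2f} Mbps each")

    # 1 user    -> 27.00 Mbps: the "blisteringly rapid" first experience.
    # 200 users ->  0.14 Mbps: little better than a pair of 56k modems.
    # Dedicated media, by contrast, hold per-user capacity constant.

The straight division is simplistic, since real MAC-layer contention makes matters worse rather than better, but the shape of the curve is the point.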
Fundamental architectural remedies are possible, but they are certainly not cheap. I think that newer systems going in over the next several years will take the foregoing into account.
IMO, for some operators of currently deployed systems (and even many of those now coming into their own), this will amount to more than a speed bump, to re-borrow your term; more like a brick wall when it comes to setting expectations about what can and cannot be delivered.
Of course, there are those operators who have actually taken the time, and endured the capital expense, of designing greater throughput into the next tiers of mostly digital multimedia services, which I reference below.
The degree of available headroom in the system is what needs to be properly assessed and designed around.
I can hardly believe that we are talking about the turnpike effect once again, at this advanced stage when 10 Mbps to the home is now a reality. But it is a principle, I have to remind myself, that as yet knows no limits. If it isn't push technologies, then it's TV, and beyond that it will be interactive live video calling. Does anyone here think that these are so far off in the future that they should not be taken into account in system design, or that they are meaningless at this time?
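To make that design question concrete, a rough headroom check. The stream rate and segment size below are my assumptions for illustration, not anyone's published roadmap:

    # How many simultaneous video streams fit in one shared downstream?
    # Stream rate and segment size are assumed for illustration only.
    SHARED_CHANNEL_BPS = 27_000_000   # one shared downstream, as above
    VIDEO_STREAM_BPS = 4_000_000      # assumption: one MPEG-2-class stream
    HOMES_PER_SEGMENT = 500           # assumption: homes on the segment

    max_streams = SHARED_CHANNEL_BPS // VIDEO_STREAM_BPS       # 6 streams
    take_rate_at_saturation = max_streams / HOMES_PER_SEGMENT  # ~1.2%

    print(f"Simultaneous streams per channel: {max_streams}")
    print(f"Take rate at saturation: {take_rate_at_saturation:.1%}")
    # If much more than 1% of homes ever want live video at once, the
    # segment saturates -- the turnpike effect with no headroom to spare.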
IMO... the non-compete issue raised for the sake of deference [whether due to charter, contract, or otherwise] is a red herring. The real reason this ISP and others state that they will not be entering the TV program space is that the platforms of their facilities-based providers, i.e., most of the cable operators, will not be able to support it under current circumstances. And the cablecos know this only too well, for they are the gatekeepers. This may cause one to wonder whether these limitations are mere coincidences.
Comments and corrections are always welcome.