Frank:
Re: HFC obsolescence vs. HFC dominance.
As a tech-head I have to admit that I feel a certain empathy for the design and engineering staff. A tremendous amount of time (as you noted) and work went into the HFC/DOCSIS model. As we contemplate a solution to the pending calamity we've come to call congestion, in the form of a Lightwire-like configuration (far fewer homes passed per node than current templates allow), we hear of cutbacks on HFC improvements by T.
This is only a temporary state of affairs, they've stated. I'm not so sure about that. If they lose their momentum and their drive to do the right thing now, then given T's present financial and structural dilemmas, upcoming quarters will be no less demanding, and they will forgo those improvements indefinitely. Especially if they get their initial subscriber numbers posted before the real crunch hits. Then they've got you.
I can personally attest to the amount of work that has gone into HFC/DOCSIS/PacketCable by over 200 vendors and 80% of the major service providers, particularly in the areas of security and QoS. It is possible that they are all wrong, or misguided, but at the moment they seem to have produced the preferred solution for broadband residential data and broadcast video. We will see about voice, VOD, and video conferencing in the next round of battle.
Also, the problems with T seem to be that Mike Armstrong was, in the end, a salesman more than an executive leader. I heard him speak about the original strategy, and was impressed; I still think it was a good strategy. His first problem was not related to HFC, but to the quickly declining revenue in long distance, which he himself predicted. The problems he has, he generated himself; rather like George Bush's 'read my lips: no new taxes', he has lost credibility by reversing course, not by the strategy itself.
I'm already contemplating the ways in which the MSOs will resort to a 10GbE approach, probably by using some of those spare singlemode fibers in the distribution cable between the Head End and the field node. Of course, that's not where the real problems are. The root of HFC's problems, as it is now constituted, stems from a near-obsolete RF modulation scheme and black coax, in that order, and a spectrum plan that must, by design, match.
I've seen no evidence of this in the CableLabs work. In fact, the thrust there is to go international with their current standards; I expect the ITU to adopt them in the future, and the standards, with the usual
It is true the RF modulation scheme was constrained by backward-compatibility requirements, but it is not clear that 10G would solve the problem. Business constraints would require that the fiber be 'shared' just as with HFC, using the collision-detection version of the 10G protocol rather than the point-to-point versions. Thus you are worse off from an access standpoint than with the point-to-point DOCSIS protocol, since CD has a much lower effective utilization. Of course, there are also interesting problems of broadcast video and compatibility to solve for 10G, a major advantage to HFC.
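To put a rough number on why collision detection fares so badly on long shared runs, here is a back-of-envelope sketch using the classic 1/(1 + 5a) efficiency approximation for CSMA/CD, where a is the ratio of propagation delay to frame transmission time. The plant distance, frame size, and line rate below are my own illustrative assumptions, not figures from your note:

```python
# Back-of-envelope: shared-medium CSMA/CD utilization vs a scheduled
# point-to-point (grant-based, DOCSIS-style) upstream.
# All plant numbers below are illustrative assumptions.

def csma_cd_efficiency(prop_delay_s, frame_time_s):
    """Classic approximation: efficiency ~ 1 / (1 + 5a),
    where a = propagation delay / frame transmission time."""
    a = prop_delay_s / frame_time_s
    return 1.0 / (1.0 + 5.0 * a)

# Assumed plant: 25 km of fiber head-end to node (~5 us/km propagation),
# 1500-byte frames on a 10 Gb/s shared channel.
prop = 25 * 5e-6                 # ~125 us one-way propagation (assumed)
frame = (1500 * 8) / 10e9        # ~1.2 us to send a 1500-byte frame

print(f"a = {prop / frame:.0f}")
print(f"CSMA/CD efficiency ~ {csma_cd_efficiency(prop, frame):.4f}")
```

With those assumed numbers the efficiency collapses to a fraction of a percent, because at 10 Gb/s a frame finishes transmitting long before it has propagated the plant, so collisions dominate. A scheduled grant system avoids collisions entirely; its utilization is bounded by protocol overhead rather than by propagation delay, which is the gap the paragraph above is pointing at.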
The real problem with any financially practical residential broadcast scheme, in the end, is upstream point-to-point traffic overload. The real challenge with HFC is to handle voice for all the homes that will want it. If they can't, they lose potential revenue they might otherwise take from the phone companies, and will need to look at alternate architectures, or at reserving bandwidth for upstream transport. The loss of some potential phone revenue doesn't seem a strong enough reason for cable providers to move to a new technology.
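A quick sanity check on the voice question. Every figure below is an assumption of mine (node size, usable fraction of the channel, per-call bit rate, busy-hour off-hook fraction), chosen only to show the shape of the calculation, not to settle the argument:

```python
# Rough upstream voice-capacity check for a single HFC node.
# Every figure here is an assumption for illustration only.

UPSTREAM_RAW_BPS = 10.24e6   # one 3.2 MHz DOCSIS upstream at 16-QAM, raw rate
USABLE_FRACTION = 0.75       # after FEC, MAC, and request overhead (assumed)
PER_CALL_BPS = 96_000        # G.711 voice + IP/UDP/RTP + framing (assumed)
HOMES_PER_NODE = 500         # a common node size (assumed)
PEAK_OFFHOOK = 0.10          # fraction of homes on a call at busy hour (assumed)

usable_bps = UPSTREAM_RAW_BPS * USABLE_FRACTION
calls_supported = int(usable_bps // PER_CALL_BPS)
calls_needed = int(HOMES_PER_NODE * PEAK_OFFHOOK)

print(f"calls supported: {calls_supported}, needed at peak: {calls_needed}")
```

Under these assumptions one upstream channel covers busy-hour voice for a 500-home node with some headroom, but only if that bandwidth is actually reserved for voice rather than contended with data, which is exactly the reserved-bandwidth alternative mentioned above. Shrink the usable fraction or grow the node and the margin disappears quickly.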