Eric, pardon the interruption, but regarding your question:
"What in your vision is the value added of @home? "
Not to get our visions confused, but from my perspective the best answer at the current time is simply this:
It has to do with the end user's nervous impatience and insatiable appetite for speed and convenience, and nothing more. Those remain the key reasons for the current uptake and the long installation backlogs.
Keep in mind my qualifier: at the current time. Though I have reason to suspect that may be all there is, for a while.
We're dealing with Internet Time here, which means torridly rapid development and delivery cycles by others, so anything that ATHM portends for the future can be blown away by a new implementation of some widgetry, or a new alliance, almost as quickly as you've had an opportunity to reply to this. Only an ostrich can deny this and get away with it.
As far as future payoffs are concerned, bounties based on the fortunes of faith remain the value-adds you're looking for, I think.
Both ATHM and RR have just as good a chance, currently and for the next year or two, of being proved the solution of the moment, more so than others... due to the head starts they enjoy. Maybe they'll still consolidate, but to do so would be to throw a log jam into their evolutionary paths that would have to be cleared, since integrations of this sort take time and lots of resources. Not to mention the disruptions that users would experience, especially those being converted, and the ensuing defections caused by yet another annoyance while newer alternatives take shape.
I'm always aware of the kid coming up from behind with a faster, more economical idea, especially if they are promoting photonic delivery. Times to market for such ideas used to be measured in decades. Today, I think we're talking about a couple of years to beta, and then rapid modifications in software, or swapouts of micromodular chip sets, from that point on.
The problem with investing in any tech company's futures is that technological innovations converge and overtake what were thought to be future deliverables in a way that tends to render those expectations moot, even obsolete, very quickly. It's a fact of Internet life that some will learn the hard way, one which has already chalked up a long list of homicides. Innovatum homicidium.
Some problems, a major set of problems, faced by the cable modem regime have to do with extremely long architectural and design cycles, and with keeping the model fresh. Can't be done. In the current cableco HFC environment there is no keeping it fresh, because rollouts tend to spend multiple years in the standards approval phase, and the delivery phases are almost as long, if not longer, with no possibility of substantial changes to the underlying fabric. I think it's fair to say that T is acquiring, inheriting, something it would never have designed itself. They cannot even quite figure out how best to converge it with the next-gen anything right now. Witness their reneging on the VoIP initiative the other day, after all the fanfare they published about it. T sees HFC for its high real estate value in geographic areas that it would otherwise be precluded from entering at this time. They do not view HFC for its technological merit.
DOCSIS may be a solution to the upgradeability issue, but only within the restrictions of the larger current HFC model. And look how long it's taking for the DOCSIS standards to be approved, implemented, and certified at the individual vendor level. When a vendor meets the certification criteria, it's cause for celebration and a flurry of new releases. Now we're reading that it will be April of 2000 before broad-based implementations take place.
In order to make significant changes to the larger HFC model (the one that currently assigns only a pittance of bandwidth to the cable modem space), one would have to go back to square one and start the process virtually all over again. We're dealing with a basic design anomaly in cable that just happens to still yield a better solution, at this time, than other currently available options. But this kind of hegemony is only temporary, and it won't have a long life cycle without significant changes to the manner in which bandwidth is assigned.
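To put rough numbers on that pittance, here's a minimal back-of-envelope sketch in Python. The channel figures are standard DOCSIS 1.0 (a single 6 MHz downstream slot, roughly 27 Mbps usable at 64-QAM after FEC and overhead); the node size and take rate are illustrative assumptions of mine, not measurements from any particular plant.

```python
# Back-of-envelope: fair-share downstream capacity per subscriber on a
# single DOCSIS 1.0 channel. Channel figures are standard (one 6 MHz
# downstream slot, ~27 Mbps usable at 64-QAM after FEC and overhead);
# the node size and take rate are assumptions for illustration only.

USABLE_DOWNSTREAM_MBPS = 27.0   # one 6 MHz channel at 64-QAM
HOMES_PASSED_PER_NODE = 1000    # assumed; typical HFC nodes pass 500-2000 homes
TAKE_RATE = 0.10                # assumed fraction of homes that subscribe

subscribers = HOMES_PASSED_PER_NODE * TAKE_RATE
fair_share_mbps = USABLE_DOWNSTREAM_MBPS / subscribers

print(f"{subscribers:.0f} subscribers sharing {USABLE_DOWNSTREAM_MBPS:.0f} Mbps")
print(f"fair-share downstream per subscriber: {fair_share_mbps:.2f} Mbps")
# 100 subscribers sharing 27 Mbps
# fair-share downstream per subscriber: 0.27 Mbps
```

Stepping up to 256-QAM (roughly 38 Mbps usable) or adding a second channel only grows the numerator a little; the fair share stays a pittance as long as the larger model confines the modem space to a channel or two out of the whole plant.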
There are thousands of tons of coaxial connectors, field amplifiers and converters, and black cable still out there, hanging on the tail ends of what are, in my opinion, archaically designed fiber backbones, and before long they will prove these assertions correct.
This is an area that will be ripe for assault in another year or two, IMO: possibly by the power companies in alliance with some smart guys, if they ever wake up, or by some combination of wireless and VDSL riding fiber backbones to the neighborhoods.
One may deduce from this that future payoffs from current designs will not materialize as we expect them to, nor with the same advantages that we have assigned to them in the past and assign to them now.
There will be formidable competition, in other words, from other sectors, and it will reduce the relative advantage that these cable ISPs now enjoy. I don't mean to belittle the excellent work of the engineers who have successfully made something out of nothing, as the CableLabs folks have done with black cable in electronically noisy environments. But while the work that has gone into HFC is commendable, it just isn't extensible or flexible enough to accommodate the future... even for a single SP, much less multiple SPs.
@ShieldsUp, Frank Coluccio