Kachina,
You've asked some legitimate questions, although the devices you fall back on as the status quo, namely the newly conceived single-laser engines and the DWDM jukeboxes [coined such for the noise they make], are not all as robust as they are made out to be.
>> Namely - what is the real practical upgrade path for this technology? <<
I think that it is immense, over time, if the company is able to produce what it claims. The means by which it is achieved in the final product will undoubtedly need to be refined... thus far we've only seen their breadboard version. And even though we're taught to think in Internet time these days, the refinement process (pending their ability, again, to demonstrate) will very likely take longer than the shelf life [half-life?] of the devices we are now comparing it against.
>"Is a 2.5 multiplier for bandwidth worth the risk to a telecommunications company to buy from a new untested vendor?"<
I don't know where you are getting the 2.5 factor. Times what? Times 40 Gb/s? That's too arbitrary, and probably meaningless. I think the factor would need to go beyond the now-very-popular threshold of 10, more like 100 or greater (why wait another generation for the next improvement to be demonstrated?).
I don't know what it is or will be exactly, but bandwidth is only part of the gain. The remaining advantage lies in the ease of administering multiple baseband signals on the long-haul side of the equation. On the short haul (looking back into the central office and the various subordinate feeds that make up the aggregates), things are not so clear. Today, those lower-order feeds are mostly SONET tributaries that have been groomed using digital cross-connects, or DCSs, a la TLAB, LU, NT, ALA and others.
The SR engine would need either to adapt to these conventions and fall into line with the others on the central-office side (which means perpetuating the DCS model for another twenty years), or to apply its innovative principles further inward, in the region of the extant SONET rates, in order to maintain a homogeneous look and feel.
I don't think that the latter is likely to occur in absolute terms, for some of the reasons you've cited, but some compromise is fathomable to me. These characteristics were covered to some extent upstream, several months ago.
Assuming that the improvement factor is somewhere between 10 and 100, or beyond, my answer would be yes. If it were truly limited to 2.5, then no. But like I said, I see no reason to consider that region of gain.
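To put rough numbers on that (illustrative only; the figures are assumptions built from what is already floating around this thread: a 40 Gb/s aggregate as one possible base for the 2.5 claim, 10 Gb/s OC-192 lambdas, and a 16-lambda DWDM, none of them SR's own numbers), a back-of-the-envelope sketch in Python might look like this:

    # Rough, illustrative arithmetic only -- assumed figures, not SR's.
    # One reading of "2.5x": it multiplies a 40 Gb/s aggregate (the base
    # the claim never specifies).  Compare against what an installed
    # 16-lambda DWDM already delivers at 10 Gb/s (OC-192) per lambda.

    base_gbps = 40                      # assumed base for the 2.5x claim
    oc192_gbps = 10                     # standard OC-192 wavelength rate
    installed_dwdm = 16 * oc192_gbps    # 160 Gb/s per fiber, already deployed

    for factor in (2.5, 10, 100):
        candidate = factor * base_gbps
        verdict = "clears" if factor >= 10 else "fails"
        print(f"{factor:>5}x -> {candidate:6.0f} Gb/s vs {installed_dwdm} Gb/s "
              f"installed ({verdict} the 10x 'octave rule')")

    # Under these assumptions, 2.5x (100 Gb/s) does not even match the
    # installed 160 Gb/s aggregate and fails the octave rule; only the
    # 10x-to-100x region starts to justify the risk of an untested vendor.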
>>Those two factors are key. Telecommunications companies need very long MTTF or else the ability to swap out very, very fast.<<
I speak to folks who run xLEC operations in Manhattan. The first several generations of 4-, then 8- and 16-lambda DWDMs were pathetic. MTTF was a joke. Many of the lambdas were held in reserve as "spares" (a much higher sparing percentage applies here than is used for dark-fiber outages) due to quality issues and unsound concepts employed in their makeup. But, I'm told, things have gotten better there. The point is that one must crawl before walking, before running.
Some of the larger DWDMs have so many moving parts in them that they are a nuisance for their noise and rattle. Yet these are the devices against which we now compare alternative approaches.
The issue I've had with this thread is that we've spent so much time arguing the merits of unprovable theories that we have lost sight of the potential practical implications of the device.
>>The octave rule says that with a 10X price/performance improvement you win over installed technology with a track record.<<
That was the point I was making above. My only deviation would be that sometimes you have to get off the tried-and-true track in order to achieve the greater rewards. Nothing new there.