Mike, great questions. I like QWST too. But if you load up a truckful of QWSTs, IIXCs, MFNXs, GLBLXs, LVLTs, GSTXs, and the newfound desire on the part of the incumbent IXCs to unleash new DWDM capacity, where does all of this capacity go? It's great to have terabit capacities on each and every individual strand in the long-haul routes, but traffic stuck in the core is traffic going nowhere, and this only further accentuates the effects of the last-mile bottleneck.
I just received word from a client this evening that the two largest Interexchange Carriers have forced them to put off one of their T3s for an imaging application, due to their collective inability to deliver additional capacity to the subscriber location (a major bank!) at this time. The two carriers share the blame equally: one has been late in providing the long haul, and the other, through its local services division, is late in providing the end section to the subscriber.
The client will have to wait another four weeks for a pipe that is already two months overdue (i.e., it's already been five months instead of three), because the long-haul IXC itself originally had capacity problems in its switching and digital cross-connect fabric. Oy!
Back to the bandwidth "glut," however: when a select handful of users becomes last-mile-enabled, the flows begin to increase in their direction, and the universe of web applications begins to catch on, increasing their multimedia content-handling capabilities for this class of users (usually business-grade, or affluent by some other measure). Hence, the requirements imposed on ALL end users tend to increase accordingly. This is nothing new, of course; it's similar to the way it's happened with every upgrade in browsers on the desktop... only those browsers, unlike bandwidth, are free!
What, then, happens to the remaining masses whose loops and browsers are still stuck in the mud at 56k and ISDN speeds? Where owning a Pentium versus a 386 was once the measure of one's ability to navigate or _not_, the new measure will likely be loop-speed related.
The new abundance in the cores of carriers' networks cannot be absorbed by the applications community (subscribers and second- and third-tier ISPs) quickly enough to make much of that new capacity useful, much less overnight, because there simply isn't enough onramp/offramp capacity (i.e., ports) to funnel it to where it has to go.
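To put a rough number on that mismatch, here's a back-of-envelope sketch. The figures are purely illustrative assumptions (a nominal 1 Tb/s DWDM strand, standard T3 and 56k rates), not data from any carrier:

```python
# Back-of-envelope: how much edge-port capacity it would take to fill
# one long-haul DWDM strand. All figures are illustrative assumptions.

STRAND_GBPS = 1000   # assume ~1 Tb/s per fiber strand, as claimed for new DWDM gear
T3_MBPS = 45         # one T3 subscriber pipe, ~45 Mb/s
DIALUP_KBPS = 56     # a typical last-mile modem

# Number of fully saturated edge ports needed to fill a single strand
t3s_to_fill = STRAND_GBPS * 1000 / T3_MBPS
modems_to_fill = STRAND_GBPS * 1e6 / DIALUP_KBPS

print(f"T3 ports to saturate one strand:   {t3s_to_fill:,.0f}")    # ~22,000
print(f"56k modems to saturate one strand: {modems_to_fill:,.0f}")  # ~18 million
```

Tens of thousands of T3s, or tens of millions of dial-up lines, per strand — and that's before counting the multiple strands per route. The onramps, not the core, are the constraint.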
The beneficiaries are the "vendors" of hardware and software who produce the much-needed ports (and I use the term ports very loosely here, to denote both hardware and software attributes in network elements) in this symbiotic (or mutually self-defeating, depending on one's circumstances) situation.
Also, just think about what happens to an ISP's infrastructure once it is suddenly looking at order-of-magnitude increases in affordable bandwidth potential and gets swamped by its customers demanding enhanced, multimedia-rich content and services.
Not the least of which to consider: many of the desktops in use today will not be adequate to process flows a thousand times faster than today's (while still handling background processes as they do now), unless they are upgraded or replaced. Opportunities.
On the carrier front itself, it's difficult to answer your "where would you invest" question unless you specify whether this new form of imminent competition you perceive will be unrestricted (totally de-regged) or the "handcuffed" variety that we have all become all too accustomed to.
Excellent questions, though... And I'm sure someone else has a better analysis than mine from a fundamentals perspective...
Regards, Frank Coluccio