Thanks for the CC #, Dave. Overall, I found it to be a very interesting call, and a bit more informative than I thought it would be. -----
I recommend that anyone interested in this technology listen to the conference call before it scrolls into oblivion. Again, the number is 888-566-0045 passcode 5689. -----
The call went pretty much like I guessed it would in my previous post. Some notes, along with some rambling thoughts which have occurred to me since listening to the call several hours ago:
- I think that it was McGinn who mentioned an overall $400 Million investment in all forms (cash, R&D, materials and products);
- TB will be using a four-quadrant (four-sector) approach, 90 degrees per sector from any given building hub site to cover the full 360, much as I recalled in previous posts and assumed would be the case with TB, similar to the r-f digital termination service (DTS) approach of the 80s.
[[Begin sidebar: The DTS pilot in the late Eighties that I was familiar with used all four building rooftop corners of the WTC in NY City to service parts of Manhattan and parts of the "outer" boroughs. Line of sight was a bitch to achieve for large numbers of "like target sites," as I recall. This was important because for multiple branch office applications (when many sites need to be serviced in an identical manner with identical provisions), enterprise network managers do not like to split up a single application type among different architectural models (hence multiple carriers providing dissimilar services). Instead, they prefer to preserve a single set of architectural rules and topologies, although they will seek to use multiple providers as backups to one another for the same architectural model.
I can recall assisting NY City in attempting to get all of their OTB (off-track betting) parlors aligned within a given DTS footprint as termination points on this network. A similar point-to-multipoint scheme existed (at DDS speeds of 19.2 kb/s, 56 kb/s and T1 rates, not Gb/s rates) using this r-f approach, which shared many of the same concerns as TB's p-mp scheme, and window shots were also allowed. The problems we faced as consultants in validating this model were many. In one instance, the problem had to do with where the target "offices" were located. Often they were in ground floor dwellings, or inner-concourse locations within the bowels of skyscrapers, which were blocked from outside view. The same exists for tenants on upper floors who are situated in the cores of buildings, as opposed to being situated along the outside walls. But the ground floors are where many opportunities exist, and where it's most difficult to satisfy them. Many chain stores and bank, brokerage, etc. branch office operations fit this mold, for example. Bank branches, specifically, happen to be very bandwidth intense. The only way to get to many of them is through a neighbor's premises, or by extending a cable through risers, or otherwise, to where an antenna or transceiver can be sited in an optimal way. Ultimately, however, where there is a will, there is a way. End sidebar.]]
- LU did a nice dance-around when questioned about how they would balance relationships with other service providers who were also customers of LU's wireless wares. A nervous dance, but a nice one;
- Hesse made some good points about the ability to add spectrum at will through the use of wdm, and the ability to effect spectrum re-use more easily and elegantly than other forms of wireless (rf) could;
- re the blinding snow effects on the Great Lakes, some more dancing. In some situations they will reach the point of diminishing returns for some regional/local "atmospherics." The question will be, will TB concede those less desirable markets when those conditions arise, or will they simply design and engineer higher densities of hubs per unit of coverage? Given that they will only seek to cover 100 markets in the next 4 years, I suppose they can be rather selective in choosing their locations without having to lose face on this count.
- established fiber carriers (I heard both CLEC and long haul fiber carriers; can someone clarify this?) are slated to become operating partners and investors in TB. This makes sense. It wouldn't surprise me to find out at some point that some of the RF wireless plays also elect to play. I may have mentioned this to you in the past, maybe not, but I once made a strong play to Teleport to use IR as a means of early service delivery for T1s and T3s (when fiber routes took months to dig up), and for disaster recovery purposes, only to have TCG reject the idea and go out and buy a microwave radio company, instead. [ Oy. ]
In some urban markets, dark fiber has already been installed, or is about to be spread, sufficiently to make over- (HYPER-) deployments of IR in areas with poor atmospherics more expensive and riskier than they are worth, making it a better decision to leave those markets alone altogether.
These types of situations would make for interesting tradeoff analyses. It's easy to see how TB will exert a tremendous amount of pressure on the new dark fiber carriers. Without an incentive like beating TeraBeam to the punch, perhaps the dark fiber would "remain dark" and a lot more expensive for far longer periods of time.
With TB-like technologies encroaching, however, perhaps dark fiber deployments will come about faster and more affordably.
Whoever would have thought that we would be talking about beating down dark fiber pricing and availability so soon in the game, before it's even had a chance to be fully deployed? --------
Now, something sticks its head up here that I must comment on. And that is: if the end users and the hubs supported by TB are able to communicate with one another at Gb speeds, and if transport from one TB hub to the next over LU's OpticAir links is taking place at super-gigabit speeds due to DWDM, then doesn't this new community of end-user subscriber activity hanging off the edge of the Internet need somewhere to go?
Who is going to pay for the upstream multi-gigabit (terabit?) links needed to connect these new service providers' bubbles of ubiquitous, abundant bandwidth to the larger Internet's core?
Who will be responsible for establishing these Internet routes, peering relationships and links to the ISPs that these end users have been using all along? In the end, most ISPs still hand off to one another over wide area distances using T1s and T3s, some at OC-n rates, and far fewer at IP over lambda or IP over Sonet.
Will TB have native IEEE Ethernet connectivity to each of the established ISPs that its (TB's) customers may choose to be connected to? This was a major point during the call: the fact that LANs use Ethernet, and that TeraBeam was ideally situated to take end user LAN payloads in their native Ethernet form and shift them to other end users' LANs in the same Ethernet protocol without having to convert them to T1 and T3 formats. I find this somewhat amusing, since most Network Access Point locations on the Internet themselves usually involve some form of protocol translation, requiring conversion to and from T1s and T3s through digital cross connects or ATM devices. IP and Ethernet will pervade the TB cloud within its own zone of reach, but at some point it will need to convert many of those streams to conventional T1 and T3 rates.
That is, unless every end user is going to be connected to the larger cloud at Gb rates. For the foreseeable future, I don't think so, at least not where the small- to mid-sized commercial user is concerned.
The TB spokesman stated that "dozens and dozens" of users could be satisfied as multipoint end-points off of a single hub, with each one being rated at 1 Gb/s. If we just consider that there might be 24 users per hub, and each of them were only operating at half throttle, that would present a burden/load of some 10 to 12 Gb/s in the access part of the network, sucking capacity from, or pumping data towards, the edge and the core, and once again in reverse to the distant site. That equates to an OC-192, or 10 Gb/s. Has anyone here priced out an OC-192 lately?
Of course, this becomes a ridiculous assessment from two different views. In one sense, we are only talking about twenty-four window shots, when in actuality there would be thousands of users supported per hub site; I'll let you run the numbers yourself (or see the sketch below). And from another view, very few small business office users would have a gig of data to send continuously, such as I've presented above, at this time. Or, let's put it this way: they haven't until now, due to the costs of doing so.
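Just to make that arithmetic concrete, here's a quick back-of-envelope sketch in Python. The subscriber counts and utilization figures are illustrative assumptions on my part, not anything TB quoted.

# Aggregate offered load per hub, compared against an OC-192's ~10 Gb/s.
OC192_GBPS = 9.953    # SONET OC-192 line rate

def hub_demand_gbps(subscribers, rate_gbps=1.0, utilization=0.5):
    return subscribers * rate_gbps * utilization

for subs, util in [(24, 0.5), (24, 0.1), (1000, 0.01)]:
    demand = hub_demand_gbps(subs, utilization=util)
    print(f"{subs:5d} users at {util:.0%} utilization -> "
          f"{demand:5.1f} Gb/s ({demand / OC192_GBPS:.1f} x OC-192)")

Twenty-four half-throttle gigabit users already overrun a single OC-192; a thousand users at a one percent duty cycle land in the same neighborhood.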
The main point here is that despite TeraBeam's ability to affordably open up the "first mile" with gigabit speeds, someone still has to pay to get it to the "last mile," and that means negotiating access charges on the larger cloud (the edge and the core) as well as to the remote site. We've all been to this movie before, I'm sure. But when listening to the news concerning this renewed IR approach, it pays to repeat some of the basics, lest some old pieces of wisdom get lost in the excitement: The overall speed is limited by the slowest link in the path.
What happens here is this: instead of making access to the 'net unencumbered --once and for all, through the creation of faster first mile links-- we simply wind up shifting the bottleneck and its associated costs elsewhere in the mix, once again. Does this sound familiar?
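If it helps, here's that bottleneck-shifting point reduced to a few lines of Python; the hop speeds are invented purely for illustration.

# Upgrading the first mile doesn't remove the ceiling; it just moves it
# to the next-slowest hop along the path.
def path_ceiling(hops):
    name = min(hops, key=hops.get)
    return name, hops[name]

before = {"first mile": 1.5, "ISP port": 45.0, "backbone handoff": 45.0}
after = {**before, "first mile": 1000.0}     # TB-style gigabit access

for label, hops in (("before", before), ("after", after)):
    name, speed = path_ceiling(hops)
    print(f"{label:6s}: ceiling {speed} Mb/s, set by the {name}")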
And here, I submit, is where the proverbial fly in this ointment exists. For the "potential" speed from the end point to the first hub has little, if anything, to do with how much of the larger cloud one may actually enjoy.
What we'll most likely see here is one of the same dynamics that will ultimately affect cable modem and DSL "realized speeds" (where it hasn't already): unforeseen usage patterns which were never taken into account six years ago when these platforms were conceived, and the inevitable bottlenecking that will ensue. Some of this, particularly in the residential consumer realm, will be due to carriers' and cable operators' unwillingness to adequately bolster bandwidth towards the core once the bear has been trapped and certain thresholds of subscription have been achieved. For businesses, however, it will be somewhat different.
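A crude way to picture that consumer-side dynamic, with wholly made-up figures: each subscriber's realized speed is capped either by his own access link or by his share of whatever backhaul the operator has provisioned toward the core.

# Realized per-subscriber speed under a shared, fixed backhaul pool.
def realized_speed_mbps(access_mbps, backhaul_mbps, active_subscribers):
    return min(access_mbps, backhaul_mbps / max(active_subscribers, 1))

for active in (10, 100, 1000):
    speed = realized_speed_mbps(access_mbps=10.0, backhaul_mbps=155.0,
                                active_subscribers=active)
    print(f"{active:5d} active users -> {speed:6.2f} Mb/s each")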
For businesses, it really will matter very little how fast your access line speed is to your user hub (except if you are connecting to another user on the same hub, or to servers or caches within a domain immediately accessible to that hub). Instead, what will matter is the Internet "port speed" that you subscribe to from your ISP. This is the overall scaling factor for your organization's access to the cloud (fractional T1, T1, fractional T3, T3, 100 Mb/s Ethernet). In addition to the port speed, you must pay for an access line speed, and mind the wording of the associated service level agreement (SLA) or quality of service (QoS) levels specified at the time of service subscription. Here you might find committed information rates, bursting rates, etc., which define how much bandwidth you get, and under what conditions, per individual end point.
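To sketch how those pieces interact for a single business subscriber (the numbers here are hypothetical, chosen only to show which parameter actually binds): the sustained rate is bounded by the CIR, and even permitted bursts never exceed the lesser of the access line and the port.

# Which parameter binds: access line speed, ISP port speed, or CIR.
def sustained_mbps(access_line, port_speed, cir):
    return min(access_line, port_speed, cir)

def burst_ceiling_mbps(access_line, port_speed):
    return min(access_line, port_speed)

access, port, cir = 1000.0, 45.0, 20.0   # gigabit first mile, T3 port, 20 Mb/s CIR
print(f"sustained: {sustained_mbps(access, port, cir)} Mb/s")
print(f"burst ceiling: {burst_ceiling_mbps(access, port)} Mb/s")

The gigabit first mile never enters into it; the subscriber's experience of the cloud is the 20 Mb/s CIR, with bursts to 45 Mb/s at best.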
Let me put it in better perspective. A simple 1.5 Mb/s T1 port to the 'net can cost several thousand dollars, in addition to the cost of the line itself which goes between the subscriber and the ISP's POP. A T3 port to the 'net operating at 45 Mb/s can run between $35,000 and $60,000, typically. These, again, are the rates for cloud access and some ISP admin features, but not the dedicated subscriber line charges to get to the cloud (those line costs are extra).
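Worked out as cost per megabit (taking, say, $3,000 for the T1 port as a point within the "several thousand dollars" range, and the T3 figures as quoted), the economy of scale is obvious:

# Rough cost per Mb/s of cloud access, using the port prices above.
ports = {
    "T1 port  (1.5 Mb/s)": (1.5, 3000),        # assumed point in the range
    "T3 port, low  (45 Mb/s)": (45.0, 35000),
    "T3 port, high (45 Mb/s)": (45.0, 60000),
}

for name, (mbps, dollars) in ports.items():
    print(f"{name:24s} -> ${dollars / mbps:7,.0f} per Mb/s")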
What do you suppose a Gigabit port to the Internet would be worth? It's a ridiculous question, so I don't expect an answer, but it does demonstrate the point that I was making about the disparity between line rates in the first mile, and what that speed will ultimately be reduced to if you don't have similarly rated links throughout every part of the overall Internet connection.
Access to the larger cloud at like speeds (in this case, at Gb/s speeds) will ultimately demand that the end user pony up and pay the piper at yet another entry point. Otherwise, we're talking about flow controls and bottlenecks, if gigabit speeds are actually required and expected across Internet connections, such as these parties (TB and LU) are implying is possible. -------
Looking at it from another perspective, however, reveals some other interesting possibilities. Being a part of this new kind of urban or suburban bubble (isolated cloud) may lead to some other kinds of architectures within large urban settings, akin to how ATHM has its own backbone and caching defined on its own network, or any other isolated (island) network.
A lot of possibilities exist here, and they are not all entirely harmonious with the existing lay of the land, or with the current rules of engagement on the Internet.
Indeed, I see areas where disruptions and dislocations can easily take place, and not necessarily simply in the realm of supply/demand metrics, but in how architectures are defined and how work flows within enterprises might be altered, as well. Perhaps not as securely or reliably as could be achieved with fiber, but generally within the same set of considerations. -------
In TB's opening remarks, Hesse stressed several times the dominance of IP and Ethernet in networking today. He drove that point home very hard in the beginning. Later on he mentioned how optics would bring the attributes of storage and processing together in ways that have never been possible before.
This struck me as rather curious, since he didn't mention whether Fibre Channel would play a part in the storage retrieval scheme or not, where TB was concerned. I tend to think not, as the introduction of yet another architectural consideration at this point, especially one such as FC, would introduce several levels of complexity that TB would rather not have to deal with. Like they said, IP and Ethernet. KISS, IMO, and damn the shorter frame sizes.
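As a footnote to that frame-size jab, here's a trivial calculation of standard Ethernet framing overhead (header plus FCS, preamble, inter-frame gap) showing why payload size matters on the wire; the payload sizes are just sample points.

# Share of the link carrying payload rather than Ethernet framing.
ETH_OVERHEAD_BYTES = 18 + 8 + 12   # header+FCS, preamble, inter-frame gap

def wire_efficiency(payload_bytes):
    return payload_bytes / (payload_bytes + ETH_OVERHEAD_BYTES)

for payload in (64, 512, 1500):
    print(f"{payload:5d}-byte payload -> {wire_efficiency(payload):.1%} efficient")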
----- Then again, I recall clients running DES-encrypted HyperChannel between Mainframes and controllers over IR links at super Mb/s rates in the past. What do you think? Comments welcome.
FAC