Hi Ossy,
Agreed, in principle. As one contemplates what all of this means, it's important to take into account the differences in scale, functionality, and means of connectivity between a wireless base station and a main fiber node in a metro-scale business district, where the latter feeds not only voice services and remote wireless base stations, but broadband data services as well.
The allure of a distributed architecture yields to economies of scale at some point, once logistics and logical network node sitings are considered. In general, the more 'logical' a process is, the easier it is to distribute; the more physical it is in nature, the more it tends toward centralization, although, as we have seen, even these tendencies are subject to cyclical phenomena.
IP voice, for example, is best satisfied by distributed servers that can be placed just about anywhere, within reason. Route nodes comprising fiber facilities and DWDM muxes, on the other hand, look for economies of scale and seek a nexus-like convergence on a more physical footing, one best satisfied by shortest-distance connectivity. This includes peering and NAP-like functions, too. And until things change, which would mean today's more sparsely populated routes filling up, this will usually mean that all roads continue to lead to Rome.
I recently attempted to answer some questions on another board about some of these, and related, issues, which I've posted below. I invite comments and corrections:
-------begin snip:
You've asked some very good questions. Many of them cannot be answered in binary yes-no fashion, since central offices of different sizes and roles will dictate how they and their network elements are configured.
“Some things that these pictures cause me to wonder about: CO = one SONET ring node = one 5E ESS switch ?”
A good example of what I've just stated above. It's conceivable that a rural, or very small urban, central office might resemble what you've asked about, but it's not likely. There is no one-to-one linkage between a Class 5 switch such as the Lucent 5ESS (Nortel's DMS100 and Siemens' EWSD are others) and the number of, or connectivity to, SONET rings, as I'll go on to explain below:
“Does the 5E ESS perform the SONET ring node functions (OADM, restoration), or is that done by a dedicated "SONET node" device?”
While some Class 5s have T3 and optical OC-n line cards and can perform some rudimentary mapping functions, the majority of OADM/restoration/tributary-mapping functions are performed externally in digital cross-connect systems (DCSs), sometimes also referred to as DACSes (Digital Access and Cross-Connect System, a name trademarked by AT&T's Western Electric unit prior to divestiture). These serve to 'front end' the voice switching systems, and it is the DCS that ultimately "terminates" and "grooms" the flows supported by SONET rings. Some of this groomed traffic is sent on to the voice switches, some is passed along the transport network to other Class 5 offices, and some flows (tributaries) are mapped to long-haul, interstate, inter-LATA facilities.
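To make the grooming notion concrete, here's a toy sketch (in Python) of what a DCS conceptually does with incoming tributaries. The tributary IDs and the three destination categories are assumptions lifted from the paragraph above, not any vendor's actual provisioning model:

    from collections import defaultdict

    # Hypothetical tributaries terminating off a SONET ring: (id, destination).
    incoming = [
        ("DS1-0001", "local_voice_switch"),   # handed off to the Class 5 switch
        ("DS1-0002", "other_class5_office"),  # stays on the transport network
        ("DS1-0003", "long_haul"),            # mapped to inter-LATA facilities
        ("DS1-0004", "local_voice_switch"),
        ("DS1-0005", "long_haul"),
    ]

    def groom(tributaries):
        """Bundle tributaries by destination, as a DCS front-ending a voice
        switch conceptually does when it 'grooms' ring traffic."""
        bundles = defaultdict(list)
        for trib_id, destination in tributaries:
            bundles[destination].append(trib_id)
        return dict(bundles)

    for destination, tribs in groom(incoming).items():
        print(destination, "->", tribs)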
“A 5E ESS switch is just a computer with an I/O architecture and an OS and apps customized for managing (making, breaking, monitoring, billing) 64Kbit/s connections or aggregates thereof, right. And it does this using the control / signaling protocol / standard known as SS7, right.”
Yes, essentially, those are the key elements that are at play.
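For flavor, here's a drastically simplified sketch of what "managing 64 kbit/s connections via SS7" amounts to: a per-circuit state machine driven by ISUP messages. The message names (IAM, ACM, ANM, REL, RLC) are real ISUP types; everything else here is a toy assumption, and real call processing, billing included, is vastly more involved:

    # One trunk circuit's life cycle, driven by ISUP (SS7) messages.
    VALID_TRANSITIONS = {
        ("idle",      "IAM"): "setup",      # Initial Address Message: circuit seized
        ("setup",     "ACM"): "alerting",   # Address Complete: far end ringing
        ("alerting",  "ANM"): "connected",  # Answer: 64 kbit/s path cut through
        ("connected", "REL"): "releasing",  # Release: tear the call down
        ("releasing", "RLC"): "idle",       # Release Complete: circuit idle again
    }

    def handle(state, message):
        next_state = VALID_TRANSITIONS.get((state, message))
        if next_state is None:
            raise ValueError("unexpected %s in state %s" % (message, state))
        return next_state

    state = "idle"
    for msg in ["IAM", "ACM", "ANM", "REL", "RLC"]:
        state = handle(state, msg)
        print(msg, "->", state)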
“Since a SONET node presumably only needs one fiber cable each to the preceding node and to the following node, I assume that the bulk of the cables emanating from a CO would have to be access cables---either copper twisted wire pairs (carrying analog signals to/from individual customer) or fiber (carrying multiple 64 Kbit/s signals TDM'ed to/from some sort of distribution mux/demux box/switch downstream of the CO, closer to the customers), right.”
What you've stated is possible, and may very well be the case in most instances. I'd like to point out, however, that most ILEC/CLEC SONET rings don't employ a single fiber. To break it down, many transport-level SONETs actually employ four strands (two hot, two standby), and those strands might be distributed over one, two, or more cables for reliability purposes.
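The reasoning behind the standby pair can be sketched in a few lines. This is a bare simplification for illustration, not a faithful four-fiber BLSR implementation:

    class Span:
        """One ring span between two offices, with working and protect pairs.
        Distributing the pairs over physically separate cables is what keeps
        a single cut from taking out both."""
        def __init__(self, name):
            self.name = name
            self.working_ok = True   # the 'hot' pair
            self.protect_ok = True   # the standby pair

        def carry_traffic(self):
            if self.working_ok:
                return self.name + ": traffic on working pair"
            if self.protect_ok:
                # SONET protection switching aims to restore in under 50 ms.
                return self.name + ": switched to protect pair"
            return self.name + ": span down; ring must route traffic the other way"

    span = Span("CO-A <-> CO-B")
    print(span.carry_traffic())   # normal operation
    span.working_ok = False       # e.g., the working cable is cut
    print(span.carry_traffic())   # traffic survives on the standby pair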
“What makes a CO so big (that it would require an entire building)? Surely not the one fiber cable each to the preceding and following SONET node, or the SONET OADM box, so it must be: 1) The boxes to which the access lines connect? 2) The 5E ESS switch whose size would be proportional to the size of the access network? 3) Or is it not so much equipment but people---the humans [and their tools, terminals, etc.] required for the care and feeding of the 5E ESS and other lesser switches, and for administration / paperwork / bureaucracy?”
That's a timely question, and one that caused me to reflect on the many counteracting forces currently at play. The demands on central office space vary, sometimes in non-intuitive ways.
First, while the footprints of switching machines have shrunk over time due to micro-miniaturization, the number of connections they terminate has increased disproportionately, resulting in an offsetting effect. Not a complete offset, but the dynamic does exist.
Secondly, very fat twisted-pair cables (some of which contain upwards of 3,600 pairs), both underground and in aerial runs from pole to pole, are rapidly being replaced by optical cables whose cross sections, capacity for capacity, are on the order of one ten-thousandth of their older copper equivalents. Yet the newer optical model makes enormous demands on central office space, power, and air conditioning.
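To put rough numbers behind that density claim, here's a back-of-the-envelope calculation. The cable diameters and the OC-48 loading below are illustrative assumptions, not measurements of any particular plant:

    # Cross-sectional area consumed per voice circuit, copper vs. fiber.
    import math

    copper_pairs = 3600          # circuits in a very fat twisted-pair cable
    copper_diameter_in = 3.0     # assumed outer diameter of that cable, inches

    ds0_per_ds1 = 24             # voice channels per DS1
    ds1_per_ds3 = 28             # DS1s per DS3
    ds3_per_oc48 = 48            # one DS3 per STS-1; OC-48 carries 48 STS-1s
    fiber_diameter_in = 0.005    # a ~125-micron strand, in inches

    ds0_per_oc48 = ds0_per_ds1 * ds1_per_ds3 * ds3_per_oc48   # 32,256 circuits

    copper_area = math.pi * (copper_diameter_in / 2) ** 2
    fiber_area = math.pi * (fiber_diameter_in / 2) ** 2

    print(copper_area / copper_pairs)    # ~2.0e-3 sq in per circuit
    print(fiber_area / ds0_per_oc48)     # ~6.1e-10 sq in per circuit

Whichever way you normalize it, the spread works out to several orders of magnitude.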
Thirdly, there are other offsetting effects we tend not to think of, having to do with the mandatory co-location of competing carriers' hardware platforms and interconnects, a rather recent phenomenon that also eats into CO space, power, and air conditioning supplies, in ways that could actually reverse (and in some cases already have reversed) any space and power advantages gained through miniaturization. With co-location also comes the need for territorial demarcations, guard zones for security purposes, and other 'foreign entity' accommodations that were never an issue before.
Hence, a new set of requirements that demanded zero real estate in earlier times. All of these latter considerations could collectively be lumped into the 'administration' category you referenced in your question, on top of the usual level of CO administration, which, contrary to what most would think, is surprisingly sparse from a real estate perspective.
So you have this burgeoning level of new traffic being enabled by optical technology and the Internet in general, and a concomitant increase in the amount of central office space required, at a time when micro-miniaturization would suggest those physical parameters should be decreasing. To some degree they are, but not as fast as VLSI alone would suggest.
“Would an IP world (were it ever to come to pass) have any fundamental advantage over the telephony world in equipment space and administration overhead required for whatever the CO-equivalent structure would be [presumably it would be some sort of GigE-over-WDM MAN "node"] ?”
Central offices "are" [albeit slowly] migrating to an IP world, very often by way of IP over ATM. All of the larger ILECs now have gateway relationships with VoIP outfits, mostly with international Internet telephony service providers (ITSPs), clearinghouses, and settlement hubs. Some have already subbed work out to softswitch entities to offload some of their modem traffic, and now appear poised to procure adjunct boxes that they would bring in-house as well. The larger LECs, however, do not seem interested at this time in supporting their voice services over GbE.
As to whether or not there would be a space advantage in going with VoIP, it's hard to say, and again the answer is not straightforward. Ultimately, if VoIP becomes pervasive, what we'll see is another form of displacement and substitution taking place (like the one I illustrated above, where fiber replaces copper at the expense of increased power and air conditioning), where the space requirements don't actually go away.
The need for space might actually increase, in sum, but become more dispersed over a distributed architecture. This could lead to the illusion that less space is required in Location A, where in reality additional space is required in Locations B, C, D through n.
While such a design would contribute to a more robust architecture from a survivability perspective (assuming it matures and proves in, as advertised), it doesn't really remove the need for similar levels of space, power, and air. In fact, by decentralizing these functions, more logistics would be required as economies of scale are lost to a more diffuse set of circumstances.
Recall that SIP-based VoIP uses servers and gateways, plus an increasing number of presence and other specialized servers, that are not necessarily collocated with the distribution plant serving local subscribers. But the more salient point here is that ILECs are not, at this time, replacing regular POTS provisions with strictly VoIP wares. Instead, they are gradually incorporating IP capabilities in niches such as long haul and Internet modem access solutions.
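To make that first point concrete: a SIP element is just a server exchanging text messages over IP, and the proxy that handles the request below could sit in any data center with reachability, nowhere near a wire center. The shape follows the canonical RFC 3261 INVITE; the hosts, addresses, and tags are hypothetical:

    # A minimal SIP INVITE as it would appear on the wire (SDP body omitted).
    invite = "\r\n".join([
        "INVITE sip:bob@example.com SIP/2.0",
        "Via: SIP/2.0/UDP pc33.example.com;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        "From: Alice <sip:alice@example.com>;tag=1928301774",
        "To: Bob <sip:bob@example.com>",
        "Call-ID: a84b4c76e66710@pc33.example.com",
        "CSeq: 314159 INVITE",
        "Contact: <sip:alice@pc33.example.com>",
        "Content-Length: 0",
        "",
        "",
    ])
    print(invite)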
The ILECs do not appear to be moving very quickly toward replacing their Class 5 switches, however. Notably, some Class 5 subscriber line cards, multi-port modules serving individual users, are actually optioned for IP and DSL capabilities, but I'm not aware of any campaign to move forward with such offerings to regular residential or business subscribers at this time, except for some mention of aggregating dial tone DSL lines for SOHOs and branch offices of larger firms.
--- end snip
FAC