Part 2 of 2 in reply to: WEBIII
------------
Where the first order (RM-1) addressed end-user considerations and perspectives, the second order (RM-2) addresses those of the service providers.
In other words, what we've been seeing introduced here is a second order of reference modeling (RM-2) in the SP dimension, as opposed to that of the original OSI-RM (RM-1) in the end user dimension.
We can regard these as being 'momentarily' distinct from one another for several reasons. One reason is that users will continue to employ Cisco-like routing schemes on top of the newly created optical transport schemes, which represent, to them at least, Layer 1. What this means is that even though the SPs have routing and path switching taking place constantly inside of them at the SP RM-2 stratum, those optical routing and switching functions go unnoticed by the end user. The two sides are operating at different event recognition states, which goes to the reason why I say 'momentarily.' And that is, again, due to the current 30 fps factor I alluded to earlier, which still represents an advantage, but one which will not last forever. Hence, momentarily.
When users begin approaching the speeds of the SPs, a potential for conflict will suddenly manifest itself, because at that point the second-order RM will no longer appear to be as fast (read: transparent) as it once was. Worse, or more poignantly, at some point end users' own end-point routing and path switching platforms could even begin to overpower those of the SPs, creating the need for yet more channel capacity and switching speed in the latter's platforms as well.
Actually, this second stack of activities, which I'm now calling RM-2, is more correctly viewed as another set of "sublayer activities" within the REAL Layer 1 of the original OSI-RM. And because that framing is consistent with keeping the peace at the international standards-setting bodies, that is how it will be presented in the texts going forward, you can rest assured.
Reference Model 1 = RM-1 = End User perspective
Reference Model 2 = RM-2 = Service Provider perspective
But at the point of confrontation (when user networking speeds and device classifications catch up to those of the SPs), how does one continue to successfully rationalize that the RM-2 model is really only a set of sublayer functions of the original Reference Model's Layer 1? Consider that at that point you will have two different networks whose network elements and speeds are essentially identical to one another's.
When you think about some of the possibilities, they tongue-twist the mind:
If you remain staunchly committed to the letter of the OSI-RM, then you can actually have occurring simultaneously, on the one hand, normal IETF-protocol (Cisco-like) RIP or OSPF routing taking place at RM-1 Layer 3 in an enterprise network, which is riding over an SP's optical network that you perceive to be Layer 1 (still from the perspective of RM-1)... while at the same time the SP is routing massive flows between its edge locations in an MPLS-like manner at its own Layers 2 and 3 (now taking place in the SP's RM-2) in order to take advantage of least-cost (or, on an instantaneous basis, least-congested) routes, in a manner which is transparent to the user who is still routing at Layer 3, in RM-1.
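To make the two-strata picture a bit more concrete, here is a minimal sketch (in Python, with every node name, cost and topology invented purely for illustration) of how the same instant can look from RM-1 and from RM-2: the enterprise's Layer 3 sees a single one-hop "Layer 1" link, while the SP is busily computing a least-cost path across its own internal optical topology behind that link.

import heapq

def shortest_path(graph, src, dst):
    """Plain Dijkstra over a dict-of-dicts graph: {node: {neighbor: cost}}."""
    dist, prev, seen = {src: 0}, {}, set()
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            break
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# RM-1 view: the enterprise topology. The SP cloud is, to the user, Layer 1.
enterprise = {
    "branch-router": {"hq-router": 1},
    "hq-router":     {"branch-router": 1},
}

# RM-2 view: the SP's internal optical topology behind that single "link".
# Costs might reflect least-cost or instantaneously least-congested choices.
sp_cloud = {
    "edge-A": {"core-1": 3, "core-2": 1},
    "core-1": {"edge-A": 3, "edge-Z": 1},
    "core-2": {"edge-A": 1, "core-1": 1, "edge-Z": 4},
    "edge-Z": {"core-1": 1, "core-2": 4},
}

# The user's Layer-3 decision is trivial: one hop over what it sees as Layer 1.
print("RM-1 route:", shortest_path(enterprise, "branch-router", "hq-router"))

# Meanwhile the SP routes the underlying path on its own criteria,
# transparently to the user above.
print("RM-2 route:", shortest_path(sp_cloud, "edge-A", "edge-Z"))

Nothing in the sketch is specific to any vendor's gear; it only illustrates that two shortest-path computations can coexist, one per stratum, without the upper one ever seeing the lower one.
-------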
The industry (service providers, suppliers, vendors, regulators, etc.) has decided to take a more pragmatic approach than installing physical fibers directly between all possible endpoint-pair combinations. That is, we cannot expect to see fiber being used in larger venues such as wide area networks (WANs), metropolitan area networks (MANs), etc., with any sort of dedicatedness at the physical layer.
Instead, end users will be connected via fiber (ideally, at some point; until then, by other means) from their locations to the access platforms and then to the edge of larger shared optical clouds, or photonic clouds, which will serve as a form of transparent medium, for all intents and purposes.
In effect, they have added business principles to the calculus of optical networking for many reasons, and while some of these reasons may seem redundant with those of the past, some are actually proving to be new ones, since they are unique to some of the newer enabling qualities of the optical paradigm.
The principal players in this sector have effectively spawned the emergence of photonic "clouds" which are very similar in networking principles to those of ATM, Frame Relay, X.25, and TCP/IP clouds of the current and past. Only, these emerging photonic variants are using some elemental constructs which are vastly different in nature, and which are keyed to parameters which are obviously defined in optics, instead of those which were purely electronic.
These differences ripple out to the types and sizes of protocols and payloads that can be sent, as well, which tends to give the appearance that they are separating, or by some means departing, from the more restrictive rules-based protocols of the past. Under closer examination, however, we often find that this isn't really happening at all.
Instead, we are seeing that the same things are now being done in the optical clouds as have always been done before, only an order of magnitude faster, and several orders of magnitude more cleanly and more transparently from a materials and type-of-equipment point of view. In the process, other forms of equipment are reaching premature (from the standpoint of established norms) obsolescence.
The primary differences between today's asynchronous transfer mode and TCP/IP cloud models, and those which will be "purely" optically defined, will be those of scale: scales of both absolute payload sizing and speed.
Stated another way, bursts of photonically supported payloads will be vastly more abundant in information [potentially] than those of their electronic ancestors, but at the same time they will follow similar, if not identical, principles of network resource sharing based on the calculus of statistical arbitration, route mapping and path switching.
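That "calculus of statistical arbitration" is the same statistical-multiplexing argument that has justified every shared cloud to date. A rough sketch, using entirely made-up numbers for subscriber counts, peak rates and duty cycles, shows why a shared core can be provisioned at a small fraction of the sum of its subscribers' peak rates and still almost never congest:

import random

random.seed(1)

SOURCES    = 1000      # bursty subscribers sharing the cloud (illustrative)
PEAK_RATE  = 1.0       # each source's peak rate, in arbitrary units
DUTY_CYCLE = 0.05      # assumed fraction of time a source is actually bursting
TRIALS     = 10_000    # sampled instants

# Capacity needed if every source got a dedicated, peak-rate channel:
dedicated = SOURCES * PEAK_RATE

# Sample the aggregate offered load at random instants.
loads = []
for _ in range(TRIALS):
    active = sum(1 for _ in range(SOURCES) if random.random() < DUTY_CYCLE)
    loads.append(active * PEAK_RATE)

loads.sort()
p999 = loads[int(0.999 * TRIALS)]   # capacity covering 99.9% of sampled instants

print(f"dedicated capacity : {dedicated:.0f}")
print(f"99.9th pct demand  : {p999:.0f}")
print(f"sharing gain       : {dedicated / p999:.1f}x")

The gain comes purely from the fact that bursty sources rarely peak at the same instant; swap photonic payloads in for electronic ones and the arithmetic is unchanged, which is exactly the point being made here.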
All the while, this optical core must maintain a relative persona of being both truly transparent and dumb from a user perspective, which will be achieved rather easily in the early going because it will be possible, initially, to spoof end-user applications. And the reasons for this narrow down to the fact that end users' applications will initially be much slower than those of the core network SPs, which means that they will still lag considerably behind the speed required to detect (or be affected by) what is actually taking place at the cloud level, or network core.
The dynamic I'm referring to is akin to the experience of the human eye and the spoofing which takes place at 30 frames per second: you can't tell that you are only looking at still shots at 30 fps. So, too, will the photonic cloud appear dumb and entirely transparent, when in reality it will be quite intelligent and, almost always, opaque.
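To put rough numbers on that analogy (the figures below are assumptions chosen only to make the ratio visible, not measurements of any real network), what matters is how many core-level events elapse within one interval that the application above can actually observe:

# Purely illustrative numbers; only the ratio matters, not the absolute values.
# The analogy: 30 fps film is fast enough relative to what the eye can resolve
# that a stream of still frames is perceived as continuous motion.

APP_EVENT_S         = 1e-3   # assume: the application reacts on ~1 ms timescales
CORE_SWITCH_EVENT_S = 1e-6   # assume: the cloud reroutes/switches on ~1 us timescales

def spoofing_margin(observer_interval_s, hidden_event_interval_s):
    """How many hidden core events elapse within one observable interval above."""
    return observer_interval_s / hidden_event_interval_s

margin = spoofing_margin(APP_EVENT_S, CORE_SWITCH_EVENT_S)
print(f"core events per application 'tick': {margin:.0f}")

# As user platforms speed up, APP_EVENT_S shrinks toward CORE_SWITCH_EVENT_S,
# the margin collapses toward 1, and the cloud stops looking transparent;
# that is the crossover described in the text.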
The true telling of the nature of these disparities and similarities in the cadence relationships between SPs and end users will come when end-user applications are able to keep pace with, or surpass, the speeds of the processes taking place within the SP optical clouds.
Since end users will be among those who will be using next-generation terabit routing engines, I fully expect that this realization will not be far off into the future, but could manifest sometime during the next two years, if not sooner, relegating the true meaning of transparency to something still further out in the future. Or worse, rendering it indefinable or unattainable at all. It's all relative.
What are all of these legacy things that will be taking place within these optical clouds which I've referenced here?
The answer to this is: The same things that have been taking place in other SP multiplexed venues since the late Nineteenth Century.
They will consist of:
- time division multiplexing (optical add-drop multiplexers and optical frame chopping and forwarding, taking advantage of time compression),
- frequency division multiplexing (WDM/DWDM, identical in many ways to cable TV systems and the ways in which older telegraph 'systems' aggregated large numbers of individual teletype channels in the past),
- carrier modulation schemes (AM, FM, even CDMA, eventually, when the vendors contrive ways of squeezing additional bits into more pluralistic settings for various proprietary implementations),
- discrete and bundled payload routing (variations on IETF-like MPLS, or multiprotocol label switching, which Monterey has already hybridized and is now calling Wavelength Routing Protocol, or WaRP; see the toy label-forwarding sketch after this list),
- network management, which will be closely tied to RM-2 Layer 2 and 3 routing functions, with surveillance and feedback capabilities which will be almost meteorological in nature, to monitor and manage massively parallel - and intersecting - flows, just like today's SONET-ized digital cross-connects and ATM switches do in a much slower way,
- and much more relating to accounting, QoS, policy management, etc.
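To make the label-switching item above a bit more tangible, here is a toy, generic model of MPLS-style label forwarding; it is not a description of Monterey's actual WaRP implementation, and every node name and label value in it is invented. Each node maps an incoming label to a next hop and an outgoing label, so transit nodes forward on the label alone, with no per-hop Layer 3 lookup; substitute "wavelength" for "label" and the same idea carries over to a lambda-routed core.

# Toy label-switched forwarding: {node: {in_label: (next_node, out_label)}}.
# Node names and label values are invented for illustration.
LIB = {
    "ingress-edge": {"IP:10.0.0.0/8": ("core-1", 17)},   # classify and push a label at the edge
    "core-1":       {17: ("core-2", 42)},                # swap 17 -> 42
    "core-2":       {42: ("egress-edge", 99)},           # swap 42 -> 99
    "egress-edge":  {99: (None, None)},                  # pop; resume ordinary IP routing
}

def forward(node, label):
    """Follow the pre-established label-switched path hop by hop."""
    hops = [(node, label)]
    while True:
        next_node, out_label = LIB[node][label]
        if next_node is None:                 # label popped at the egress
            return hops
        node, label = next_node, out_label
        hops.append((node, label))

# A flow classified at the ingress rides the path with no Layer-3 lookups in the core.
for hop, lbl in forward("ingress-edge", "IP:10.0.0.0/8"):
    print(f"{hop:>12}  label={lbl}")

The only "routing" decision happens once, when the path is established; everything after that is a table lookup per hop, which is precisely what makes the approach attractive at optical speeds.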
At most of these processing junctures, discontinuities will exist which render each of them blinded, or "opaque-ized," from the previous one where purely optical passage is concerned.
In the very largest of networks, only the most uneconomical of them will permit purely optical signals, in their original optical form, to enter one end of the channel and traverse it unmolested to the other end, without the content first being electronically encoded into a digital form (or some other form of coded representation) which would then be decoded at the remote user site.
It is because of the foregoing that I feel sufficient built-in resistance exists in the prevailing delivery structure to prevent the Fiber Sphere from emerging, much less being allowed to flourish, anytime soon or in the foreseeable future.
While I do believe that striving for this purest form of optical networking serves to offset the opposition on some level, thereby making it more than simply a noble idea, I also feel that the economics of achieving it on a wide-scale basis is still very elusive. Vendors are already deploying all of the techniques in the list above in their terabit products today, as we type. And all the while, the clouds they support will look, for all intents and purposes, to be transparent.
They will not, however, look transparent because they are, in the truest sense of the word, transparent; rather, they will appear that way because the photonic clouds will initially be many times (much greater than 30 fps) faster than the applications which ride over them.
Therefore, in answer to your original question: yes, we can at this time, and for the next couple of years at least, regard what is happening in the foregoing contexts as two separate models, or RMs, or "dimensions" as I prefer to look at them. The one that fits you best is the one that is defined by who you are and what you do. Are you an end user, or an SP?
I can regard the first as RM-1, and the second as RM-2, for all intents and purposes. That is, until they eventually collide with one another. Then what? Comments welcome.
Regards, Frank Coluccio