Technology Stocks : MRV Communications (MRVC) opinions?

To: Jack Colton who wrote (13458), 5/16/1999 6:26:00 PM
From: Frank A. Coluccio
 
Jack, the 1,000 Terabit node prediction that was cited here
in yesterday's posts wasn't even laughable; it was borderline
pathetic. I use that descriptor in deference to the author's
honor.

Terabit networking, facilitated by DWDM and other optical
trunking disciplines in various types of network elements, will
become commonplace even in Fortune 500s in a couple of
years, in addition to heavy penetration among the
Internet backbone providers, ISPs and incumbent carriers.

Consumption trends, and the indicators associated with a
number of very large-scale network re-engineering efforts
which have only now been made possible by these
advances in the optical space, already point to this level of
uptake as well.

In some ways these are easy to see, and in other ways
they are not so easy to see unless you go through entire
workplace process flows and demonstrate how, by
radically altering existing topologies, an organization can
effect very large cost savings while reducing its overall
number of assets.

Addressing one of your points, the "20" component in the
20-80 Pareto model I mentioned is an extremely large
spending pool to exploit, and shouldn't, in my opinion, be
viewed as a small field of opportunity. The 20 figure may even
overstate the proportion at this time, but the pool it represents
is still large in terms of potential spending.
------

One problem, a large one in fact, that network folks will have to
contend with when faced with making optical network equipment
procurements such as these in the near future will be psychological
and behavioral in nature, and in varying degrees, insidious. They
will need to break away from an inborn tendency that historically
has been their very cause for being, or their primary means of
validation. My attempt at explaining this follows.
-----

Historically, network managers have thought in terms of
always reducing bandwidth consumption to the
greatest extent possible, even if it meant excruciatingly long
design and development phases in order to conserve
even the smallest quantities of bandwidth. It's not always
intentionally overdone in a cognitive sense; it's a reflex
that results from early, and ongoing,
pre-fiber model emphasis in the trade.

Nothing wrong there, and in most WAN scenarios it will
continue to be the prudent thing to do when using the
traditional carrier model. The only prudent thing, in fact.
The thing to do, however, is to extricate the firm from that model
whenever and wherever possible, the earlier the better, so
long as the alternative passes muster for reliability, cost, and
fiber constituency.

I'll get back to this in a moment, but first I'd like to briefly
mention another similar factor that affects overall network
architecting, this time the EIA/TIA Commercial Building
Wiring Standards specifications.

While premises cabling may seem far removed from the topic
at hand, it actually is not. It can be used very nicely to
demonstrate, by example and in commonly understandable terms,
what is taking place in a relatively straightforward kind of
environment, one that extends both to the lateral considerations
of the campus and to the vertical issues concerning the WAN.
-----

The EIA/TIA 568 and 569 Series of structured cabling
system standards were created in the late Eighties to early
Nineties in order to provide the industry with a uniform
method of designing premises distribution systems, and to
give equipment makers the necessary design bogies to go
after in their silicon wares: line driving, sensitivity, noise
immunity, EMI/RFI, etc., the parameters and performance
areas that would be directly affected by the copper and fiber
channels to which they would ultimately connect.

What has happened, however, due to the 90 meter distance
limitation which these standards stipulate for copper
cabling between the closet cross-connect and the desk
telecommunications outlet, is a legacy wiring
approach which continues to this day, even after radically
improved approaches have been made available by an entire
list of directly and indirectly related dynamics favoring the
fiber cabling alternative.

This continuance of what was once an extremely brilliant
approach now adversely impacts the cost and other facilities
optimizations of LAN networking (which I'll get to
in a moment), and it also influences base building design
decisions, hence costs again, including those that relate to
power, UPS, HVAC, fire detection and retardation, and
large amounts of floor space on every story, sometimes
multiplied by as many as six closets per floor.

So, what does this have to do with optical? Everything. Or a
whole lot, depending on your perspective.

Consider a large office building with floors as large as
50,000 sq. ft. or greater. Say there are forty stories in this
building, and there are five equipment closets on each floor.
That's no fewer than 200 individual closet areas that must be
environmentally conditioned with the provisions I mentioned
above, even before they can be wired and hubbed.

Why so many closets? Because of the distance limitations
that are spelled out in the specification for copper. In a
location of smaller aggregate size, this wouldn't amount to
much breakage; in fact, it would actually be the optimum
way to go. But in very large structures, it presents a form of
artificial barrier to distances between the desk and the
closet, again, imposed by the TIA standard I referenced
above.
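
For rough numbers, here's a back-of-the-envelope sketch in Python.
The one-closet-per-10,000 sq. ft. serving area is an assumed planning
rule of thumb (one that itself falls out of the 90 meter copper limit
once real-world cable routing is accounted for), not a figure quoted
from the standard:

    # Hypothetical sketch (my assumptions, not the standard's text):
    # estimate closet count from an assumed serving area of roughly
    # 10,000 sq. ft. per wiring closet.
    import math

    SQFT_PER_CLOSET = 10_000   # assumed serving area per closet

    def closets_for_building(floor_area_sqft: float, floors: int) -> int:
        """Total conditioned closet spaces needed for the whole building."""
        per_floor = max(1, math.ceil(floor_area_sqft / SQFT_PER_CLOSET))
        return per_floor * floors

    # The example above: forty floors of 50,000 sq. ft. each.
    print(closets_for_building(50_000, 40))   # 5 per floor x 40 floors = 200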

If copper is used instead of fiber, because it is cheaper than
glass at the individual cable-pull level (and, to be fair, also at
the NIC and hub electronics levels), there is no
other way but to use four to six closets per floor for a floor
the size of the one we're using as an example here. If you
take this to be true, then it could be said that this standard
no longer scales very well. It results in five hubs or switches,
or groups of the same, on every floor, each of which must be
pointed toward a router or switch located somewhere else.

That's a minimum of 200 relatively low density concentrators
or hubs outside the main equipment room or data center.
The data center, in turn, is centrally located elsewhere on
one of the floors, usually housing what's called a server farm, or several.

In contrast, if fiber were used, only two equipment rooms on
two different floors out of the forty (or one, instead, if you
wanted to forego a backup... let's stay with two) would
require active electronics of this nature, with desktop
connections being satisfied through passive optical cross-
connects run directly between the far fewer, higher-density
hubs/switches in the equipment rooms
and the desks on all the floors. Furthermore, these rooms
could be the very same ones that were used for the server
farms.

I think you can begin to extrapolate from this that there will
be far fewer, yet much higher-bandwidth,
streams to contend with using the fiber alternative for
premises distribution than with the copper one. You can also
begin to envision which equipment types are obviated (and
we've still not left the building yet) in the process.
Add these savings to the environmental and real estate
costs avoided for the 200 closets we've just vacated...
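
For illustration only, here's a sketch of that comparison. Every dollar
figure in it is an assumption of mine, chosen just to show the shape of
the math, not data from any actual building:

    # Hypothetical cost comparison (all figures are illustrative
    # assumptions): 200 conditioned closets of low-density hubs under
    # the copper model, versus two equipment rooms of higher-density
    # switches under the fiber-to-the-desk model.

    ANNUAL_COST_PER_CLOSET = 15_000   # assumed: power, UPS, HVAC, floor space
    ANNUAL_COST_PER_ROOM   = 60_000   # assumed: a larger room, but only two

    def annual_facilities_cost(spaces: int, cost_each: int) -> int:
        """Yearly environmental and real estate cost of conditioned spaces."""
        return spaces * cost_each

    copper = annual_facilities_cost(200, ANNUAL_COST_PER_CLOSET)  # 3,000,000
    fiber  = annual_facilities_cost(2, ANNUAL_COST_PER_ROOM)      #   120,000
    print(f"copper closets: ${copper:,}/yr   fiber rooms: ${fiber:,}/yr")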

One must ask then, why is this the case? Why do network
planners continue along this path, and why do
knowledgeable facilities managers in charge of real estate
continue to welcome this model with open arms?

I can think of several reasons, but they all point back
to one of the most common attributes of all, and that is the
need we have for a sense of security. I'll talk about the
comms guy only; the facilities groups have their own need to
hold on to the past.

Vendors play on these dynamics spontaneously, for, to be
certain, they possess the necessary smarts to see the
obvious advantages of the second option, too.

But they are not about to begin a crusade that would work
to the detriment of their own lot. Why cannibalize a good
thing when you have a customer so ready and willing to
eat directly out of your hand, and who never seems to get
enough of it?

These behavioral habits are stubborn, to be sure, but they
are not impossible to break. All it takes is for the change to
come from another direction, as a potential threat from
someone else who is prepared to break the deadlock;
then the floodgates begin to open, slowly at first, then wide,
and everyone is prepared to jump in. How many times
have you seen this in your career? Which brings me back to
the network manager.

The transition from thinking in terms of wide area bandwidth
deprivation to one that takes advantage of the new
economies of fiber can be made, but in many cases not
without great discomfort to veteran practitioners, whose
points of reference have been molded by a lifetime of
something else, and who would be predisposed to feelings
similar to a free fall... or worse, guilt.

And these are the very same resident gurus who senior
management has hired, and who have taken the firm to new
heights in telecommunications. Who's going to argue with
them when they say it ain't time yet?

The very means by which they are evaluated and measured,
in fact, is at risk (and then everyone is uncomfortable), because
the general regard and the metrics that are in place to gauge
them are dependent on so many of the same constructs.
Besides, upper management is nervous enough with so much
happening right now in technology; why rock the boat
when something is already working and it ain't broke? More
barriers to optical uptake.

And so it is, too, with the handling of new measures of
goods. In this instance, we're talking about dealing in terms
of counting and managing wavelengths, routing them,
switching them, in addition to, or as opposed to, all of the
extant LAN feeds and those T3s, T1s and DS0s.

It is simply not an intuitive thing to negotiate in one sitting, for
most. Thus, another barrier to uptake: Fear of the unknown.
---------

Returning to the corporate enterprise, and the increasing
demands for bandwidth, I've been to this rodeo a few times,
and I've yet to see anyone tame this beast and walk away
whole. At least, not when they're still standing and able to
talk about it.

We were talking about DWDM-enabled network elements,
correct? Where private enterprises are concerned, my belief
is that the first large-scale effects of these network elements,
mostly based on DWDM at this point, will be in such roles as
bandwidth aggregators and resource managers between
clusters of heavy traffic activity, and only marginally toward
the Internet or other WAN segments.

I view this as taking place much the same way as switches
up to this point in time have been used to create collapsed
in-house (building, campus, MAN) backbones.

It's the next step into the future for large multi-location
enterprises, taking into account the existing bandwidth
trajectory that we now see on the screens, and the
accompanying enablers.

I don't necessarily believe that end-user organizations will
have access to optically defined WAN bandwidth capacity
soon in denominations large enough to justify the purchase
on their own, although wavelength snippets will be appearing
on the WAN from some of the emerging fiber companies in
the near term, for those who can afford them. We're already
seeing wavelengths leased, as opposed to entire strands, and
in some cases they are taking on the same kind of flavor as
IRU'ed strands to some of the largest end users.

We all know what this portends for the longer term, and
how that works. Costs to the carriers will come down with
improvements and with the quantity of elements sold; narrower
channel spacings will allow more wavelengths; fractional
wavelengths will then become available; and eventually it all
reaches commodity-level pricing, just like everything else.
Wavelength routing, then virtual wavelength routing, and
then policy management on top of that, will become the new
frontiers.
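
To make the channel-spacing point concrete, here's a minimal sketch,
assuming roughly 4.4 THz of usable C-band spectrum (my assumption for
illustration; the real usable window depends on the amplifiers and
gear involved):

    # Minimal sketch: narrower DWDM channel spacing means more
    # wavelengths on the same fiber, given an assumed ~4,400 GHz of
    # usable optical spectrum.

    C_BAND_GHZ = 4_400   # assumed usable spectrum, in GHz

    def wavelength_count(spacing_ghz: int) -> int:
        """How many channels fit at a given channel spacing."""
        return C_BAND_GHZ // spacing_ghz

    for spacing in (200, 100, 50, 25):
        print(f"{spacing} GHz spacing -> ~{wavelength_count(spacing)} wavelengths")
    # 200 -> ~22, 100 -> ~44, 50 -> ~88, 25 -> ~176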
-------

Back to the present... the Fortune 100s, who make up only
a small part of my 20 component, btw, first got their feet
wet in routing and bandwidth aggregation, lest we lose sight
of this, with Cisco and Wellfleet routers situated in their
riser closets in just this manner.

At that earlier stage, to add perspective, wide area 56 kb/s
lines and T1s themselves were within reach of only the
largest firms.

Reading back, I should have more accurately stated that
geographically local traffic will be a big capacity component,
and most probably the largest component at first, where
privates are concerned, on initial deployments.

That local traffic will be in addition to what we
perceive will be smaller, but increasing over time, amounts of
access to optical WAN bandwidth in various forms,
and sooner than most thought likely, until now.

The biggest threat to new forms of optical access happening
over the short term would be a sudden and universal shift on
the part of the fiber carriers, and contractors, who could
elect to start squeezing optical bandwidth supply. When they
come up to speed with the new technology we're discussing,
they may elect to stifle optical footprint delivery in favor of
sanitizing it first, themselves.

Strategy: Avoid cannibalization by continued use of
preexisting infrastructure packaging arrangements:

SONET, in this context, could be used very nicely for this
purpose, in order to ensure the continued maximization of
carrier revenues, as it always has, leveraging off the old
packaging of a legacy service. The carriers would thus enjoy
continued dollar flows from an embedded infrastructure, as
they have in the past.

How likely is this to occur, though, with worries already
circulating in near hysterical proportions, if you are to believe
some pundits, about a potential fiber glut? It
could happen, nonetheless, but it would probably be induced by
factors that would almost have to be, by definition, of
greater economic consequence in general.
------

Equilibrium will more than likely involve a certain letting of
individual wavelengths and their subdivisions in this process,
however, in addition to outright IRU sales of whole strands,
IMO. But again, only for the 20-or-smaller component
I alluded to, at first. By that point in time, lambda
subdivisions (wavelets?) will be ported in some manner
similar to frames or cells, such as those in today's frame
relay and ATM, respectively, if not like packets in the IP
model.
------

Incidentally, I have somewhere locked up in my garage, or it
could even be in some vault somewhere, a copy of a similar
note which I presented to one of my clients in 1987. It was
regarding the use of private fiber optic routes in the MAN.
At the time, I was politely advised that we were then using
T1s, which were all the bandwidth one would ever need.
I've gotten quite a bit of reuse out of this anecdote recently,
it seems.

Regards, Frank Coluccio