Technology Stocks : George Gilder - Forbes ASAP


To: WEDIII who wrote (1855) 7/24/1999 8:29:00 PM
From: Kenneth E. De Paul
 
The reason I think he is right is that as the network changes to his model, two very important events should occur. One is the addition of intelligent applications, which can only be achieved with his model. The other is that this new model will drive out the other. For example, manual switching was driven out by electronic switching because of volumes, labor costs, resultant new services not achievable before... yada, yada, yada. As the new architecture takes hold, fewer people will remain on the PSTN; thus the costs of maintaining the prior architecture will increase, the service revenues for new applications will devalue the older equipment, and labor costs will change. Hopefully more people will then move to the new paradigm, increasing its availability. Just a very simple opinion.



To: WEDIII who wrote (1855) 7/24/1999 9:57:00 PM
From: Srinivasan Balasubramanian
 
To George Gilder

I have to admit that I heard about your reputation and technology
newsletter after TERN suddenly gapped up 10 points some time back
and, most recently, NOPT. I was contemplating getting into
Com21 or Terayon and decided to go with Com21. I have a few
questions for you:

1. If S-CDMA is superior in every aspect, why is TERN still
standing in isolation as more players enter the fray? Why are
the new players reluctant to go with Terayon chips instead
of Broadcom chips? Will TERN get marginalized amidst
bigger players?

2. There is talk that DOCSIS 1.2 (which I believe TERN is co-authoring)
may take a long time to materialize. Why has TERN yet to
submit a modem for DOCSIS approval?

3. The recent quarterly results of Terayon and Com21 suggest
Com21 has the superior balance sheet, and the revenue patterns
indicate TERN is one or two quarters behind CMTO on the growth
curve, with nothing spectacular there. Also, since Com21 is
trying for DOCSIS approval with the BRCM 3300 chip, it may
have an advantage over others for 1.1.

4. What are your views on Com21's Return Path Multiplexer, which
provides 4:1 or 8:1 multiplexing?

5. How important is voice over cable? Is it important enough to
alter the competitive landscape?

It is not that I am trying to justify my Com21 investment, but
I have been perplexed by the disparity between the market
valuations of these two very similar companies, and to my
inexperienced and ignorant mind, the only missing link is
your positive view of TERN.

Thanks
Srini




To: WEDIII who wrote (1855) 7/25/1999 1:24:00 AM
From: Frank A. Coluccio
 
Thread, this is a rather long and sometimes tedious reply if you are
not interested in matters having to do with the semantics of network
architecture.

This is Part 1 to a two-part reply.
----------

WEDIII,

I would not presume to speak on matters of the "Telecosm" as George
Gilder can and hopefully will. Allow me instead to reply with some general
observations, apart from those of the Telecosm specifically, while you await
a more qualified reply in that regard.
-----------

Your message directly touches upon a subject that I've been laboring with
for quite some time. Most of this post is in direct response to the following
rather astute observations and questions you wrote in the uplink post:

"...at one extreme we have a model where all of the intelligence is built into
the network, and the peripheral devices consist of little more than a tin can
and a string, and at the other end we have the Gilder model, where the
network is dumb as a stone, and all the intelligence has migrated out to the
most extreme periphery... Might the most efficient system be some hybrid
of the two models?"

I would agree with your observation that two different models do in
fact exist right now, and probably will continue to into the future, but
in all likelihood not for the same reasons that George and perhaps others
would cite, although I don't know that for sure. Maybe George will
comment on this later.

When this level of taxonomy comes into play, the two different models
you referred to are known, in the more popular sense, as Reference
Models, or RMs.

In this post I'll refer to the ISO/OSI Reference Model, or the OSI-RM,
since it is the OSI-RM that most closely governs how telecommunications
systems are designed and viewed.

[[As a note to the uninitiated, ISO/OSI-RM stands for the International
Organization for Standardization (ISO) / Open Systems Interconnection
(OSI) Reference Model, or simply, the OSI-RM.

The 'real' OSI-RM consists of a single stack of multiple layers,
Layers 1 through 7. Each layer is said to have sublayers and to exist
in various "planes." Each upper-lower layer junction takes place
through a process known as "convergence."

The OSI-RM starts with a physical element at Layer 1 (e.g., wire, fiber,
coax, free space, etc.), and works its way upward to the application layer,
which is at L-7.]]

An RM defines the relationships of the processes within a
communications system, while stipulating how each process relates to
the others. It's an architectural framework made up of a number of
points of reference, in effect.

An RM can be viewed differently depending on the context of a given
individual's station in the universe. Taken a step further, I can see how it
can be regarded as keyed to a given perspective, or vantage point,
such as that of an end user, or that of a service provider. In any event, the
functions of an RM seem to be able to exist in different (sometimes
conflicting) contexts, when viewed within different planes or dimensions.

Reference models are useful as mechanisms which allow us to cope
with highly complex systems by providing an almost palpable level of
substance with which to work, even if they consist largely of mental
imagery and concepts. Eventually, those concepts get transformed
into tangible devices, since the RM contains the functional relationship
principles by which those devices are designed, manufactured, and
ultimately implemented in live systems.

An RM provides both a starting and a finishing point, which together
allow us to rationalize how to put complex systems together, and to
analyze how they function together.

The network transport and routing related layers within this stack
comprise roughly Layers 2 through 4, with increasing degrees of
fuzziness below Layer 2 and above Layer 4.

ATM and Frame Relay are said to take place at Layer 2, and IP routing is
said to take place at Layer 3. The fiber-optic layer of communications at
the line code level (where light is launched onto the strand) is viewed as
Layer 1.
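
To make that layer mapping concrete, here is a minimal sketch in
Python. The layer names are the OSI-RM's own; the example entries
simply restate the technologies mentioned above, and nothing more
should be read into it:

```python
# A minimal, illustrative mapping of the OSI-RM layers to the example
# technologies mentioned above. The layer names are standard; the
# example entries restate the post's own examples.

OSI_RM = {
    1: ("Physical",     "fiber line code, wire, coax, free space"),
    2: ("Data Link",    "ATM, Frame Relay"),
    3: ("Network",      "IP routing"),
    4: ("Transport",    ""),   # fuzziness between L2 and L4, per the post
    5: ("Session",      ""),
    6: ("Presentation", ""),
    7: ("Application",  "end-user applications"),
}

for num in sorted(OSI_RM):
    name, examples = OSI_RM[num]
    print(f"Layer {num}: {name:<12} {examples}")
```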

If you are an end user, then you normally view the carrier's optical
transport services as a Layer 1 set of functions. Normally, that is.

But if you are a carrier or other form of service provider (SP), then things
change with respect to your view of the universe, because SPs do not
maintain the same perspectives as end users. [ ...yes, I know.]

From their perspective, and depending on the technologies they are using,
SPs too will have various layers of the stack in play at any single point in
time, even though you are viewing their overall presence as a Layer 1
physical network element. This can get tricky, as I hope to first describe
and then delineate, below.

The issues you've raised immediately called to my mind not only some of
the precepts suggested in Gilder's Telecosm, but also the principles that
were enumerated more recently in David Isenberg's "stupid network"
writings at: isen.com

Isenberg talks about the shifts which are taking place right now as we
slowly depart from the public switched telephone network's (PSTN's)
highly centralized intelligence model, to one where the intelligence resides at
the network's edge and end points. More recently, there was another
similar work which I posted here about a month ago ("Netheads versus
Bellheads") which was written for the Canadian Government by
T.M.Denton, François Ménard and again, David Isenberg. If you care to
read up on this work, it can be accessed at:

tmdenton.com

-----------

In my opinion, your reference to "two [reference] models" aligns very
nicely with the realities now appearing in the marketplace as the
SPs begin deployment of their next generation of optical/photonic networks
[SONET/SDH arguably being the first], as demonstrated by a
number of recently published service offerings and product releases by their
respective vendors.

Whereas the traditional view has been that fiber-optic media is
purely passive (i.e., dumb) at the OSI-RM's Layer 1, it has become
clearer that an entirely new model of photonic activity is emerging,
with its own self-contained multi-layer architectures, which are
suitable frameworks in their own right.

Previously (and admittedly, still), these "multiple layers" of activity
within the emerging optical networking space were, and still are,
considered merely sublayers of the original OSI-RM's Layer 1.

But more recently, these sublayer network dynamics have, in my opinion,
outgrown such restrictive and subordinated characterizations as mere
sublayers of the OSI-RM's Layer 1, and have caused me to re-focus on
some of their implications, since they begin to create separate models
unto themselves (as the remainder of your post also suggested).

This second order of RM activity at the SP level has an effect not unlike
that of a 30-frames-per-second (fps) series of still images being
sequentially flashed onto a screen, which creates the illusion that a
seamless moving image is present. To the end user this means that the
SP's fiber-based network is transparent, allowing near-real-time delivery
of end user content, despite the fact that their information may be
taking an unknown number of routes within any three-second period,
supported by lightning-fast flow management, routing, and switching in
the SP's photonic "sublayers."

So too, in this manner, does the second order of SP RM activity
represent a certain level of spoofing of the end user and their
applications, as I hope to explain in a moment.
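
To put some toy numbers on that 30 fps analogy (the figures below are
mine, purely for illustration, and not measurements of any real
network):

```python
# Toy arithmetic behind the "30 fps" transparency illusion described
# above. All numbers are illustrative assumptions, not measurements.

user_sample_interval_s = 1 / 30    # how often the application "looks" at the network
sp_reroute_time_s      = 1 / 3000  # how quickly the SP core completes a re-route

# The core appears seamless while its internal events complete far
# faster than the application can observe them.
headroom = user_sample_interval_s / sp_reroute_time_s
print(f"SP events complete ~{headroom:.0f}x faster than the user can observe them")
print("appears transparent" if headroom > 1 else "the seams start to show")
```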

While the view I am suggesting here will not REALLY reveal
anything new (i.e., the same observations could equally be made about
the SONET/SDH or earlier T1 and T3 deployments of the past), I think it
is necessary to expand on the optical paradigm a bit, in order to dispel any
growing notion that the optical layer will be immune to the kind of
rules-based framework that governed those of the past, or that it will
result in anything different when the service providers and their vendors
are done sculpting it. Save, of course, for its ability to scale to much
greater levels of throughput and payload delivery.

In their purest form, silica fibers contain no intelligence, while at the
same time they are viewed as having maximum (some say near-infinite)
information-carrying capacity, limited only by I/Os. These qualities make
them ideal as "the" passive medium of choice, which Gilder has fashioned
into a form of mental imagery aid known as "the fibersphere."

"Fiber sphere" is a term which [I believe] George coined close to a decade
ago, and he has made allusions to it many times since then both directly and
indirectly. Such a concept of having a transparent medium aligns very
nicely, although, somewhat mysteriously, with the OSI-RM's Layer 1, and
the dumb core concepts which were later spelled out by Eisenberg and
others.

I don't dispute the underlying theoretical soundness or the principles of the
fibersphere. Indeed, I have on many occasions applauded George for
conceiving of it and writing about it in the manner in which he has. At the
same time I think it's more than simply noteworthy - I think it is an imperative
- to discuss how the SPs have actually responded to the new potentials
afforded by optical developments, in ways made clear by the nature of the
optical platforms that they and their vendors have chosen to deploy and
produce. These lead me to conclude that attaining a true "fiberspherical"
condition is a long way off, as George himself also pointed out today in
response to a question regarding a cable modem vendor's (TERN's)
prospects.

Granted, we're still in the early stages as far as next-gen optical goes, but
some directions have already been firmly established that will, like
everything else, leave a legacy of behavioral and investment realities which
will not only be difficult to get away from, but will self-perpetuate well into
the future.

In order to scale these transparency qualities while continuing to maintain
some semblance of "any-to-any connectivity," however, we find that there
are economic realities [supported by the currently understood laws of
physics] which have already been broached and negotiated by vendors and
SPs alike. They've concluded that in order to give the appearance of
achieving these ends [transparency and unbounded capacity, for the
moment], they can get away with it at 30 fps in an affordable manner by
introducing intelligence, sometimes vast amounts of it, within Layer 1 of
the OSI-RM itself.

Thus we introduce another stack of functional layers, or activities, at the
optical service provider stratum, as yet another multi-layer model within the
original end user's perceived Layer 1 of the first OSI-RM order.

Where the first order (RM-1) addressed end user considerations and their
perspectives, the second order (RM-2) addresses those of the service
providers, and their perspectives.

END PART 1. PART 2 Immediately Follows.



To: WEDIII who wrote (1855) 7/25/1999 1:28:00 AM
From: Frank A. Coluccio
 
Part 2 of 2 in reply to: WEDIII

------------

Where the first order (RM-1) addressed end user considerations and their
perspectives, the second order (RM-2) addresses those of the service
providers, and their perspectives.

In other words, what we've been seeing introduced here is a second order
of reference modeling (RM-2) in the SP dimension, as opposed to that of
the original OSI-RM (RM-1) in the end user dimension.

We can regard these as being 'momentarily' distinct from one another for
several reasons. One reason is that users will continue to employ
Cisco-like routing schemes on top of the newly created optical transport
schemes, which represent, to them anyway, Layer 1. What this means is
that even though the SPs have routing and path switching taking place
constantly inside their networks at the SP RM-2 stratum, those optical
routing and switching functions are unbeknownst to the end user. The two
are operating at different event-recognition states, which goes to the
reason why I say 'momentarily.' And that is, again, due to the current
30 fps factor I alluded to above, which still represents an advantage, but
one which will not last forever. Hence, momentarily.

When users begin approaching the speeds of the SPs, a potential for
conflict will suddenly manifest itself, because at that point the second
order RM will no longer appear to be as fast (read: transparent) as it once
was. Worse, or more poignantly, at some point end users' own end-point
routing and path switching platforms could even begin to overpower those
of the SPs, creating the need for yet more channel capacity and switching
speed in the latter's platforms, as well.

Actually, this second stack of activities, which I'm now calling RM-2, is
more correctly viewed as another set of "sublayer activities" within the
REAL Layer 1 of the original OSI-RM. And because this view is consistent
with keeping the peace at the international standards-setting bodies, this
is how it will be presented in the texts going forward, you can rest
assured.

Reference Model 1 = RM-1 = End User perspective
Reference Model 2 = RM-2 = Service Provider perspective

But at the point of confrontation (when user networking speeds and device
classifications catch up to those of the SPs), how does one continue to
successfully rationalize that the RM-2 model is really only a set of sublayer
functions of the original Reference Model's Layer 1? Consider that at that
point you will have two different networks whose network elements and
speeds are identical to one another's.

When you think about some of the possibilities, they tongue-twist the mind:

If you remain staunchly committed to the letter of the OSI-RM, then you can
actually have occurring simultaneously, on the one hand, normal IETF
protocol (Cisco-like) RIP or OSPF routing taking place at RM-1 Layer 3 in
an enterprise network, riding over an SP's optical network which
you perceive to be Layer 1 (still from the perspective of RM-1)... while at
the same time the SP is routing massive flows between its edge locations in
an MPLS-like manner at its own Layers 2 and 3 (now taking place in the
SP's RM-2) in order to take advantage of least-cost (or, on an instantaneous
basis, least-congested) routes, in a manner which is transparent to the user,
who is still routing at Layer 3, in RM-1.
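
For the code-minded, here is a toy model of that simultaneity. All
names, addresses, paths, and costs below are hypothetical; this
sketches the idea of the two stacked reference models, not any real
routing protocol:

```python
# Toy model of the two stacked reference models described above.
# All names, addresses, paths, and costs are hypothetical.

# RM-1: the enterprise router (RIP/OSPF at Layer 3) sees one next hop
# over what it takes to be a dumb Layer-1 pipe.
user_route = {"dst": "10.1.0.0/16", "next_hop": "edge-router-B"}

# RM-2: inside that "pipe," the SP label-switches the same flow over
# whichever core path is currently least-cost (or least congested).
sp_paths = [
    {"hops": ["edge-A", "core-3", "core-7", "edge-B"], "cost": 5},
    {"hops": ["edge-A", "core-2", "edge-B"],           "cost": 3},
]
best = min(sp_paths, key=lambda p: p["cost"])

print(f"RM-1 view: {user_route['dst']} via {user_route['next_hop']}")
print(f"RM-2 view: {' -> '.join(best['hops'])} (cost {best['cost']})")
# The user's routing table never changes; only the SP's internal
# (RM-2) path selection does, which is exactly the transparency
# being described.
```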
-------

The industry (service providers, suppliers, vendors, regulators, etc.) has
decided to take a more pragmatic approach than installing physical fibers
directly between all possible endpoint-pair combinations. That is, we cannot
expect to see fiber being used in larger venues such as wide area networks
(WANs) or metro area networks (MANs), etc., with any sort of
dedicatedness at the physical layer.

Instead, end users will be connected via fiber (hopefully, at some point,
but until then by other means) from their locations to the access platforms,
and then to the edge of larger shared optical clouds, or photonic clouds,
which will instead serve as a form of transparent medium, for all intents
and purposes.
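
The arithmetic behind that pragmatism is stark. A quick illustration
(the endpoint counts here are arbitrary):

```python
# Why dedicated fiber between every endpoint pair is a non-starter:
# a full mesh of n endpoints needs n*(n-1)/2 links, while a shared
# cloud needs only n access links. Endpoint counts are arbitrary.

for n in (10, 1_000, 1_000_000):
    full_mesh = n * (n - 1) // 2
    print(f"n={n:>9,}: full mesh {full_mesh:>17,} links vs. {n:>9,} access links")
```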

In effect, they have added business principles to the calculus of optical
networking for many reasons, and while some of these reasons may seem
redundant with those of the past, some are actually proving to be new ones,
since they are unique to some of the newer enabling qualities of the optical
paradigm.

The principal players in this sector have effectively spawned the
emergence of photonic "clouds" which are very similar in networking
principles to the ATM, Frame Relay, X.25, and TCP/IP clouds of the
present and past. Only, these emerging photonic variants use some
elemental constructs which are vastly different in nature, and which are
keyed to parameters defined in optics, instead of ones that were purely
electronic.

These differences ripple out to the types and sizes of the protocols and
payloads that can be sent, as well, which tends to give the appearance
that they are separating, or by some means departing, from the more
restrictive rules-based protocols of the past. Under closer examination,
however, we often find that this isn't really happening at all.

Instead, we are seeing the same things being done in the optical clouds
as have always been done before, only an order of magnitude faster,
several orders of magnitude more cleanly, and more transparently from a
materials and type-of-equipment point of view. In the process, other
forms of equipment are reaching premature (from the standpoint of
established norms) obsolescence.

The primary differences between today's asynchronous transfer mode and
TCP/IP cloud models, and those which will be "purely" optically defined,
will be those of scale: scales of both absolute payload sizing and speed.

Stated another way, bursts of photonically supported payloads will
[potentially] carry vastly more information than those of their electronic
ancestors, but at the same time they will follow similar, if not identical,
principles of network resource sharing, based on the calculus of statistical
arbitration, route mapping, and path switching.
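
For anyone who wants to see that statistical-arbitration point in
miniature, here is a toy simulation. The source count, burst
probability, and channel capacity are all invented for illustration:

```python
import random

# Toy illustration of statistical resource sharing: many bursty
# sources share a channel sized well below the sum of their peaks.
# Source count, burst probability, and capacity are invented.

random.seed(1)
N_SOURCES, PEAK_RATE, BURST_PROB = 100, 1.0, 0.1
CAPACITY = 20.0   # far below the worst case of N_SOURCES * PEAK_RATE

overflows, trials = 0, 10_000
for _ in range(trials):
    demand = sum(PEAK_RATE for _ in range(N_SOURCES)
                 if random.random() < BURST_PROB)
    if demand > CAPACITY:
        overflows += 1

print(f"average demand ~{N_SOURCES * PEAK_RATE * BURST_PROB:.0f} vs. capacity {CAPACITY:.0f}")
print(f"overflow probability ~{overflows / trials:.4f}")
```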

All the while, this optical core must maintain the persona of being
truly transparent and dumb from a user perspective, which will be
achieved rather easily in the early going because it will be possible,
initially, to spoof end user applications. The reason narrows down to
the fact that end user applications will initially be much slower than
those of the core network SPs, which means that they will still lag
considerably behind the speed required to detect (or be affected by)
what is actually taking place at the cloud level, or network core.

The dynamic I'm referring to is akin to the experience of the human eye
and the spoofing which takes place at 30 frames per second. You can't
tell that you are only looking at still shots at 30 fps. So, too, will the
photonic cloud appear dumb and entirely transparent, when in reality it
will be quite intelligent and, almost always, opaque.

The true telling of the nature of these disparities and similarities in the
cadence relationships between SPs and end users will come when end user
applications are able to keep pace with, or surpass, the speeds of those
taking place within the SP optical clouds.

Since end users will be among those using next-generation terabit
routing engines, I fully expect that this realization is not far off;
it could manifest sometime during the next two years, if not sooner,
relegating the true meaning of transparency to something still in the
future. Or worse, something indefinable or unattainable at all. It's
all relative.

What are all of these legacy things that will be taking place within these
optical clouds which I've referenced here?

The answer to this is: The same things that have been taking place in other
SP multiplexed venues since the late Nineteenth Century.

They will consist of the following (a toy sketch of the first two items
appears after the list):

- time division multiplexing (optical add-drop multiplexers and optical frame
chopping and forwarding, taking advantage of time compression),

- frequency division multiplexing (WDM/DWDM, identical in many ways to
cable TV systems and the ways in which older telegraph 'systems'
aggregated large numbers of individual teletype channels in the past),

- carrier modulation schemes (AM, FM, even CDMA, eventually, when the
vendors contrive ways of squeezing additional bits into more pluralistic
settings for various proprietary implementations),

- discrete and bundled payload routing (IETF-like MPLS, or multiprotocol
label switching, variations of which Monterey has already hybridized and is
now calling Wavelength Routing Protocol, or WaRP),

- network management, which will be closely tied to RM-2 Layer 2 and 3
routing functions, with surveillance and feedback capabilities that will be
almost meteorological in nature, monitoring and managing massively parallel -
and intersecting - flows, just as today's SONETized digital cross-connects
and ATM switches do in a much slower way,

- and much more relating to accounting, QoS, policy management, etc.
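
Here is the toy sketch of the first two items promised above. The
channel contents and wavelength values are invented, and no real
optical behavior is modeled; it only shows the two sharing disciplines
side by side:

```python
# Toy sketch of the first two items in the list above: TDM
# (interleaving channels into time slots) and WDM (assigning each
# channel its own wavelength). Channel contents and wavelength
# values are invented; no real optical behavior is modeled.

channels = {"A": ["a1", "a2", "a3"],
            "B": ["b1", "b2", "b3"],
            "C": ["c1", "c2", "c3"]}

# Time-division multiplexing: round-robin the channels into one
# serial stream of time slots.
tdm_stream = [frame for slot in zip(*channels.values()) for frame in slot]
print("TDM:", tdm_stream)   # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2', ...]

# Wavelength-division multiplexing: each channel rides its own
# carrier wavelength instead of its own time slot.
wavelengths_nm = (1550.12, 1550.92, 1551.72)
print("WDM:", {ch: f"{nm} nm" for ch, nm in zip(channels, wavelengths_nm)})
```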

At most of these processing junctures, discontinuities will exist which
render each of them blinded, or opaque-ized, with respect to the previous
one where purely optical passage is concerned.

In the very largest of networks, only the most uneconomical will permit
purely optical signals, in their original optical form, to enter one end
of the channel and traverse unmolested to the other end, without first
being electronically encoded into a digital form (or some other form of
coded representation) which would then be decoded at the remote user
site.

It is because of the foregoing that I feel sufficient built-in resistance
exists in the prevailing delivery structure to prevent the fibersphere
from emerging, much less being allowed to flourish, anytime soon or in
the foreseeable future.

While I do believe that striving for this purest form of optical serves to
offset the opposition on some level, thereby making it more than simply a
noble idea, I also feel that the economics of achieving it on a wide-scale
basis are still very elusive. Vendors are already deploying all of the
techniques in the list above in their terabit products today, as we type.
And all the while, the clouds they support will look, for all intents and
purposes, transparent.

They will not, however, look transparent because they truly are
transparent in the purest sense of the word; rather, they will appear
that way because the photonic clouds will initially be many times (much
greater than 30 fps) faster than the applications which ride over them.

Therefore, in answer to your original question: yes, at this time and
for the next couple of years at least, we can regard what is happening in
the foregoing contexts as two separate models, or RMs, or "dimensions" as I
prefer to look at them. The one that fits you best is the one defined by
who you are and what you do. Are you an end user, or an SP?

I can regard the first as RM-1, and the second as RM-2, for all intents and
purposes. That is, until they eventually collide with one another. Then
what? Comments welcome.

Regards, Frank Coluccio



To: WEDIII who wrote (1855) 7/26/1999 12:26:00 AM
From: George Gilder
 
Thanks for all these intriguing and incisive reflections, particularly Frank Coluccio's heroic posts. I would only point out here and now--all the issues raised are too extensive for me to address at the end of the month as I rush toward a deadline and a week of speeches--that I support a component software model. This paradigm is based chiefly on Java and implies considerable distribution of intelligence in smart servers et al. But I place these on the edge of the net. There are also broadcast-and-select models, chiefly from satellites, incorporated into the telecosm, and much intelligence will be needed for search and sort functions. "Dumb as a stone, with all the intelligence on the edge" does not solve the problem of locating the edges as the technology evolves into the troposphere, under the seas, through fibersphere and ethersphere, and up and down through the seven-layered models and myths in our minds.