Technology Stocks : LAST MILE TECHNOLOGIES - Let's Discuss Them Here


To: John Stichnoth who wrote (3178), 3/19/1999 10:01:00 AM
From: TheSlowLane
 
I would also like to hear more about this. I am seeing more and more about ATM, particularly in reference to converged networks. It seems to me from what I am reading that ATM can provide the quality of service to support truly converged applications (voice, data, video, etc.) while IP is still not quite there yet.



To: John Stichnoth who wrote (3178), 3/20/1999 7:59:00 PM
From: Frank A. Coluccio
 
Hi John,

>>do you have any comments on the implications for the future of the two
pipelines in this "next generation" Internet being different protocols? Does
that imply anything for development of the commercial backbone?<<

Good question. One that I've thought about as well.

My short answer is no. Some of the underlying reasons for my answering
'no' can be extrapolated, hopefully, from the following:

========

The ATM traffic that was cited is currently being supported on the NSF's
vBNS network, an extant network which uses ATM. I can only
surmise that ATM was chosen, possibly, due to the lack of anything else
being available at the time that could both operate at the OC-12c rate
(622 Mb/s) and support time-sensitive applications
in one (concatenated) flow.
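
To put rough numbers on that: here is a quick back-of-the-envelope sketch
of my own (not taken from any cited source), in Python, of what an OC-12c
actually leaves for user data once the SONET and ATM overheads are paid.
The 599.04 Mb/s figure is the commonly quoted cell-stream capacity of an
STS-12c payload; treat everything here as approximate.

# Approximate usable capacity of an OC-12c carrying ATM (illustrative only)
OC12C_LINE_RATE_MBPS = 622.08   # SONET STS-12c line rate
ATM_CELL_STREAM_MBPS = 599.04   # commonly quoted rate left for cells after SONET overhead
CELL_PAYLOAD_BYTES = 48         # payload bytes in each 53-byte ATM cell
CELL_TOTAL_BYTES = 53

cell_tax = 1 - CELL_PAYLOAD_BYTES / CELL_TOTAL_BYTES
user_payload_mbps = ATM_CELL_STREAM_MBPS * CELL_PAYLOAD_BYTES / CELL_TOTAL_BYTES
print(f"ATM cell tax: {cell_tax:.1%}")                            # ~9.4%
print(f"User payload on OC-12c: ~{user_payload_mbps:.0f} Mb/s")   # ~542 of 622 Mb/s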

Keep in mind that decisions like these usually precede implementation by
a year or more. I suspect that in addition to carrying IP, the network is also
being used for isochronous (time-sensitive) services such as voice and
video conferencing, to support distance learning and other forms of v-c
applications, and possibly some bulk data (channel extension) services
emanating from mainframe computers as well. [These were installed
prior to the widespread commercialization of IP on the IBM mainframe
channel.]

Abilene, in contrast, is purportedly being built as an all-IP network,
although the term "all-IP" can often be very misleading. In any event,
when Abilene is melded with the NSF's vBNS mesh, it will be very feasible
to reallocate the bandwidth of the vBNS to an all-IP structure (how's that
for an oxymoron?), if such a decision is reached.

Such a decision may not be reached, for several reasons. The one that
pops off the page immediately is that Abilene will very likely be using
some form of IP directly over fiber or lambda, whereas the vBNS is currently
derived from a larger SONET flow, in the form of an extraction of bytes
through add-drop multiplexing techniques. To marry these two forms of
transmission into a homogeneous mix would introduce more kludge
than it would be worth, if it were even possible, which it is not.
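
In fact, the per-packet framing overheads alone make the point. The short
comparison below is my own illustration, not anything published by the
vBNS or Abilene camps; the LLC/SNAP, AAL5 and PPP-in-HDLC overhead figures
are the usual approximations.

import math

def wire_bytes_atm(ip_len, llc_snap=8, aal5_trailer=8):
    # AAL5 PDU = LLC/SNAP header + IP packet + pad + 8-byte trailer,
    # segmented into the 48-byte payloads of 53-byte cells.
    return math.ceil((llc_snap + ip_len + aal5_trailer) / 48) * 53

def wire_bytes_pos(ip_len, ppp_hdlc_overhead=9):
    # PPP-in-HDLC framing on SONET adds on the order of 7-9 bytes per packet.
    return ip_len + ppp_hdlc_overhead

for size in (40, 552, 1500):   # TCP ACK, mid-size, and Ethernet-size packets
    print(f"{size:5d}-byte IP packet: "
          f"{wire_bytes_atm(size)} bytes over ATM, {wire_bytes_pos(size)} bytes over POS")

A 40-byte TCP ACK, for instance, occupies two full cells and more than
doubles in size on the ATM side, while the packet-over-SONET case adds
under ten bytes.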

Nor will it really make all that much of a difference in the early stages,
since the vBNS is already supporting IP over ATM. Another reason why it
may not make any difference is that the vBNS is going to be dwarfed by
its successor by a wide margin.

Depending on how things play out, it might actually make a difference for
a short spell, if the IETF and its members could ever get it together by
formulating a way for everyone's MPLS, RSVP, etc., protocols to talk to
one another. If such a coming together did take place, however, then the
ATM architecture would present a problem, because it would not
harmonize with the new protocols which would undoubtedly be
incorporated in new routers. I don't think that we have to worry about
that for a while, though, unless a single-vendor solution is adopted. In the
latter case, there would be no reason for harmonizing between vendor
lines, hence no reason for waiting.
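
For what it's worth, the "harmonizing" I have in mind amounts to agreeing
on mapping tables like the toy sketch below. This is my own illustration,
simplified and only roughly in the spirit of the IETF's IntServ-over-ATM
work (RFC 2381), not any vendor's actual implementation.

# Illustrative only: a simplified mapping from IntServ/RSVP service types
# to ATM service categories.
INTSERV_TO_ATM = {
    "guaranteed": "CBR or rt-VBR",        # hard delay and bandwidth bounds
    "controlled-load": "nrt-VBR or ABR",  # "lightly loaded network" behavior
    "best-effort": "UBR",                 # no guarantees
}

def atm_category(intserv_service: str) -> str:
    # Anything unrecognized falls back to plain best-effort UBR.
    return INTSERV_TO_ATM.get(intserv_service, "UBR")

print(atm_category("guaranteed"))        # CBR or rt-VBR
print(atm_category("controlled-load"))   # nrt-VBR or ABR

Getting every vendor's routers and switches to agree on a single table of
this sort, protocol by protocol, is exactly where such an effort would bog
down.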

Until that time, the presence of IP over ATM versus pure IP will only be
one more minor discontinuity supported by well-established patches and
workarounds. These distinctions, however, do not derive so much from
divine truth as they do from who is in charge, and the particular
folklore they were brought up in. I think the way has been shown here:
it's going to be IP.
====

Given the way you phrased your question (e.g., the emphasis on "next
generation"), it may be best to recall the distinctions between Internet2
(I2) and the Next Generation Internet (NGI). I2 is intended for
academia and national research labs, whereas NGI is intended for future
commercial technology development and deployments.

At some point the effects of these two undertakings will intersect or
merge, if they could indeed be kept apart from one another at a cognitive
level, for any length of time. In any event, at some point their fruits will be
indistinguishable, as they become subsumed by the greater Internet at
large, or perhaps even ignored entirely if the greater 'net advances
beyond them.

We've seen this phenomenon occur once before within the course of the past
ten years. It's difficult to keep commercial undertakings down, but it is
fairly easy to tie up government-sanctioned undertakings which are
supported by grants, given the administrative issues and fairness doling
that take place with regard to bid processes and awards, and the
ensuing red-tape boondoggling that follows. Enufsedaboutdat.

My personal opinion is that the commercial entities who are involved with
these undertakings at the present time, under a form of quasi-governmental
purview, are the same commercial entities who are already
surpassing the goals (at least the short- and intermediate-term ones) of I2
and NGI in their other, day-to-day commercial implementations and
rollouts.

It is a sign of the ongoing progress being made in ultra-high-throughput
transmission systems that the near-term objectives of these platforms
were aimed so low, and that they are already being surpassed through
normal evolutionary developments.

To earmark a 2.5 Gb/s backbone for Abilene - even though, at OC-48, it
is four times the capacity of the NSF's vBNS OC-12 - is, for example, merely
to mimic what @Home is going to be doing during the same time frame.
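
The arithmetic is simple enough, since SONET OC-n line rates are just n
times the 51.84 Mb/s base rate; a quick check, in Python for convenience:

OC1_MBPS = 51.84       # SONET base (STS-1/OC-1) line rate
oc12 = 12 * OC1_MBPS   # the vBNS backbone
oc48 = 48 * OC1_MBPS   # the "2.5 Gb/s" figure
print(f"OC-12 = {oc12:.2f} Mb/s, OC-48 = {oc48:.2f} Mb/s, ratio = {oc48 / oc12:.0f}x")
# -> OC-12 = 622.08 Mb/s, OC-48 = 2488.32 Mb/s, ratio = 4x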

I don't consider this to be as noteworthy today as I might have two or
three years ago, or even last year when the bandwidth donations were
being doled out by some of the fiber barons, QWST in particular.

So we will wind up with the image, at least, of two new super-Internets in
the near future which will boast capabilities beyond those of today's
public Internet. The public Internet as we now know it, meanwhile, will
probably have grown on its own to be an order of magnitude larger and more
functionality-rich than I2 and NGI combined, by the time they are built
and settled in.

It is for these reasons that I feel there is a lot more grandstanding and
bandwagoning going on here than genuine benefit to be derived,
which tends to distract and obscure my view of these matters. While I
think that some huge benefits will be derived from the tested aspects, I am
also aware that there is a lot of nonsense and waste going on in these
types of undertakings at the same time.

In short, one cannot efficiently administer a federally involved project, due
to the red tape that goes with the territory (not only at the federal level,
but, where schools are involved, at the state and municipal levels as well).
Which is not to suggest, however, that there are not many
individuals who benefit from these inefficiencies; only
that commercially motivated undertakings are far more
efficient than those run by the state and federal
governments.

======

Getting back to your question: ATM (and other protocols that support
isochronous services like voice and video with the help of QoS hooks,
or even straight SONET and other subordinate forms of TDM for the
time being) is needed today because equivalent methods in the IP realm
do not yet adequately exist in deployable form. Quality controls (QoS
features) are required here, especially where voice and distance learning
over video-conferencing links are concerned. SNA fits this mold as well,
to a great extent, but IP on the mainframe is now beginning to make a
dent in that notion. I don't know, though, whether this is a cost priority
for academic institutions. Given the number of IBM 4381 mainframes that
were still in existence last year in public learning institutions, I'd say not.
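
To make the voice case concrete, here is a back-of-the-envelope sketch of
my own, assuming a plain 64 kb/s PCM channel carried in AAL1 cells with no
silence suppression, of what one voice circuit costs on an ATM link:

VOICE_BYTES_PER_SEC = 64000 / 8   # 64 kb/s PCM voice channel
AAL1_PAYLOAD_BYTES = 47           # 48-byte cell payload minus 1-byte AAL1 header
CELL_BYTES = 53

cells_per_sec = VOICE_BYTES_PER_SEC / AAL1_PAYLOAD_BYTES
wire_kbps = cells_per_sec * CELL_BYTES * 8 / 1000
print(f"~{cells_per_sec:.0f} cells/s, ~{wire_kbps:.0f} kb/s on the wire")
# -> roughly 170 cells/s and 72 kb/s for a 64 kb/s channel

The overhead is tolerable, and, more to the point, those cells can be
scheduled at a fixed, predictable cadence, which is precisely the
isochronous behavior that the IP side has yet to deliver in deployable
form.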

My gut tells me that NGI (and I2 as well, to some degree) is really going
to be, where it is not already, an amalgamation of sub-networks
supporting disparate services rolled into one framework, characterized by
both switched and packet protocols. IP will occupy the greatest amount
of bandwidth, but I feel that some allocations will also be made for those
die-hard unroutable [or, difficult to route] situations that still exist. Some,
at least, of those services will not be Internet related at all, until some time
in the future when they are transformed at the box level.
====

For a fairly robust description of the NGI program:

ccic.gov

For the I2 initiatives:

internet2.edu

====

As always, comments and corrections are welcome.

Regards, Frank_C.