Technology Stocks : MRV Communications (MRVC) opinions?

To: Dan Spillane who wrote (404), 10/14/1996 7:07:00 PM
From: Dan Ross
 
To Dan and ALL:

I found this on the Amati thread. It's an article from InfoWorld that discusses the pros and cons of FDDI, Fast Ethernet, and ATM as backbone technologies.
But first, a few thoughts of my own.

At this time, I would like to discuss what I feel will happen as a result of the Fibronics acquisition. MRVC now has the ability to offer a wider variety of products to different market segments. Remember, some applications want bang for the buck and would use Gigabit Ethernet. However, other applications will use 10Base-T and the next-generation 100Base-T, which are supposed to have a lower fault rate than Gigabit Ethernet.

I heard from a very, very reliable source that 100Base-T is the next generation and will smoke ADSL. I wonder how involved Fibronics was in R&D in this area.

Anyway, as Dan S said earlier, management is shrewd and is positioning itself quite nicely for the future. Could MRVC be the next CSCO??

Please respond

Dan Ross

Anyway, read at your leisure. THIS IS MODERATELY TECHNICAL and SOME NON-TECHIES CAN LEARN SOMETHING ABOUT THE TECHNOLOGY BY READING THIS!!

Source: InfoWorld

InfoWorld via Individual Inc. : ** NOTE: TRUNCATED STORY **

COMPARED:

155Mbps ATM solution
- Fore Systems ForeRunner ASX-200BX
- Fore Systems ForeRunner ES-3810 Ethernet Workgroup Switch
- Adaptec ATM155 for PCI ANA-5940

Fast Ethernet solution
- Network Peripherals NuSwitch FE-1200
- Asante ReadySwitch 5104
- Asante AsanteFast 10/100 PCI, PC Edition

FDDI solution
- Network Peripherals FD-212F switch
- Interphase 5511 PCI FDDI NIC

Network backbones have undergone a great deal of change in the past decade. Previously called on only to provide a fast, central conduit for data, backbones have evolved into one of the most intelligent, manageable, and important parts of the network. Limited-bandwidth backbones, such as 10Base-T, not only have a negative impact on productivity and system growth, they also hinder the implementation of new network applications. High-speed backbones, on the other hand, offer the prospect of fast, reliable, and highly scalable corporate networks.

Backbones serve two main purposes: They allow a lot of lower-speed clients to access a small number of high-speed servers, and they allow these high-speed servers to talk to each other at a decent rate. Beyond that, an ideal network backbone would provide comprehensive manageability and robust fault tolerance. Manageability allows the backbone to be configured to behave optimally for each different installation; fault tolerance ensures that critical servers remain available even after a failure in part of the network. Unfortunately for IS, there is no single technology that meets all of these demands, so the decision about which network backbone to implement depends on each company's priorities. Ask yourself: Do you care most about cost, speed, or scalability?

TRIED-AND-TRUE VS. THE NEW. We put three high-speed technologies in a head-to-head battle for the LAN backbone: FDDI, the tried-and-true contestant, against the pretenders to the backbone throne, Fast Ethernet (100Base-T) and 155Mbps Asynchronous Transfer Mode (ATM). Considering FDDI's large installed base in corporate networks, Fast Ethernet and ATM need compelling benefits to be real contenders.

FDDI, introduced in the mid-1980s, has become the high-speed network backbone of choice for corporate America. For years, it was the only standard for 100Mbps LAN networking, and its fault-tolerant and straightforward nature has contributed to its success. About half of Fortune 5000 companies have FDDI backbones installed, and it's been used on everything from desktop PCs to Cray supercomputers. Research company Gartner Group Inc. reports that FDDI now accounts for 46 percent of shipped 100Mbps ports and adapters.

Fast Ethernet came into this comparison with the advantage of being a familiar, inexpensive technology. You could say it's the Camaro Z28 of network topologies -- it's not as refined as its competitors, and it lacks a lot of the comforts that more expensive solutions provide. Its sole purpose is to go fast, cheaply. Fast Ethernet's inherent low cost, easy installation, and familiarity to IS all serve to recommend it.

NETWORKING'S FIVE-STAR DELICACY. ATM, on the other hand, is like a five-star restaurant where you can order any delicacy, no matter how exotic -- provided you can describe its exact molecular structure. This is a standard that's supposed to provide data, voice, and video simultaneously over the same infrastructure. In order to provide for all three, ATM uses a complex system of protocols and services that generally lacks a simple interface. Unless ATM vendors quickly clean up LAN emulation's incredibly convoluted configuration process, ATM horror stories will continue to scare people away.

In contrast to FDDI, neither of these two new technologies was developed specifically for the network backbone. Fast Ethernet is basically a desktop technology on steroids, and ATM was developed primarily by and for the telecommunications industry.

In keeping with our goal of establishing the best solution to our readers' business problems, we focused on the relative merits of the three technologies rather than the specific implementations offered by each vendor. For example, some vendors offer proprietary virtual LAN solutions for Fast Ethernet that approach the robustness of ATM. However, because these solutions are costly and nonstandard, we didn't look at them.

Perhaps not surprisingly, we encountered some vendor resistance to our solutions-based approach while preparing this comparison. Many vendors we spoke to do business in all three technologies, and their preferences became apparent when we requested one or another of the technologies for our tests. Ironically, some vendors declined to supply FDDI equipment specifically because they thought the solution would lose; other vendors are so invested in Fast Ethernet that they didn't want to be associated with another technology.

Some vendors acknowledged the validity of our comparison and accepted that a judgment on a particular technology is not necessarily an indictment of their product lines. For example, the problems we encountered with the ATM solution are representative of the current state of ATM technology and not a reflection on the quality of the particular vendor's products.

The ATM solution in this comparison came from Fore Systems Inc. and Adaptec Inc. Fore Systems made up the lion's share of this solution by supplying one ForeRunner ASX-200BX backbone switch and two ForeRunner ES-3810 Ethernet Workgroup Switches; we used these in conjunction with Adaptec's ATM155 for PCI ANA-5940 network interface cards (NICs).

The Fast Ethernet solution was made up of a Network Peripherals Inc. NuSwitch FE-1200 backbone switch, combined with two Asante Technologies Inc. ReadySwitch 5104 edge switches and five Asante AsanteFast 10/100 PCI, PC Edition, NICs.

Products from two vendors also made up the FDDI solution: Network Peripherals and Interphase Corp. We used two Network Peripherals FD-212F edge switches and five Interphase 5511 PCI FDDI NICs.

In order to best represent a general backbone implementation, we set up a test network that mimicked a small business scenario, covering issues of distance, performance, and administration. We used a total of five servers, one backbone switch, two edge switches, and 56 clients. Each server was connected directly to the backbone switch, which was then connected to each of the edge switches using the same backbone technology. The two edge switches each supported four 10Base-T segments. On each of those segments, we had between five and eight typical clients. We tested each technology in the categories of setup and installation, speed (day and night benchmarks), manageability, troubleshooting and error recovery, and price.
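For readers who want to picture the layout, here is a rough sketch of that test topology written out as plain Python data. The device names are made up, and it assumes a uniform seven clients per segment to reach the 56-client total, whereas the review actually varied between five and eight per segment.

# Rough sketch of the review's test topology: 5 servers on the backbone
# switch, 2 edge switches uplinked to it, 4 x 10Base-T segments per edge
# switch, 7 clients per segment (all names invented for illustration).
backbone = {
    "backbone_switch": {
        "servers": [f"server-{i}" for i in range(1, 6)],
        "edge_switches": {
            f"edge-{e}": {
                f"segment-{s}": [f"client-{e}-{s}-{c}" for c in range(1, 8)]
                for s in range(1, 5)
            }
            for e in range(1, 3)
        },
    }
}

clients = [
    client
    for edge in backbone["backbone_switch"]["edge_switches"].values()
    for segment in edge.values()
    for client in segment
]
print(len(clients), "clients on 10Base-T segments")  # 2 x 4 x 7 = 56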

FAULT TOLERANCE MAKES THE DIFFERENCE. These technologies differ radically in the way they move data across the network, and each one has something to make it worthy of a recommendation. But in the end, FDDI emerged as a valiant winner despite its generally lower profile. In many ways, this comes as no surprise, because FDDI was originally developed for use in network backbones. Its strong fault-tolerant features, which were built in from day one, set this technology apart from its rivals.

Unfortunately, it looks as though this could be a bittersweet victory for FDDI. Even though it won the comparison, the lack of further development means that its days are probably numbered. FDDI-II has been released, but it looks like it will not gain a strong enough foothold to survive. The next few years will bring offerings from both the Ethernet and ATM camps that will eclipse FDDI's performance, even if they can't match its fault tolerance. But for now, FDDI remains champion of the backbone.

Technology overview

ATM was fundamentally a child of telephone companies and not designed with the LAN in mind.

With Asynchronous Transfer Mode (ATM), Fast Ethernet, and FDDI, today's network planner has three completely different ways to accomplish essentially the same thing: getting data from one computer to another -- fast. Where the technologies diverge, however, is in the method of transmission and the vision of just how much a network topology should do.

ATM

ATM represents a radical departure from the traditional LAN format. ATM's basic unit is a cell rather than a packet, and that cell has just 48 usable bytes (with an additional 5 for header information). This was partly necessary in order to provide for the very short latency required by multimedia. (In fact, the 48-byte structure was a compromise between U.S. telephone companies, which favored a 64-byte payload, and their European counterparts, which wanted 32 bytes in each cell.)
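To make the cell format concrete, here is a minimal Python sketch of the arithmetic. It is my own illustration built from the figures quoted above, and it deliberately ignores the AAL5 framing and padding a real adapter would add.

# A 53-byte ATM cell: 5-byte header plus 48-byte payload (figures from the
# article; this sketch ignores AAL5 trailers and padding).
ATM_HEADER_BYTES = 5
ATM_PAYLOAD_BYTES = 48
ATM_CELL_BYTES = ATM_HEADER_BYTES + ATM_PAYLOAD_BYTES  # 53 bytes on the wire

def cells_needed(frame_bytes: int) -> int:
    # Ceiling division: how many cells a frame of this size occupies.
    return -(-frame_bytes // ATM_PAYLOAD_BYTES)

print(cells_needed(1516))  # a max-size emulated Ethernet frame -> 32 cells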

ATM devices talk to each other over virtual circuits that can either be permanent or temporary. A permanent virtual circuit is a path that stays open even when no traffic is using it. Most virtual circuits are set up when needed and discarded when there's no more data to be sent. Typically, permanent virtual circuits are set up between multiple backbone switches and between switches and devices with heavy network traffic.
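As a loose illustration of that distinction (a toy model, not vendor API code), the difference comes down to whether the circuit survives being idle:

from dataclasses import dataclass

# Toy model of the permanent vs. temporary circuit distinction (names invented).
@dataclass
class VirtualCircuit:
    source: str
    destination: str
    permanent: bool = False  # True = PVC, False = set up on demand

    def on_idle(self) -> str:
        return "stays open" if self.permanent else "torn down"

backbone_trunk = VirtualCircuit("backbone-switch", "edge-switch", permanent=True)
file_transfer = VirtualCircuit("client-1", "server-1")
print(backbone_trunk.on_idle(), "/", file_transfer.on_idle())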

From the start, ATM was designed to be switched, use virtual circuits, provide quality of service (QOS) guarantees, and support both fixed- and variable-rate transmissions -- no small order! ATM's QOS model allows applications to request a guaranteed transmission rate between their host system and their destination, whether it's on the other side of a single switch or around the world. Each ATM switch between the two negotiates with the next switch until a path is found that guarantees the requested bandwidth.

If no such path is available, the application is notified and it can take appropriate action. Unfortunately, modern protocols and applications have no idea about QOS, so for most LANs, this is just one more very nifty but unused feature.
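The hop-by-hop idea can be pictured with a toy admission check. The switch names and free-capacity numbers below are invented, and real ATM signaling is far more involved than this.

# Toy admission check: a request succeeds only if every switch on the path
# can still reserve the requested rate (all names and numbers invented).
path_free_mbps = {"switch-1": 60, "switch-2": 45, "switch-3": 90}

def request_qos(requested_mbps: float) -> bool:
    if all(free >= requested_mbps for free in path_free_mbps.values()):
        for hop in path_free_mbps:
            path_free_mbps[hop] -= requested_mbps  # reserve on every hop
        return True
    return False  # caller is told no path could guarantee the rate

print(request_qos(40))  # True: every hop had at least 40Mbps free
print(request_qos(40))  # False: switch-2 is now down to 5Mbps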

With all of ATM's sophistication, it's no wonder that people want to deploy it. But actual implementation of the technology so far has mostly been limited to telephone companies who designed it for other purposes.

For instance, ATM has no built-in broadcast facility (as with much of ATM, the idea is there, but the standards aren't). And although excessive broadcasts are the bane of network administrators, some broadcasts are essential. Clients looking for servers have to be capable of sending out a broadcast "where are you?" message and waiting for a reply before they start sending directed (unicast) packets.

The ATM Forum's initiative to make ATM suitable for the LAN environment is called LAN emulation (LANE). LANE takes a point-to-point-oriented ATM network and allows today's servers and clients to see the network as a broadcast-capable Ethernet or Token Ring network, warts and all, while still running IP (and soon, IPX). LANE is implemented by four services: the LAN emulation configuration server (LECS), the LAN emulation server (LES), the Broadcast and Unknown Server (BUS), and the LAN emulation client (LEC).

When a LANE client attaches to the ATM network, it first talks to the LECS. Because ATM doesn't allow for broadcasts, the LECS is located at a "well-known address" -- a special ATM address that the ATM Forum set aside to be the default LECS address. The LECS provides the LEC with the address of the LES appropriate for that client. The client then contacts the LES, which serves to control the functions of the emulated LAN (ELAN) that the client is joining. The LES provides the BUS address and notifies the BUS that the client has joined, so the BUS can send future "broadcasts" to that client.
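That join sequence is easier to follow as a schematic trace. The sketch below only mirrors the order of operations described above; the client name is made up and the address strings are placeholders rather than real ATM addresses.

# Schematic trace of the LANE join sequence (addresses are placeholders).
def join_elan(client: str) -> None:
    print(f"{client} -> LECS (well-known address): which LES should I use?")
    print(f"LECS -> {client}: here is the LES address for your ELAN")
    print(f"{client} -> LES: joining the emulated LAN")
    print(f"LES -> {client}: here is the BUS address")
    print(f"LES -> BUS: add {client} to the broadcast list")
    print(f"{client} -> BUS: ready to receive emulated broadcasts")

join_elan("client-1")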

In order to use a non-native protocol on an ATM network, the client has to run an LEC. The LEC provides a bridge between the true ATM nature of the network and the traditional LAN topology that protocols such as IP expect. Because LANE is only emulating Ethernet, it can avoid some of the older topology's pitfalls. Each ELAN can have a different packet size, allowing ELANs that are used to serve data to Ethernet workstations to operate at Ethernet's 1,516-byte packet size, whereas ELANs that are solely for interserver communication can use a 9,180-byte packet. All this is handled by the LEC.
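A per-ELAN frame-size table captures the idea. The two sizes are the ones quoted above; the ELAN names and the transfer size are invented, and the calculation ignores per-frame header overhead.

# Per-ELAN maximum frame sizes as described above (ELAN names invented).
elan_max_frame_bytes = {
    "workstation-elan": 1516,  # matches the Ethernet clients it serves
    "server-elan": 9180,       # interserver ELAN can use much larger frames
}

def frames_for_transfer(elan: str, payload_bytes: int) -> int:
    # Ceiling division: frames the LEC would cut this transfer into.
    return -(-payload_bytes // elan_max_frame_bytes[elan])

print(frames_for_transfer("workstation-elan", 1_000_000))  # 660 frames
print(frames_for_transfer("server-elan", 1_000_000))       # 109 frames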

The LEC also intercepts broadcast packets and sends them to the BUS's ATM address. When the BUS receives a broadcast packet, it sends out one copy to each of the registered LECs, which in turn translate it back to an Ethernet packet with the broadcast address before passing it to the client's protocol stack.
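A minimal sketch of that fan-out, with a made-up client list and payload:

# Minimal sketch of BUS fan-out: one copy per registered LEC, re-wrapped as
# an Ethernet broadcast frame (client list and payload are made up).
registered_lecs = ["lec-1", "lec-2", "lec-3"]

def bus_forward(broadcast_payload: str) -> list[str]:
    delivered = []
    for lec in registered_lecs:
        # Each LEC turns the cells back into an Ethernet frame addressed to
        # ff:ff:ff:ff:ff:ff before handing it to the local protocol stack.
        delivered.append(f"{lec}: ethernet broadcast <{broadcast_payload}>")
    return delivered

for line in bus_forward("where are you?"):
    print(line)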

ATM's 5 bytes of overhead per 48 bytes of data means that it can only use about 90.5 percent of its bandwidth for data -- so a 155Mbps ATM connection is really only supplying 140Mbps of data -- and that's before taking into account the additional traffic to and from the BUS and LECS found in a LANE environment.
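The cell-tax arithmetic is simple enough to check directly:

# The cell-tax arithmetic from the paragraph above.
payload_bytes, cell_bytes = 48, 53
efficiency = payload_bytes / cell_bytes
print(f"{efficiency:.4f}")            # 0.9057 -> the "about 90.5 percent" above
print(f"{155 * efficiency:.0f}Mbps")  # roughly 140Mbps of a nominal 155Mbps link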

Yes, ATM is complicated, and as long as it's limited to LANE implementations, it's probably too complicated for widespread adoption. ATM's real hope for widespread adoption lies in the development of ATM-aware applications.

Fast Ethernet

Ethernet (10Base-T), for all its success, has never been a model of elegance. Pundit Raj Jain once likened Ethernet to a party where communication is accomplished by everyone screaming at once. Ethernet adapter cards have only a very rudimentary intelligence: All they really do is shout and check to see if someone else shouted at the same time.

Like its slower cousin, Fast Ethernet (100Base-T) uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD). That's a mouthful of an acronym for a very simple technology. When an Ethernet device needs to send a packet, it first waits for a lull in network traffic. Then it puts its packet on the wire and simultaneously listens to see if any other device sent something at the same time, which would cause a collision and garble both packets. If there wasn't a collision, the device keeps sending as needed, always pausing at least a microsecond between packets to allow other devices a chance at the wire. If a collision did occur, both devices wait a random period of time and try again, repeating the process until no collision occurs.
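Here is a toy simulation of that send, listen, and back-off loop. It is purely illustrative: the busy-wire and collision probabilities are invented, and real adapters use slot-time-based exponential backoff rather than these made-up delays.

import random
import time

# Toy CSMA/CD loop: carrier sense, transmit, detect collision, back off.
def wire_is_busy() -> bool:
    return random.random() < 0.3   # pretend the wire is busy 30% of the time

def collided() -> bool:
    return random.random() < 0.2   # pretend 20% odds of a collision

def send_packet(station: str) -> None:
    attempt = 0
    while True:
        while wire_is_busy():      # wait for a lull in network traffic
            time.sleep(0.001)
        attempt += 1
        if not collided():         # listen while transmitting
            print(f"{station}: packet sent after {attempt} attempt(s)")
            return
        # Collision: wait a random period, then try the whole thing again.
        time.sleep(random.uniform(0, 0.01) * attempt)

send_packet("station-A")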

It's because of this "everyone screaming at once" scheme that Ethernet, including Fast Ethernet, can never achieve its full speed, be it 10Mbps, 100Mbps, or faster. As traffic increases, the gaps between packets narrow and collisions increase. A good rule of thumb is that Ethernet tops out at about 70 percent of its potential bandwidth, and it can be reduced to a mere fraction of its capability if severely overloaded.

** NOTE: This story has been truncated from its original size in order to facilitate transmission. If you need more information about this story, please contact Individual at 1-800-766-4224. **

[10-05-96 at 19:50 EDT, Copyright 1996, InfoWorld]