

To: J Fieb who wrote (27477) | 7/13/2000 2:33:08 AM
From: Gus
 
Thanks J. More on Classes of Service....

CLASS OF SERVICE: OPTIMIZING DATA TRANSPORT IN
FIBRE CHANNEL SYSTEMS

mcdata.com

EXECUTIVE OVERVIEW

Fibre Channel technology combines the best
features of traditional I/O channel technologies
with those of networking technologies. The two
worlds of I/O and networking have, until now,
been separated by distinct protocols and
distinct interconnect architectures. SCSI, for
example, is a popular I/O channel technology,
and Ethernet is a popular networking
technology.

Many of the traditional I/O channel interconnects
offer point-to-point links between devices, have
distance limitations, and support high data
reliability, low latency, and error detection. They
preserve data integrity as well as the sequence
of data, making them excellent for storage
applications. Host server-to-storage is the
classic I/O model.

Networking technologies, on the other hand,
have been typified by low-cost connections
among a very large number of devices over
campus-wide distances or greater. Under load,
when the network experiences high levels of
contention, these technologies will literally drop
frames. The downside is that these dropped
frames have to be resent, resulting in what is
called a "high packet retransmission rate." The
upside is that this type of architecture enables
simple, low-cost communications for local area
networks. Client/server and server-to-server
communication are classic networking models.

The Fibre Channel standard defines high-speed
serial connections capable of maintaining high
reliability between a large number of connections
across campus-wide distances or greater. It is
an excellent choice, not only for traditional
mass-storage applications, but also for newer
requirements that locate storage devices at
some distance from the host server. This
physical de-coupling of the traditional
host/storage pairing gives enterprises improved
availability, scalability, and serviceability,
because storage devices can now be physically
located for convenient access by service
personnel or to accommodate future expansion
plans, or they can be consolidated for easier
management. In addition, these channel-like
connections provide high reliability for
mission-critical server clusters and other
data center-class applications.

The Fibre Channel standard defines a fabric of
switched connections among multiple devices
where each device can be located up to 10
kilometers from the switch. End-to-end distances
are extendible by cascading multiple switches.
A single Fibre Channel switch can connect
servers, supercomputers, mass storage, and
workstations in a switched configuration similar
to a telephone exchange, while retaining
channel-like characteristics. This networking
aspect of the standard makes it especially attractive
for establishing simultaneous exchanges among
multiple devices. Compared to Ethernet networks,
Fibre Channel provides superior performance and
important feature enhancements such as flow
control and Class of Service.

A frame is the most basic element of a message in Fibre
Channel data communications. It consists of a 24-byte
header and zero to 2112 bytes of data. A series of related
frames, when strung together in numbered order, creates a
sequence which can be transmitted over a Fibre Channel
connection as a single operation. A group of sequences
which share a unique identifier is called an exchange. All
sequences within a given exchange use the same protocol or
set of data communication conventions, including timing,
control, formatting, and data representation. Frames from
multiple sequences can be multiplexed to prevent a single
exchange sequence from consuming all the bandwidth.
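To make the frame/sequence/exchange hierarchy concrete, here is a minimal Python sketch. The 24-byte header and the 0-to-2112-byte payload limit come from the text above; the class and field names are simplified stand-ins, not the real frame header layout:

```python
from dataclasses import dataclass, field
from typing import List

HEADER_SIZE = 24     # fixed 24-byte frame header (per the text)
MAX_PAYLOAD = 2112   # data field: zero to 2112 bytes (per the text)

@dataclass
class Frame:
    seq_id: int      # which sequence this frame belongs to
    seq_cnt: int     # the frame's position within its sequence
    payload: bytes   # zero to 2112 bytes of data

    def __post_init__(self):
        if len(self.payload) > MAX_PAYLOAD:
            raise ValueError("payload exceeds the 2112-byte limit")

@dataclass
class Sequence:
    """A series of related frames, numbered in order and
    transmitted as a single operation."""
    seq_id: int
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Exchange:
    """A group of sequences sharing one identifier and one
    set of data communication conventions."""
    exchange_id: int
    sequences: List[Sequence] = field(default_factory=list)

def split_into_frames(seq_id: int, data: bytes) -> Sequence:
    """Carve a buffer into numbered frames of at most 2112 bytes."""
    frames = [Frame(seq_id, i, data[off:off + MAX_PAYLOAD])
              for i, off in enumerate(range(0, len(data), MAX_PAYLOAD))]
    return Sequence(seq_id, frames)

# Example: 5000 bytes becomes three numbered frames (2112 + 2112 + 776).
seq = split_into_frames(seq_id=1, data=b"\x00" * 5000)
assert [len(f.payload) for f in seq.frames] == [2112, 2112, 776]
```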
Fibre Channel uses credit-based flow control to
prevent the possibility of frames being transmitted
so rapidly that the buffers overflow and lose data.
Flow control is a method of exchanging parameters
between two connected devices and managing the rate
of frame transmission. It enables concurrent
multiple exchanges. Some Classes of Service use
end-to-end flow control to ensure that lost frames
are detected. All Classes of Service use
buffer-to-buffer flow control, which manages data
between devices on two attached nodes along the
communication path, but not all the way from the
source to the destination. Credit-based flow
control prevents congestion in the fabric, so no
more frames can enter the fabric than it can handle.
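A minimal sketch of how the credit mechanism works, assuming the buffer-to-buffer credit model described above. The class name and API are illustrative; in real Fibre Channel the credit count is negotiated at login and replenished by R_RDY primitives:

```python
class CreditedLink:
    """Toy model of buffer-to-buffer credit flow control on one link.

    The sender starts with the credit count negotiated at login and
    may only transmit while credits remain, so the receiver's buffers
    can never be overrun by the fabric itself.
    """
    def __init__(self, bb_credit: int):
        self.credits = bb_credit   # outstanding buffer-to-buffer credits

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self):
        if not self.can_send():
            raise RuntimeError("no credits: sender must wait for R_RDY")
        self.credits -= 1          # one receive buffer is now occupied

    def receive_r_rdy(self):
        self.credits += 1          # receiver has freed a buffer

# Example: with BB_Credit = 2, a third frame must wait for an R_RDY.
link = CreditedLink(bb_credit=2)
link.send_frame()
link.send_frame()
assert not link.can_send()
link.receive_r_rdy()
link.send_frame()
```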
All Fibre Channel links offer full duplex communication,
meaning that frames can flow simultaneously in both
directions between any two nodes. Current Fibre Channel
products accommodate the transfer of data at 1 gigabit per
second. A full-duplex implementation means that data can
be transferred simultaneously both ways over two
unidirectional connections, at an aggregate rate of 2 Gbps.
The benefit of full duplex is that the receiving device can
be transmitting acknowledgments and data at the same
time it is receiving data. Ethernet and ATM are half-duplex
technologies.

Latency is a measurement of the time it takes to send a frame
between two locations. Low latency is a fundamental requirement
for storage applications and is typical of I/O channel technologies.
Fibre Channel connections are characterized by low latency,
which is associated with high performance and high efficiency in
data transport.

One example of a Fibre Channel connection is shown
in Figure 1, where two servers communicate with
each other through a Fibre Channel switch, which
routes the data. Each server uses an add-in card
called a host bus adapter to connect its system
bus with an external device. Server A sends a
sequence of frames through its Fibre Channel host
bus adapter to Fibre Channel Switch B, which sets
up a temporary connection with Server C and passes
the frames on to that server. When this type of
connection is expanded to include more than one
switch and multiple nodes, it is called a switched
fabric, because it allows multiple simultaneous
connections to carry sequences among nodes.
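A toy model of the Figure 1 path, with invented port and ID names, just to show the switch forwarding frames by destination ID:

```python
# Hypothetical model of the Figure 1 path: Server A -> Switch B -> Server C.
class Switch:
    def __init__(self):
        self.ports = {}                 # destination ID -> attached device

    def attach(self, dest_id: str, device):
        self.ports[dest_id] = device

    def route(self, frame: dict):
        """Forward a frame to whichever node owns its destination ID."""
        self.ports[frame["d_id"]].receive(frame)

class Server:
    def __init__(self, name):
        self.name = name

    def receive(self, frame):
        print(f"{self.name} got frame {frame['seq_cnt']} from {frame['s_id']}")

switch_b = Switch()
switch_b.attach("C", Server("Server C"))

# Server A sends a short sequence through its host bus adapter.
for n in range(3):
    switch_b.route({"s_id": "A", "d_id": "C", "seq_cnt": n})
```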

Because the technology provides a single
infrastructure for both server-to-storage and
server-to-server connections, a single switch
can accommodate both types of applications.
This dual functionality reduces the overall
effective cost because an infrastructure based
on a single technology and fewer systems is
required to implement complex connections.

Fibre Channel is gradually becoming the de facto
connectivity standard for storage and server
interconnects. Storage applications are the
fastest growing market area. Dataquest projects
that, by the year 2000, more than 50 percent
of all multi-user storage systems will be
attached to host servers via Fibre Channel links.

CLASS 1: DEDICATED CONNECTIONS AND MESSAGE
CONFIRMATION

Class 1 establishes a dedicated end-to-end
connection through the fabric between the source
and destination devices. If the destination device
is busy, the connection will not be granted.
However, after the connection is made, it will be
retained until the communication is complete. With
end-to-end flow control, every frame is
acknowledged by the destination device back to the
host bus adapter on the source device. Class 1
guarantees that maximum bandwidth will be
available between the two devices and ensures that
frames will be delivered to the destination in the
same order they were transmitted.

Class 1 reserves a full 100 percent of the
available bandwidth between two devices while
the connection is maintained. In applications
that call for transfers of large blocks of data,
such as scientific modeling or imaging, this type
of connection makes good use of the available
bandwidth. Also, it best serves applications
where the time required to set up the dedicated
connection is much less than the time required
to transmit the large blocks of data and all of
the bandwidth can be fully utilized.

On the other hand, if the application typically
sends sequences that use only 20 percent of the
available bandwidth, then 80 percent of the
bandwidth will remain idle and unused because
Class 1 disallows other connection requests. In
this situation, the Class 1 connection is said to
be "blocking" the fabric by tying up access to the
two devices. For a busy fabric with many
messages contending for access, this
constitutes an inefficient use of resources.
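To put numbers on that blocking effect, a quick back-of-the-envelope sketch; the 20 percent figure is the example above, and the 1 Gbps link rate is from the text:

```python
# Class 1 reserves the full link for one exchange, so utilization is
# just the fraction of the connection time the application fills.
link_gbps = 1.0          # current Fibre Channel link rate per the text
offered_fraction = 0.20  # application fills only 20% of the bandwidth

used_gbps = link_gbps * offered_fraction
idle_gbps = link_gbps - used_gbps
print(f"used {used_gbps:.1f} Gbps, idle {idle_gbps:.1f} Gbps "
      f"({idle_gbps / link_gbps:.0%} of the link blocked to all others)")
```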

Historically, Class 1 was the first service to be
available in commercial products. For a period
of time, it was the most widely implemented
Fibre Channel connection. Class 1 guarantees
performance and in-order delivery of data
frames. Because of these two factors, some
believe that Class 1 service is necessary to
achieve reliable Fibre Channel connections and
the best performance.

Contrary to this belief, other Class of Service
options can offer better performance and better
use of available bandwidth. Today's Class 2 and
Class 3 switched connections let the IT
manager optimize installations and improve
total performance over that of Class 1 dedicated
end-to-end connections.

CLASS 2: SHARED BANDWIDTH CONNECTIONS WITH MESSAGE
CONFIRMATION

Like Class 1, Class 2 provides a robust link
between source and destination devices.
Sometimes called "connectionless," it uses
switched connections to create simultaneous
exchanges among multiple devices. Though no
dedicated connections are established through
the fabric, each frame is acknowledged by the
destination device back to the originating device
to confirm receipt.

Frames are routed through the fabric, and each
frame may take a separate route. The guarantee of
in-order delivery of frames is an option under the
Fibre Channel standard, but vendors like McDATA
have designed their Class 2 and 3 Fibre Channel
switches to provide this functionality. In-order
delivery is essential for the support of storage
protocols such as SCSI over Fibre Channel.

Because Class 2 allows the fabric to multiplex
several messages on a frame-by-frame basis, it
provides a more efficient use of fabric resources.
Since data delivery is confirmed, Class 2 is
excellent for mass-storage applications as well
as server clusters that support mission-critical
applications. This highly reliable, yet
connectionless, Class of Service is unique to
Fibre Channel and is one of the reasons why
Fibre Channel is expected by many industry
analysts to dominate the mass storage and
clustering network markets. With the Class 2
ability to share bandwidth, users can obtain all
the advantages of Class 1 with significantly
improved and more efficient performance.

CLASS 3: SHARED BANDWIDTH CONNECTIONS WITHOUT MESSAGE
CONFIRMATION

Class 3 is similar to Class 2's connectionless,
switched links, except that received frames are
not acknowledged. For this reason, Class 3 is
sometimes called a "datagram" service. It uses
buffer-to-buffer flow control, which prevents the
loss of frames between any two devices along
the data transport route, but does not support
true acknowledgment and end-to-end flow
control like Class 2.

Although the fabric won't lose data, the receive
buffers in the destination device can still be
overrun and data can be lost. In this situation,
an upper layer protocol must take over the error
recovery process and request a resend of the
dropped frames.
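A toy sketch contrasting the two delivery models: Class 2 returns per-frame feedback to the sender, while in Class 3 an overrun frame simply disappears and an upper layer protocol must detect the gap and request a resend. All names and the busy-response behavior here are simplified illustrations:

```python
def deliver(frames, rx_buffers, acknowledged: bool):
    """Toy delivery model. Returns (delivered, feedback)."""
    delivered, feedback = [], []
    free = rx_buffers
    for f in frames:
        if free > 0:
            free -= 1
            delivered.append(f)
            if acknowledged:
                feedback.append(("ACK", f))   # Class 2: per-frame ACK
        elif acknowledged:
            feedback.append(("BSY", f))       # Class 2: sender learns at once
        # Class 3: an overrun frame vanishes silently
    if not acknowledged:
        # Upper layer protocol notices missing frames and re-requests them.
        feedback = [("RESEND", f) for f in frames if f not in delivered]
    return delivered, feedback

frames = list(range(5))
print(deliver(frames, rx_buffers=3, acknowledged=True))   # Class 2 style
print(deliver(frames, rx_buffers=3, acknowledged=False))  # Class 3 style
```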

A Class 3 option, multicast, allows the device to
address a multicast group and broadcast a
transmission to multiple destination nodes. The
switch actually assumes responsibility for
distributing the same information to each
member of the multicast group. Multicast is
important for applications like video servers and
cluster interconnect. McDATA's Class 3
switches use multicast to provide efficient
delivery of information at a level of functionality
approaching the proposed Class 6 operation.
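A minimal sketch of the multicast idea, assuming a simple group-registration model; real Fibre Channel multicast uses an alias server to manage group membership, so this is illustrative only:

```python
class MulticastSwitch:
    """Toy Class 3 multicast: the switch replicates one inbound
    frame to every node registered in the addressed group."""
    def __init__(self):
        self.groups = {}                        # group id -> list of nodes

    def join(self, group_id: str, node):
        self.groups.setdefault(group_id, []).append(node)

    def send(self, group_id: str, frame):
        for node in self.groups.get(group_id, []):
            node.receive(frame)                 # switch does the distribution

class Node:
    def __init__(self, name):
        self.name = name

    def receive(self, frame):
        print(f"{self.name} <- {frame}")

sw = MulticastSwitch()
for name in ("video-1", "video-2", "video-3"):
    sw.join("GROUP-A", Node(name))
sw.send("GROUP-A", "frame 0")   # one transmission, three deliveries
```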

Class 3 optimizes Fibre Channel's classic
networking mode. As with ATM and Ethernet
networks, Fibre Channel allows the network to
discard frames if errors are detected. As such, it
is an excellent choice for networking applications
(e.g., TCP/IP) whose upper layer protocols
provide their own integrity checks and
automatic retransmission of lost data.

Class 3 appeared earlier on the market than
Class 2 because of demand for loop
applications. Fibre Channel Arbitrated Loop
(FC_AL) works well with a Class 3 environment
because its loop topology keeps sequential
frames in order without intervention from the
transmission protocol. In a loop, all of the
attached devices share the loop's bandwidth.
Because of this, loop connections tend to be
simpler than fabric connections, and the
processing associated with traffic routing is
distributed between attached devices. Class 3
has emerged as the preferred configuration for
FC_AL loop applications and for use in the
design of disk storage subsystems.

The Fibre Channel standard allows out-of-order
delivery of frames and relies on upper layers of
software to re-order frames in the proper
sequence. McDATA switches, however, provide
in-order delivery to ensure that all frames arrive
in the proper order, even in multi-switch fabrics.
This is accomplished within the McDATA
shared memory architecture that is used for
both Class 2 and Class 3 operations.

INTERMIX

Initially defined as a subset of Class 1, intermix
allows a few Class 2 or Class 3 frames to take
advantage of idle moments in a Class 1
connection when it isn't transmitting data. In
actual practice, Class 1 connections tend to
block the fabric and prevent most other
transmissions from getting through. Intermix
was developed to address the inefficiencies of
Class 1 and to make it more like Class 2 and 3.
By definition, Class 2 and 3 support intermix.

CLASS 4

Class 4 is intended mainly for multimedia
applications. It divides Fibre Channel
bandwidth into 256 subgroups or virtual
channels for guaranteed fractional bandwidth
with Quality of Service levels. The intent is to
create true non-blocking, continuous, packetized
service across a Fibre Channel fabric. Specific
applications include allocation of guaranteed
bandwidth by department within an enterprise,
as well as Quality of Service support for
multimedia applications such as video.
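A back-of-the-envelope sketch of the fractional-bandwidth idea; the 256 virtual channels figure is from the text, while the allocation policy and numbers are invented for illustration:

```python
# Class 4 carves one link into up to 256 virtual channels, each with
# a guaranteed fraction of the bandwidth. Illustrative allocation only.
LINK_GBPS = 1.0
VIRTUAL_CHANNELS = 256

allocations = {"video": 0.50, "dept-a": 0.25, "dept-b": 0.125}
assert len(allocations) <= VIRTUAL_CHANNELS
assert sum(allocations.values()) <= 1.0, "cannot guarantee more than the link"

for vc, fraction in allocations.items():
    print(f"VC {vc}: guaranteed {fraction * LINK_GBPS * 1000:.0f} Mbps")
print(f"unallocated: {(1 - sum(allocations.values())) * LINK_GBPS * 1000:.0f} Mbps")
```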

CLASS 5

In Class 5, also called true isochronous service,
distribution of files across the network requires
specific timing between nodes. It will be useful
for mixed data and voice transmissions or for
applications that require the data to be
displayed immediately as it arrives, without
buffering. Both the broadcast and video markets
will be interested in Class 5, because it allows
text, graphics, video, and voice to be distributed
separately across the fabric and then
reassembled based on timing control signals
from a master clock.

CLASS 6

Class 6 is a variant of Class 1, designed
specifically for audio, video, and broadcast
applications that require multicast functionality.
After a Class 6 connection is established, it is
maintained and guaranteed by the fabric until the
connection initiator concludes the transmission.

SUMMARY

While Class 1 offers highly reliable, dedicated
end-to-end connections, its tendency is to block
the fabric and monopolize both the sending and
receiving devices. Other Class of Service options,
Class 2 and 3, provide excellent reliability and
greater system-wide data throughput.

Today, most Fibre Channel devices support
multiple classes. However, most users will end
up using the default class defined by the
products they buy. Knowledgeable users, if
given the right tools, should be able to select a
class that will maximize interconnect performance
for their applications. When Fibre Channel
becomes entrenched in the market, the majority
of users will not be concerned about what Class
of Service they are using, because the device
drivers and utilities will manage the Class of
Service used.

McDATA Corporation provides innovative data
center networking solutions that optimize
performance for large-scale, enterprise applica-
tions. Its line of Fibre Channel switches
currently supports both Class 2 and Class 3.



To: J Fieb who wrote (27477) | 7/13/2000 4:41:37 AM
From: Gus
 

The data center will want components that have lots of redundancy.

I agree. The R in RAID stands for Redundant, but equally important are the issues related to failover paths and costs, taken in conjunction with the requirements of the application.

I watched the SUNW webcast on the T3. In our town we have a few streets in the historic part of town made of brick. SUNW's concept is similar in that they rolled out one of the little bricks and then at the end they showed a giant wall built of the same unit. I think they will sell them.

We're going to disagree on this point, J, so be forewarned. <g>

I watched it too, but I also watched Sun roll out the T7000, Sun's big-box EMC-killer specifically designed for the last pre-Y2K corporate upgrade cycle, a few years ago. If you recall, they screamed superior cost-benefits and fair pricing to the heavens, but ultimately failed in the effort, primarily because a storage vendor doesn't just waltz into the datacenter with a shiny big box and reams of fancy press releases, especially when all the big shops were grappling with the labor-intensive Y2K problem.

What compounded Sun's problem was that their big box was delayed for nearly a year due to persistent problems with its microcode. That's the telltale sign of a vendor not well versed in the complex I/O issues involved in large-scale computing environments, which typically have a rich mix of networked mainframes, UNIX servers and Windows servers.

Just so you know where I'm coming from, this is my general sense of the enterprise computing market.

Mainframes remain a flat market because the price per MIPS is dropping much faster than on any other platform. It now looks like IBM is changing the rules of the IBM-compatible mainframe market (70% market share) by changing its pricing policies to improve the overall value proposition of the venerable mainframe.

In terms of units sold, the WinTel platform started to exceed UNIX in 1996/1997, although UNIX still retains a substantial lead in terms of total revenues, with Sun's SPARC/Solaris accounting for a dominant 30%-32% market share.

That's the backdrop for Sun's failed attempt with the big box that preceded their latest modular storage offering. Back to the basic brick strategy, as you aptly put it, but that uncharacteristic failure with the datacenter I/O issues is like red meat in a sea full of sharks.

Now, I realize that some of the folks here think that Scott McNealy can walk on the swampwater near Sun HQ, but his statement earlier this year that "STORAGE IS JUST A FEATURE" (probably more a sign of frustration than anything else) may have already sealed the fate of Sun's latest storage initiative.

The same ole, same ole, in other words. Sun may sell more storage to its Solaris installed base -- again, the dominant UNIX platform with 30% to 32% market share -- but it won't make much headway outside of it for two reasons.

1) It still is not a credible vendor for storage systems that HAVE to support mainframes, UNIX and Windows.

EMC surveys its customer base regularly, and the numbers consistently show that 90% of its customers are still developing applications for the Wintel platform, while 70% of that same customer base are also developing applications for the UNIX platform (30% to 40% of it Solaris).

2) An even more fundamental problem for a vendor selling servers and storage as a package is the inexorable budgetary trend shown below.
What makes Sun's position ultimately precarious is that it is the only vendor among the global top 5 that is not selling the high-volume Wintel platform. Try as they might, even the DOJ's O/S case couldn't kill off Wintel.<g>

IT BUDGETARY SHIFTS AND KEY IT MILESTONES

1996 - 25%/75% Storage/Server IT budget
1999 - 50%/50% Storage/Server IT budget

2003 - 80%/20% Storage/Server IT budget (IDC)
2004 - 75%/25% Storage/Server IT budget (Dataquest)

2005 - Intel's 1 billion interconnected PC milestone.
2005 - Lucent's 250x network improvement milestone.

I don't think that we disagree on the intuitive point about elasticity -- more cheap PCs will be sold than expensive PCs; more cheap servers will be sold than expensive servers.

I also don't think that we disagree on the intuitive point that the mainstream doubling of processing power predicted by Moore's law every 18-24 months, plus the 250x increase in network bandwidth predicted by Lucent for the next 5 years, will create a tremendous demand for storage networks, which assume natural primacy in the darwinian IT scheme of things because they contain the data lifeblood of any organization.
Strictly top-down inner sanctum stuff, in other words.
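For what it's worth, a quick back-of-the-envelope comparison of those two growth rates (my own arithmetic, not either company's):

```python
# Compare Moore's-law compute growth with Lucent's projected 250x
# network improvement over 5 years.
compute_doubling_months = 18            # optimistic end of the 18-24 range
compute_5yr = 2 ** (60 / compute_doubling_months)

network_5yr = 250
network_annual = network_5yr ** (1 / 5)

print(f"compute over 5 years: ~{compute_5yr:.0f}x")
print(f"network over 5 years: {network_5yr}x (~{network_annual:.1f}x per year)")
# Bandwidth outgrowing compute is exactly what pushes storage onto the network.
```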

Going back to my second point about Sun's predicament, one it shares with other server vendors: given those variables, how does a server company reorient itself when its cost structure and its sales force are geared towards the homogeneous sale of the server and the storage system? That makes it difficult to compete with independent storage vendors that already have a head start supporting all kinds of mainframes, UNIX and Wintel servers.

Sun's latest answer is to recruit a special sales force to go after EMC accounts.

Mission Difficult or Mission Impossible?<g>

To better appreciate that exquisite conundrum, contrast EMC's ability to support over 35 operating systems with Sun's ability to do so outside of its loyal Solaris base.
Some estimate that the truly hardcore (Apple-caliber) Solaris base is less than 10%.

IBM used to come in a poor second with support for only 5 operating systems, until it narrowed that missile gap with its most recent deal with Compaq, which pooled their billion-dollar [gulp] interoperability labs and provides cross-support for servers (including Alpha) and storage systems.

You also have the OEM relationship between HWP and Hitachi. A charismatic CEO with impeccable credentials (Enron/Lucent), speedy big boxes from Hitachi, and HWP's pedigree as THE Silicon Valley pioneer may yet get HWP's storage systems inside the data center.

As an aside, HWP middle-management has been making noises about how long it will take Intel's Itanium (co-developed with HWP) to penetrate the data center. Put with some circumspection, from the Intel point of view: too long. This gives you an idea of the high barriers to entry into the data center from the processing unit side, the storage side and the networking side.

I don't agree with the storage arguments on SI that say so-and-so can't sell against company X. The market data suggests that both SUNW and EMC could take market share from lots of others and double their sales to current customers, thus growing for some years without coming face to face in the arena. The incumbents tend to remain in office a long time. Everyone supports switched fabric now except for IBM, and it looks like they will limp out of the gate next month.

We agree that the market data shows that storage networking clearly is an early-stage market where many will prosper. I'm going out of my way to show you where we disagree so it doesn't get in the way of tracking the competitive positioning of the different players in the marketplace.
The fibre channel boards have been the richest source of investing ideas over the last 4 years, so I don't want to skunk it with even a hint of the type of religious wars that pollute most message boards.

Anyway, from the recent parade of boxes from EMC, IBM, Sun, HWP, Hitachi and Compaq, it looks like HWP/Hitachi came in with the biggest box (37 TB) and the pre-InfiniBand inside-the-box switched-fabric innovation, so it's going to be interesting to see how the players will try to leapfrog each other, especially with InfiniBand coming down the pike.

Incidentally, I believe that InfiniBand is going to take longer than most people expect, but it will probably be bigger than most people expect, too.