Technology Stocks : Ascend Communications (ASND)


To: djane who wrote (47013)5/16/1998 4:53:00 AM
From: djane  Respond to of 61433
 
This Year's 10 Hottest Technologies in Telecom (Part I) [ASND references in Part II]

telecoms-mag.com

What's on the radar screen this year for promising, new technologies
that will drive the next wave of development? Read on and find out.

Patrick Flanagan

Technologies come and technologies go. Mostly, however, like the
proverbial Top 40 hits, they keep on coming. At this juncture, the
industry shows no signs of lessening the attempt to find better ways to
conserve bandwidth, optimize transmission rates, and find new, more
efficient, and less expensive ways to accomplish basic communications
tasks. While it's true that there is some commoditization taking place in
local area and campus infrastructures, there are still plenty of challenges
ahead as Internet telephony continues its paradigm-altering migration into
the public network.


As is the practice every year, we talked with a wide variety of experts,
including the heads of leading telecom consulting firms. Their
recommendations varied widely, but our qualitative approach clustered
around a specific set of technologies where agreement was reasonably
assured. In commenting on the technologies themselves, we have also
tried to mention, where applicable, the leading-edge vendors who are
putting these technologies out in the marketplace.

Here in no particular order are Telecommunications magazine's
10 hottest technologies for 1998:

1. Multilayer Ultrafast Routing/Switching

2. Erbium Doped Fiber Amplifiers

3. Packetized Voice

4. LMDS for Broadband Interactive Services

5. Wireless Local Loop (WLL)

6. Internet Virtual Private Networks

7. Web-Enabled Call Centers

8. XML (eXtensible Markup Language)

9. TMN Technology

10. Java-Based Network Management

Multilayer Ultrafast Routing/Switching: The Need for Speed

With traffic on the Internet doubling every six months, backbone capacity
continues to be an urgent carrier concern. The newest solution is a new
generation of multilayer gigabit and terabit switch routers that are
emerging from the big four--Bay Networks, Cabletron, Cisco, and
3Com--as well as a number of start-up ventures. One impact is that the
differences between routing and switching are disappearing on IP
network backbones. The goal is to put as few switches and/or routers
within the backbone infrastructure as possible, while increasing line
speeds into the gigabit and terabit ranges. Terminology to describe this
emerging technology accurately is a problem. Currently, any number of
descriptors are being used, including Gigabit switch routers (Cisco),
multi-technology, Gigabit switching (Digital), terabit switch/router (Avici
Systems), and Layer 3 switching (13 vendors ranging from Acacia
Networks to Torrent Networking Technologies). The best umbrella term
to describe the technology involved is multilayer switching/ routing
because Layers 1 to 4 (SONET, frame relay, IP, and TCP) are all
involved, as is what at least one vendor calls "Layer 0" for the
deployment of wavelength division multiplexing (WDM) on fiber optic
backbones.

In addressing increased IP backbone capacity, the vendors are focused
on three areas: performance, scalability, and quality of service (QoS).
The primary performance concerns of latency and network congestion
are being addressed in terms of the impact on service levels. Recovery
from congestion and efficient allocation of bandwidth are driving product
development. Scalability is being improved by technologies and
platforms that can grow rapidly and elegantly as end users require more
and more bandwidth from the core network. No longer is it possible to
add another router or switch to resolve regional bandwidth problems.
QoS is the most pressing concern. Without it, service providers cannot
offer value-adds such as high-priority transmission of mission-critical
applications and managed LAN/legacy services. The solution is for
switches and/or routers to identify different service classifications of
traffic and handle each efficiently without incurring large amounts of
network overhead.

One emerging technology for addressing these performance, scalability,
and QoS issues is packet-by-packet Layer 3 (PPL3) switches that act as
full-blown routers at speeds up to 7 million packets per second. Another
approach is cut-through switching, which doesn't peer into each packet
as PPL3 does, but rather determines the destination from the first few
packets in a series and then switches the flow to Layer 2 once a
connection is made. Nick Lippis, president of Strategic Networks in
Rockland, Mass., likes PPL3 because "it means really fast routing
without having to change anything on your network." Other schemes
abound, such as Avici Systems' Direct Connection Fabric (70-Gbps
routing), the Pluris Massively Parallel Router (622 Mbps), Neo
Networks' StreamProcessor (2.5 Gbps), and Torrent Networking
Technologies' Shared Memory Switch Fabric (millions of packets per
second). Argon Networks and Nexabit Networks are also developing
products with similar capabilities. "These companies are coming at the
architectural approaches for gigabit and terabit rates from different
angles, depending on their backgrounds, while they're all trying to crack
the same nut," said Brendan Hanigan, an analyst with Forrester Research
in Cambridge, Mass.
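The cut-through scheme described above can be sketched as a flow cache: route the first few packets of a flow at Layer 3, then install a Layer 2 entry so the rest of the flow bypasses the routing lookup. This is an illustrative sketch only; the class name and the three-packet setup threshold are hypothetical, not drawn from any vendor's implementation:

```python
# Illustrative sketch of cut-through ("flow") switching versus
# packet-by-packet Layer 3 forwarding. The class name and the
# three-packet setup threshold are hypothetical, not from any vendor.

FLOW_SETUP_THRESHOLD = 3  # packets routed at Layer 3 before cutting through

class FlowSwitch:
    def __init__(self, routing_table):
        self.routing_table = routing_table  # destination -> egress port
        self.flow_cache = {}                # (src, dst) -> port, the Layer 2 path
        self.flow_counts = {}

    def forward(self, src, dst):
        flow = (src, dst)
        # Fast path: an established flow skips the routing lookup entirely.
        if flow in self.flow_cache:
            return ("switched", self.flow_cache[flow])
        # Slow path: route packet-by-packet while counting the flow.
        port = self.routing_table[dst]
        self.flow_counts[flow] = self.flow_counts.get(flow, 0) + 1
        if self.flow_counts[flow] >= FLOW_SETUP_THRESHOLD:
            self.flow_cache[flow] = port  # later packets are Layer 2 switched
        return ("routed", port)

sw = FlowSwitch({"10.1.0.0/16": 7})
decisions = [sw.forward("10.0.0.5", "10.1.0.0/16")[0] for _ in range(4)]
# first three packets are routed; the fourth hits the flow cache
```

A PPL3 switch, by contrast, would take the "routed" path for every packet, relying on hardware to sustain it at millions of packets per second.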

Erbium Doped Fiber Amplifiers (EDFA): Enter the All-Optical
Network

Last year, wavelength division multiplexing (WDM) was a hot
technology. Already, it is running up against limitations as technologies
such as multilayer switching demand greater bandwidth. Today as many
as 40 channels can be multiplexed on a single fiber--a vast
improvement from the original WDM capacity of four. However, each
time the number of optical channels is doubled, there is a corresponding
3dB reduction in signal gain. Enter erbium doped fiber amplifiers (EDFA)
with the ability to amplify several optical channels in the erbium passband,
creating what is becoming widely known as dense wave division
multiplexing (DWDM).
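The 3-dB figure follows directly from power sharing: if an amplifier's output power is split equally among the channels, doubling the channel count halves each channel's share, and 10·log10(1/2) is about -3 dB. A minimal sketch of that arithmetic (the equal-power-sharing assumption is a first-order simplification, not from the article):

```python
import math

def gain_penalty_db(channels, base_channels=4):
    """Per-channel gain change relative to a base WDM system, assuming the
    amplifier's output power is shared equally among all channels (a
    first-order simplification; real EDFA gain shapes are more complex)."""
    return 10 * math.log10(base_channels / channels)

# Each doubling of the channel count costs about 3 dB per channel.
penalties = {n: round(gain_penalty_db(n), 1) for n in (4, 8, 16, 32)}
# {4: 0.0, 8: -3.0, 16: -6.0, 32: -9.0}
```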

There's new EDFA technology becoming available as well. Silica-based
EDFA has been around for several years, eliminating the need for costly
electrical regeneration of an optical signal. Now the EDFA
flat-gain amplifier with advanced filtering technology to provide a flatter
gain curve is becoming available commercially. In the pipeline from
Lucent Technologies' Bell Laboratories is a new amplifier based on silica
EDFA and capable of supporting 100 wavelength channels having
100-GHz spacing. It uses two sections: One is optimized for
long-wavelength L-band channels (beyond 1565 nm), while the other is
optimized for conventional C-band channels (1525 nm to 1565 nm). The
addition of a second gain-equalizing, fiber-grating filter provides uniform
gain over the entire 80-nm spectrum.
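The 80-nm figure checks out arithmetically: near 1550 nm, a 100-GHz frequency spacing corresponds to roughly 0.8 nm of wavelength, so 100 channels span about 80 nm. A quick sketch of the conversion dλ ≈ λ²·Δf/c:

```python
# Sanity check on the spacing figures: near 1550 nm, a frequency spacing
# maps to a wavelength spacing of approximately (wavelength^2 * df) / c.
C = 299_792_458  # speed of light, m/s

def spacing_nm(center_nm, spacing_ghz):
    """Wavelength spacing (nm) for a frequency grid near center_nm."""
    center_m = center_nm * 1e-9
    return (center_m ** 2) * (spacing_ghz * 1e9) / C * 1e9

per_channel = spacing_nm(1550, 100)  # about 0.8 nm per 100-GHz channel
total_span = 100 * per_channel       # about 80 nm for 100 channels
```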

"EDFA is a key enabling technology in the continued evolution to the
all-optical network," said Thomas Fuerst, a Lightwave product marketing
manager for Alcatel Telecom of Richardson, Texas. One direction will be
to expand DWDM network technology into metropolitan-area networks
and eventually to LANs and local access.

In addition to Lucent and
Alcatel, several vendors are entering the EDFA marketplace. Ortel
Corp. of Alhambra, Calif., has a number of fiber optic transmission
products, particularly EDFA for 1550-nm broadband transmission and
continuous wave pump lasers for EDFA. ADC Telecommunications of
Minneapolis incorporates EDFA into its Homeworx HWX transmission
platform. Optigain in Peace Dale, R.I., produces Model 2100 EDFA for
SONET/SDH and DWDM systems as a pre-amplifier, booster, or
in-line amplifier.

Packetized Voice: Telephony By Other Means

Packetized voice has reached the point that voice packets are delivered
by cell or packet transport with the integrity and quality users expect
from the public-switched voice network, and it is still rapidly evolving.
Voice traffic is launched and received PC-to-phone, PC-to-PC, and
phone-to-phone, with each possibility offering a slightly different
application of the packetized voice technology. The marketplace is
currently focused on pirating consumer voice traffic from traditional
carriers by offering significantly lower prices while using the public
Internet as the primary transport mechanism. The pioneers are start-ups
such as Delta Three and VIP Calling, companies that worked the bugs
out of the technology and educated the public. Now Internet service
providers (ISPs) are entering the IP phone business. They see it as a way
to increase revenues beyond the commoditized $20 per month they
receive from all-you-can-eat Internet access services.

The real potential lies in the corporate use of packetized voice technology
over enterprise networks, particularly virtual private IP networks.

Two examples of this are currently in operation. A pioneer in this area,
NetWorks Telephony Corp. of El Segundo, Calif., offers a
PC-to-any-device calling service that delivers calls globally on and off the
Internet. The traffic is carried over Infonet's managed worldwide
backbone network on which voice packets are prioritized using a
proprietary multimedia cell technology. NetWorks Telephony is primarily
marketing its services to ISPs, including the building of gateways at ISPs'
points of presence (POPs)--a cost savings for ISPs that want to get into
the IP telephony business. Infonet, also of El Segundo and which was
spun off NetWorks Telephony as a separate entity, uses the same
backbone for its Integrated Media Services that target multinational
corporations; the company positions it as the "world's largest integrated
voice, fax, and data network offering service and local support in more
than 35 countries."

There continues to be a lot of debate about the viability of packetized
voice services. From the IP telephony viewpoint, Dataquest analysts
predict a $3 billion market by 2001. The corporate user world is
interested, according to analysts at Forrester Research, who found that
42 percent of 52 Fortune 1000 telecom decision-makers surveyed
planned to "experiment with" IP voice and fax by 1999. Whether or not
these predictions come true depends a great deal on how well the
switched voice carriers defend their services. "These are the cheapest,
highest quality networks in the world," said Thomas Pincince, founder
and executive vice president of New Oak Communications, which is now
part of Bay Networks.

LMDS for Broadband Interactive Services: Cable
Modems--Watch Out!

Local multipoint distribution service (LMDS) is a wireless, two-way
multichannel data, video, and telephony technology that could provide the
long-promised, ubiquitous broadband telecom nirvana envisioned for the
now rarely mentioned information superhighway. LMDS occupies one of
the broadest expanses of radio wave spectrum devoted to any one
service, a bandwidth of 1.3 GHz surrounding the 28-GHz Ka-band. This
is a much higher frequency than most existing wireless applications. Data
rates are in excess of 1 Gbps downstream and 200 Mbps upstream, and
since LMDS uses a very small cell configuration (2- to 7-mile radius), it
is able to polarize and reuse spectrum in a highly effective manner over a
small area. In addition, this enormous bandwidth means that LMDS is
not hampered by the interference found with other wireless systems.
(Line-of-sight blockages occasionally obstructed signals and storms
caused "rain fade" in early deployments. Newer versions of the
technology appear to have overcome these drawbacks.) Since there are
no wires or cables to deploy and maintain, the economics of LMDS are
one of this technology's strongest points. But unlike mobile wireless,
LMDS works only between fixed points--offices, towers, homes, and
other structures. Total sales for subscriber and network equipment are
projected by Insight Research Corp. of Parsippany, N.J., to be $3.1
billion in 2001.

The initial uses for LMDS are to distribute cable TV in areas where
in-ground infrastructure is too expensive to install. Related to this is the
use of LMDS as a less expensive way of bringing competition into
entrenched cable TV and local telephony markets. The longer range
prospect is for LMDS to blossom into the technology for linking
corporate campuses to the enterprise network at speeds rivaling those
now offered by fiber optic networks.

The critical step of FCC licensing is
now completed with an auction that netted $578.6 million. Venture
capital-fueled start-ups dominated the auction, as was expected, because
they got a 45-percent discount. Of the top 10 bidders, only US West
Communications in 10th place was an established carrier. In part, this is
because local exchange carriers, particularly the RBOCs, are only
allowed to purchase 10 percent of the LMDS spectrum in their service
areas. The top five bidders were WNP Communications, NEXTBAND
Communications, WinStar LMDS, Baker Creek Communications, and
Cortelyou Communications. WNP and NEXTBAND bid in excess of
$325 million for more than 2 million POPs, compared to $156 million for
the other eight top bidders.

Predictions about when LMDS will become widely available vary
greatly, particularly in geographic areas with a high density of existing
broadband capacity. "Realistically, 1998 is a year of definition and
positioning, and there will be many false starts. But next year, we will get
a much better picture of what LMDS is and what it's useful for," said
Richard Sfeir, director of marketing for CommQuest Technologies in
Encinitas, Calif. Laurence Swasey, an analyst with Allied Business
Intelligence in Oyster Bay, N.Y., predicted that business users will be
pragmatic, "looking to utilize LMDS if there are no other high-speed
carriage options available."

The marketplace will get more competitive as well, with wider availability
of cable modems, DSL, satellite systems, and wireless local loop. LMDS
has an edge because it is "viable at low-take rates," which is industry
lingo for the ability to make money with a low percentage of potential
subscribers. The start-up costs can be as low as $16 per household
within a typical service area with a population of 13,000, including seven
business cells and 22 residential cells. Profitability could be obtained in
the third year if 60 percent of major employers and 10 percent of
residences sign up. Those are big ifs, but they present a much rosier
scenario than can be constructed for any of the current in-ground
broadband alternatives.
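A rough sketch of the low-take-rate arithmetic, using the article's $16-per-household build cost and 13,000-person service area; the $40 monthly fee and the flat cumulative-revenue model are assumptions for illustration only:

```python
# Rough low-take-rate economics using the article's figures: $16 per
# household of build-out cost across a service area of 13,000 (the
# article's population figure, treated here as households for simplicity).
# The $40 monthly fee and flat revenue model are hypothetical assumptions.

HOUSEHOLDS = 13_000
COST_PER_HOUSEHOLD = 16
BUILD_COST = HOUSEHOLDS * COST_PER_HOUSEHOLD  # $208,000 up front

def annual_revenue(take_rate, monthly_fee):
    """Revenue per year at a given subscriber take rate."""
    return HOUSEHOLDS * take_rate * monthly_fee * 12

def years_to_break_even(take_rate, monthly_fee):
    """Whole years until cumulative revenue covers the build cost."""
    cumulative = -BUILD_COST
    years = 0
    while cumulative < 0:
        years += 1
        cumulative += annual_revenue(take_rate, monthly_fee)
    return years
```

Even at the 10-percent residential take rate the article cites, a modest monthly fee recovers the $208,000 build-out quickly under these assumptions, which is the sense in which LMDS is "viable at low-take rates."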

Wireless Local Loop: Getting Unwired

The idea behind wireless local loop (WLL) is "think small," which is the
exact opposite of the usual wireless logic. At this time, there are three
primary uses for WLL: to provide advanced services in urban areas and
within business complexes, to replace wireline services in residential
areas, and to bring telephony inexpensively to remote and underserved
areas, particularly outside the United States. The technology comes in
four flavors: analog, digital cellular, proprietary (broadband CDMA), and
low-power, or cordless, radio systems. Today and in the future, the
focus is on broadband CDMA and low-power radio except for
emerging, or "first phone," countries. The universal strength of all forms of
WLL is significantly lower costs than wireline services. On a per-line
basis, the cost for WLL ranges from $500 to $1000 and is decreasing,
while wireline telephony provisioning can run as high as $2500 per
subscriber.

The big WLL push for the future is in wireless office solutions deploying
low-power radio technology. Hughes Network Systems, Ericsson,
AT&T Wireless, AG Communication Systems, RogersCantel, and others
are actively launching in-building systems. The workforce can be reached
at anytime over a portable handset anywhere within the coverage area,
within a single building or over a campus. When the user leaves the
coverage area, the handset continues to function by being handed off to a
public wireless carrier on either 850-MHz cellular or 1900-MHz PCS.
In residential areas, WLL is being marketed as an alternative to wireline
POTS, but one with greater available bandwidth for data, particularly
Internet access. The hope is that the convenience of a single wireless
phone installation for use in and out of the home, combined with lower
rates for in-home use, will cause customers to abandon their twisted-wire
POTS connections. Since 1995, AT&T has offered its AirLoop system
as a way of meeting competition and curbing the maintenance and repair
costs associated with traditional wireless infrastructure. Now a large
number of manufacturers are involved, including Lucent Technologies,
Qualcomm, Motorola, and Nippon.

Estimates vary on how large the market will be for WLL. MTA-EMCI, a
research firm in Washington, D.C., projects 60 million installations by
2000. In developing nations, the estimate is $125 billion in revenues by
2004, plus another $10 billion from the industrially developed group of
seven (G7) nations. Melanie Posey, a senior analyst with Northern
Business Information in New York, predicted that "the market will begin
to take off after 1998, as an increasing number of wireline operators
incorporate WLL in their network expansion plans and as new operators
in competitive markets try for rapid market entry and value-added
mobility." She expected WLL to account for 17 percent of all local loop
construction by 2000, but noted that this number could be as high as 30
percent "if all cost, technical, and regulatory obstacles suddenly clear up."



To: djane who wrote (47013)5/16/1998 5:02:00 AM
From: djane  Respond to of 61433
 
This Year's 10 Hottest Technologies in Telecom (Part II) [ASND references in Part II]

telecoms-mag.com

Internet VPNs: The QoS Conundrum

Today's data communications traffic is beginning to resemble switched
voice traffic--for a lot of reasons: Users expect to be connected at will to
the enterprise network from their homes and while traveling; there is an
escalating need for communications among business partners, customers,
and suppliers; and there are increasing demands for access to the public
Internet. Existing corporate private data networks cannot meet this
demand for ubiquitous connectivity. The Internet VPN (virtual private
network) is emerging as the new WAN architecture by carving out a
private passageway through the Internet while using the public Internet
backbone as an appropriate channel for private data communication. The
objective is to create a single, seamless IP network for mission-critical
applications as well as remote access, closed user groups, public Internet
access, Internet-based commerce, and a Web presence.


"This requirement for pervasive data communications connectivity cannot
be met by existing corporate private data communications networks,"
said Michael Kennedy of Strategic Networks in Boston. Therefore, ISPs
will become strategic suppliers, a move that is "inevitable," he said. The
ISPs are pulling together the technology required to implement Internet
VPNs in terms of backbone architecture, security, and
configuration/management software. Among the providers are six major
U.S. ISPs--AT&T WorldNet, PSINet, GTE Internetworking (BBN),
networkMCI, Sprint, and UUNet. Global ISPs are emerging as well,
particularly Infonet, EQUANT Network Services International, Global
One, and IBM Global Network.

The key technology for successful
delivery of the Internet VPN is interoperable end-to-end QoS guarantees
for mission-critical traffic. "Too many QoS standards have been
proposed," Kennedy said. He cited RSVP, IPv6, ATM QoS, and
MPOA and predicted that a compromise of RSVP for signaling and
ATM for wide area QoS delivery will prevail. "This confusion over
standards will delay widespread adoption of Internet VPNs until the early
2000s," he said.

Not all of the ISPs agree. Infonet, by partnering with Nortel and
deploying its Magellan Passport switches across its private backbone,
now offers a hybrid Internet VPN in 38 countries with three classes of
service options. EQUANT has deployed the same technology across its
global backbone. Cisco, Bay Networks, Ascend, Cabletron, 3Com, and
many others are developing end-to-end QoS technology. The flaw in the
Internet VPN is the NAP (network access point), where traffic crosses
from one provider's backbone to that of another. "Once you hit a NAP,
pray," said Timothy Kraskey, marketing vice president for Ascend's core
systems division. There's a strong motivation to resolve the few stumbling
blocks to widespread availability of Internet VPNs. While flat-rate
Internet service is now a commodity, Internet VPNs are customized to
meet specific needs. "This provides a now-missing opening for an ISP to
achieve value pricing through targeted offerings," said Kennedy.


Web-Enabled Call Centers: A Match Made in Heaven

The marriage of call centers and World Wide Web sites is a match made
in telecom heaven. The Web page visitor, the potential purchaser, or
dissatisfied customer wants human contact when making a purchase or
resolving a complaint. The dream solution--a mouse click on a Web page
"talk" icon from a multimedia PC--creates a packetized voice channel, or
hot link, to the call center over the same circuit in use for the Internet
connection. "This is the modern equivalent of the door-to-door
salesman's foot in the door," said Kennedy of Strategic Networks.

The technology behind Web-enabled call centers is essentially the same
that permits IP telephony over the Internet. A key component is the
sound card now routinely supplied with a multimedia PC, which converts
the analog voice signal into the digital stream carried in IP packets. Near toll-quality is
achieved at 5 kbps (compared to the public-switched network's 64-kbps
standard) through digitizing algorithms and IP packet switching that
intrinsically exploits the pauses and silences of speech.
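The bandwidth arithmetic behind that claim: compressing 64-kbps PCM down to 5 kbps is nearly a 13:1 reduction even before silence suppression is counted. A sketch (the 50-percent speech-activity factor is a common rule of thumb, not a figure from the article):

```python
# Bandwidth comparison between circuit-switched PCM voice and compressed
# packet voice. The 50-percent speech-activity factor is a common rule of
# thumb for silence suppression, not a figure from the article.

PCM_KBPS = 64          # public-switched network standard (G.711-style PCM)
PACKET_KBPS = 5        # compressed IP voice rate cited in the article
SPEECH_ACTIVITY = 0.5  # fraction of a call that actually carries speech

compression_ratio = PCM_KBPS / PACKET_KBPS      # 12.8x before suppression
effective_kbps = PACKET_KBPS * SPEECH_ACTIVITY  # bits sent once silence is dropped
calls_per_pcm_channel = PCM_KBPS // effective_kbps  # compressed calls per 64-kbps slot
```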

Products for Web-based hot links are rapidly rolling out. Among the
recent announcements are MCI's Call Center Connection for
800-number business customers; Infonet's GIS voicelet service; the Call
Center solution developed by Executone Information Systems in
collaboration with VocalTec Communications and Dialogic Corp.; and
Siemens GEC Communications Systems' Call Server. AT&T and Sprint
also have announced Web-enabled call center products.

XML: Raising the Web's IQ

One of the better-kept secrets of the Web is that hypertext markup
language, universally known as HTML, is pretty dumb. There's a better
option ready to debut: eXtensible Markup Language, or XML. The fatal
flaw in HTML is that it has no attributes for handling the business world's
complex and endless demands for searching the Web. HTML makes
Web pages easy for humans to read but hard for search engines to
decipher in a meaningful way, particularly the pinpoint culling of
information to the specifications of demanding Web surfers who want 10
highly specific references, not 150,000 random keyword references.

XML will clear the way for countless new activities and applications that
are now too cumbersome because they require the Web to mediate
between heterogeneous databases, to shift a big portion of the
processing load from the Web server to the Web client, or to present
different views of the same data to different users. One example is the
purchase of discount airline tickets.
An intelligent agent will be able to search among XML price "tags" for
the most favorable fares and trade in tickets when lower fares become
available. XML developers will use these tags to describe Web page
contents according to agreed-upon categories, making searches much
more productive than the keyword criteria used today.
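The fare-search example can be sketched as an agent reading XML price tags rather than scraping HTML keywords. The element and attribute names here (fares, fare, carrier, price) are hypothetical, not from any actual airline schema; a minimal Python sketch using only the standard library:

```python
# Sketch of an agent searching XML price "tags" instead of keyword-matching
# HTML. The element names and fare data are hypothetical illustrations.
import xml.etree.ElementTree as ET

FARES_XML = """
<fares>
  <fare route="BOS-SFO"><carrier>AirA</carrier><price currency="USD">420</price></fare>
  <fare route="BOS-SFO"><carrier>AirB</carrier><price currency="USD">385</price></fare>
  <fare route="BOS-ORD"><carrier>AirA</carrier><price currency="USD">210</price></fare>
</fares>
"""

def cheapest(route):
    """Return (price, carrier) for the lowest fare on a route."""
    root = ET.fromstring(FARES_XML)
    candidates = [
        (int(f.findtext("price")), f.findtext("carrier"))
        for f in root.iter("fare") if f.get("route") == route
    ]
    return min(candidates)
```

Because the price is an explicitly tagged field rather than free text, the agent gets exactly the matching fares instead of 150,000 keyword hits.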

CommerceNet, a nonprofit Palo Alto, Calif.-based organization
promoting commerce on the Internet, is the driving force behind XML.
According to Chairman Marty Tenenbaum, XML is a "silver bullet, with
the real possibility of fundamentally restructuring the way a given industry
works." XML was created last year by programmers working for the
229-member World Wide Web Consortium (W3C) as a way to simplify
standard generalized markup language (SGML), or ISO 8879. It omits
the more complex and rarely used parts of SGML, making it easier to
use on a widespread basis. While the W3C is unsure exactly when XML
will be finalized, it will be this year. The big boost will come when
Microsoft's Internet Explorer 5.0 and Netscape's Navigator 5.0 appear.
Both will support full-blown XML, while Explorer 4.0 now supports
some XML applications. Microsoft is using XML for its new Channel
Definition Format for Web broadcasting and Netscape is integrating it
into the Gemini Web-design software suite, which is to be the
centerpiece in the Apple and Sun network computer platform they hope
will challenge the Windows de facto standard.

TMN: Toward A Smarter Network

With telecom networks becoming more intelligent, distributed, and larger
every day, a new management network architecture is emerging to fill a
compelling need for making carriers more competitive. The International
Telecommunication Union (formerly CCITT) and the European
Telecommunications Standards Institute have created a network management
framework and standards known officially as the Telecommunications
Management Network (TMN) for operations, administration,
maintenance, and provisioning. A primary objective is to support a
changing environment that includes voice services giving way to data and
multimedia, low-speed transport losing out to high-speed networks,
closed architecture being replaced by open standards, closed access
bowing to open access, and integrated systems supplanting
equipment-specific configurations.

TMN is an enormous and complex body of technical standards that
defines the functions and capabilities of the different levels of
communications management--business, service, network, and
element--and how they all inter-relate. The challenge for the telecom
management vendors is to assemble this complex body of standards into
viable products that can be readily deployed. The bottom line on TMN is
it clearly defines three essential areas of communications management: an
architecture that breaks "management" down into layers and groups of
functions (a triangle architecture), a methodology for defining the
management behavior of managed devices (GDMO, or guidelines for
definition of managed objects), and a set of protocols for management
information that defines a standardized interface at all seven open system
interconnection (OSI) model layers, with options for wide area and local
networking.

"For equipment suppliers, TMN standards are a mixed blessing," said
Keith J. Willetts, the president of NMF of Morristown, N.J., a global
consortium of service providers and suppliers focused on network
management. He believes that while standards help increase the market
for products, meeting the sophisticated TMN standards increases
product costs that customers are often unwilling to absorb. "This has led
to a slow acceptance of TMN standardization," he said. He believes this
is changing, however, with the need to integrate diverse management
systems into a cohesive whole, making it a mandatory requirement during
new equipment procurement. The strongest force behind TMN may be
the vendor community. NMF identified four component sets and the
companies actively developing them: TMN Basic Management
Platform (Bull, Digital, Sun, IBM, and HP); Multi-Network Bandwidth
Management (Newbridge, NCR, Siemens, MPR); Generic Alarm
Monitoring (Alcatel, Alantech, Digital); and CMIP/SNMP Interworking
(Bull, Sunsoft, Netmansys).

Java-Based Network Management: Using the Old Bean

"Java represents a real opportunity to consolidate existing disparate
network management protocols, while also broadening the view into all
layers from the applications down to the network device," said Mary
Slocum, a group marketing manager for Sun Microsystems, the Java
developer. Surprisingly, it's almost impossible to find anyone who
disagrees. The Java Management Application Programming Interface
(JMAPI) was specifically designed to simplify obtaining SNMP
information, particularly application management where the SNMP
footprint is too large for new hand-held communications devices. The
multilingual adaptability of Java matches up well with the management
needs of distributed networks. Of particular importance is the ability to
download executable code reliably to any device, while SNMP
programming languages such as Tool Command Language have not been
widely successful.

The implementation of JMAPI enables a network of devices running
small Java virtual machines (JVMs), which in turn permits a central server
to push new software simultaneously to every JVM-equipped device. An
unfulfilled, but quite feasible, promise is Java intelligent agents
that would travel the network monitoring performance. The current
emphasis is on assuring that JMAPI and other Java network management
tools can coexist with SNMP as well as leverage it to provide a radically
improved network management tool. One example is the management of
SNMP devices from the same protocol-independent manager as Java
devices. The SNMP management information base (MIB) describes the
network elements being managed. By translating these MIB definitions
and combining them into Java, programmers create reusable software
known as Java beans. These beans perform a wide range of functions,
such as spontaneous report generation. The ultimate objective is to have
Java beans become intelligent agents that lead to self-managed networks.
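The MIB-translation idea can be illustrated (in Python rather than Java, purely as a sketch) as generating a reusable managed-object component from flat MIB variable definitions; the class, the attribute bindings, and the sample values are hypothetical, not JMAPI itself:

```python
# Illustrative analogue of the MIB-to-bean idea: translate flat MIB
# variable definitions into a reusable managed-object component with a
# report method. The class and sample values are hypothetical; the OIDs
# shown are the standard ifInOctets/ifOutOctets identifiers.

MIB_DEFS = {
    "ifInOctets":  "1.3.6.1.2.1.2.2.1.10",
    "ifOutOctets": "1.3.6.1.2.1.2.2.1.16",
}

class ManagedInterface:
    """A reusable 'bean-like' component generated from MIB definitions."""
    def __init__(self, name, snmp_values):
        self.name = name
        # Bind each MIB variable to an attribute on the component.
        for var, oid in MIB_DEFS.items():
            setattr(self, var, snmp_values.get(oid, 0))

    def report(self):
        """Spontaneous report generation, as described in the article."""
        return f"{self.name}: in={self.ifInOctets} out={self.ifOutOctets}"

iface = ManagedInterface("eth0", {"1.3.6.1.2.1.2.2.1.10": 1200,
                                  "1.3.6.1.2.1.2.2.1.16": 800})
```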

There are a number of hurdles facing JMAPI and its Java-bred tools.
One is fitting into current environments and working with the existing
SNMP infrastructure. A second challenge is legacy systems and
equipment that could slow adoption. The time frame for overcoming this
problem will be determined by how quickly network managers upgrade
incompatible and less intelligent hardware. The good news is that once
networks are JMAPI-enabled, there is an infinite ability to upgrade
execution software on equipment. Confusion in the marketplace also
holds the potential to slow adoption. Already, Java has become a
language with several flavors, with Microsoft and Hewlett-Packard in particular
adding proprietary features. Microsoft, along with Computer Associates
International, Compaq, Cisco, Intel, HP and 3Com, is pushing
Web-based Enterprise Management (WBEM) as a "non-Java" protocol
that works with Web browsers. Java is a protocol-agnostic language,
raising questions as to how well WBEM and JMAPI will coexist.
Supporting JMAPI are Sun, Bay, Cisco, IBM, Novell, Platinum
Technologies, and 3Com. Sun, as would be expected, is working with a
wide variety of other vendors to overcome these potential hurdles,
including sworn enemy Microsoft. Microsoft is a licensed Java vendor
and has developed the JIT Compiler to recompile Java code as it is
downloaded to the network device, which speeds the process of
translation into native machine format.

Patrick Flanagan is a contributing editor with Telecommunications.
He can be reached via e-mail at pflanagan@mcimail.com.

1997's 10 Hottest Technologies

1. Web Broadcasting

2. Remote Access Servers

3. Extranets

4. Internet Telephony

5. Enterprise Network Directory Services

6. Web Site Management Tools

7. IP Switching

8. Wavelength Division Multiplexing

9. Digital Subscriber Lines

10. Higher Speed POTS Modems



To: djane who wrote (47013)5/16/1998 5:05:00 AM
From: djane  Respond to of 61433
 
This Year's 10 Hottest Technologies in Telecom (Part III) [ASND references in Part II]

telecoms-mag.com

In the Pipeline: More Technologies Due to Have an Impact

In addition to the 10 Hottest technologies, Telecommunications asked
analysts and industry experts to identify other developments that bear
watching.

Jim Mollenauer, president, Technical Strategy Associates,
Newton, Mass.:

Universal ADSL. The ADSL people have finally gotten their act together
and seem to have critical mass. Universal ADSL differs from standard
ADSL in a few ways, most notably in not requiring professional
installation: because it is splitterless, users just plug it into the
existing telephone jack. A disadvantage is that some phones will cause
noise on the line. To cure the problem, a user has to purchase a
filter, which is an inexpensive device and easy enough to work with.
When ADSL was invented, the expected application was video, and this
didn't work out. For Web access, however, ADSL is awfully fast.

Packet over SONET. It simply means TCP/IP without the ATM cells. One
reason the carriers put in SONET is that it makes maintenance of the
system relatively uniform. There are physical layer aspects that check on
the health of the network quite irrespective of what's running over it. The
carriers are going for packet over SONET, although access to dark fiber
can simplify matters by allowing ATM to run straight over the fiber without SONET.
Also, dark fiber is getting pretty scarce. Network loads due to the
Internet have gone up and sucked up the fiber that's already buried,
unless it was buried in the wrong place.

Rick Malone, principal, Vertical Systems Group, Dedham, Mass.:

Dedicated end-to-end transport over the Internet. A lot of the new
technologies, such as packetized voice, packet over SONET, and
ultrafast switching/routing, are being dictated by ISPs, not carriers.
Carriers are more concerned about standards and an orderly rollout of a
technology, whereas the ISPs are more focused on interfacing with end
users and providing tremendous performance without a lot of concern
about how this happens. Many IP networks today are a hodgepodge of
technologies with gateways that patch them together. ISPs are pushing
bandwidth providers to supply the facilities that guarantee dedicated
transport, particularly within the context of quality of service standards. If
Internet traffic performance is unpredictable to the end user, then the
ISPs will not be successful commercially over the long run.

Delivering raw bandwidth. Because of access to cable modems and
DSL, the general public is starting to gain faster access to the Internet
than most businesses. Businesses pay 10 times as much for a 56-kbps
dedicated access line as the public pays for a cable modem, which for
$40 a month can deliver 1 Mbps to 8 Mbps. Once fat access pipes such
as cable modems and DSL become more common, there will suddenly
be 40 million public users, as well as the current 8 million businesses, all
wanting raw bandwidth with at least 1-Mbps throughput. I don't see the
possibility of ever building a core network that is too big, and key
components will be huge switches and lots of fiber with advanced WDM
to split up the fiber.
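The price comparison above can be put in back-of-the-envelope form. This is an illustrative sketch: the $400 business-line figure is inferred from the "10 times as much" claim, the function name is ours, and both prices are rough 1998 numbers rather than quoted tariffs.

```python
# Effective monthly cost per Mbps of throughput (illustrative only).
def dollars_per_mbps(monthly_price, mbps):
    return monthly_price / mbps

business_56k = dollars_per_mbps(400, 0.056)  # dedicated 56-kbps line
cable_modem = dollars_per_mbps(40, 1.0)      # cable modem at 1 Mbps

print(round(business_56k))  # ~7143 dollars per Mbps per month
print(cable_modem)          # 40.0
```

On these assumed prices, the dedicated business line costs well over a hundred times more per megabit than the cable modem, which is the gap driving the demand for raw bandwidth described above.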

Herschel Shosteck, president, Shosteck & Associates, Wheaton,
Md.:

Wideband CDMA has been proposed as a new 3G (third-generation)
technology by the Japanese, and there has been a negative reaction from
the European Telecommunications Standards Institute, which wants to
protect GSM. At the same time, the regulators in Europe have allocated
wideband CDMA spectrum, putting the carriers in a quandary. They may not believe in 3G,
but they are compelled to say they favor this new wideband CDMA
technology so as not to alienate the regulators or worse--lose the
valuable new spectrum. The bottom line is a coming political barrage
supporting wideband CDMA as a marketing solution, while in actuality
many phone systems worldwide are barely into second-generation
wireless systems. This situation will be played out by GSM in its next two
iterations and CDMA in its next two generations, both delivering on
third-generation promises, while wideband CDMA struggles to find a
market.

Multimode, multiband wireless services. Over the next three years, there
will be a shift toward multimode, multiband wireless phones and base
stations. These will neutralize the religious wars over the wireless
interface. Carriers in rural areas, driven by demand for roaming services,
will install base stations that will serve subscribers at 800 MHz and 1900
MHz, using TDMA, CDMA, GSM, and AMPS. Over the next five to
10 years, as the price of chips drops, we'll see combined terminals and
base stations. When the wireless industry started in 1983, a base station
measured 8 feet by 8 feet by 20 feet and a wireless handset weighed
about 35 pounds. Now, the base station is a 35-pound compact
package and the handset weighs a few ounces.

Michael Kennedy, director of consulting services, Strategic
Network Consulting, Boston, Mass.:

Enterprise-wide network management. We did a project where one
enterprise network required 10 network management systems to provide
a complete solution. This in itself is unmanageable. The business side is
not well coordinated either. There are a lot of billing and reporting tools
available that are not being used. There's great potential for full
integration of all the management systems needed today for an
enterprise-wide network. Web-based network management tools are
part of the answer when there's a common interface to all of these tools
for the network manager. Report generation needs to be made a lot more
efficient as well. With outsourced Internet VPNs gaining in popularity,
you've got to be able to monitor your network more fully than in the past
because you've turned it over to someone else.

ISP operational support systems. The challenge of ISP networks is to get
the cost of administering them down. They need the equivalent of what
the telephone industry calls OSS, or operational support systems. The
challenge is to scale the Internet without adding a lot of people and
resources. This will come through bringing to the ISP and Internet worlds
the kind of technology that the telephone industry has taken for granted
for a long time, particularly for facilities management and maintenance. As
it stands now, the ISPs have sky-high operating costs and are losing
money.

Fred J. McLimans, chairman and CEO, Current Analysis,
Sterling, Va.:

Microwireless. Voice connectivity is moving down to the scale of a
microcell in the form of home-based or small-office-based wireless. This
is not quite the same as wireless local loop. It's a step smaller, usually
communicating within a 50-foot radius of the microwireless cell. There's a
big challenge to come up with a low-cost device that would actually
allow users to go from the home-based cell to the area-based cell to the
global cell with a single device and billing mechanism. Once this is done,
there will be a tremendous demand for number portability, and eventually
everyone will get his or her own unique telephone number at birth. This
will eliminate dependence on land lines, with the exception of providing
interactive bandwidth for data applications. There will be a demand to
expand the cellular phone into a PDA (personal digital assistant) soon
after microwireless takes hold.

Speech recognition with computer telephony integration. This isn't real
sexy stuff, particularly when it's used in PBXs or voice mail. It gets
interesting when you take it to the level of computer telephony. Users are
experimenting with speech-to-text capabilities to increase productivity
and free those who write reports, memos, and presentations from their
keyboards. Further ahead is the coupling of speech recognition with
PDAs and Internet access capabilities, both as a navigational aid and for
directory and information look-ups in combination with the enterprise
data network and the PBX. We're beginning to see true integration of
server-based PBXs into the enterprise network. This really starts to
enable real voice and data integration and the ability to have voice
recognition-based gateways for the transport of voice across the
Internet or IP data service networks.

How We Decide What's Hot and What's Not

For all who participated in the selection process, we set down these
guidelines for qualifying as a hot technology:

Hot means capable of entering the mainstream of telecom
systems/operations within one to two years, while now having sufficient
development dollars and industry support to become economically
viable.

Telecom is defined broadly to encompass cable, interactive media, and
other emerging forms of communication, as well as the standard inclusion
of voice, data, and video transmission.

A technology is not a product, but a product can execute a technology.
To qualify, a new technology also has to have the potential to make a
large impact on existing systems, operations, or procedures.

Reader Action Poll

Do you agree with our choice of the 10 hottest technologies? Are there
others that should be on the list? E-mail your comments to
tcs@telecoms-mag.com. We'll publish the results in a future issue.




To: djane who wrote (47013)5/16/1998 5:15:00 AM
From: djane  Respond to of 61433
 
The Brave New World [Overview of 1998 worldwide telecom deregulation]

Bhawani Shankar, May 1998

telecoms-mag.com

Even for the telecom industry, which is used to seeing events hyped, the
past few months have been a ceaseless blur of conferences claiming to
unravel the mystique of deregulation. What exactly is this phenomenon
and what, indeed, was the dawn of 1998 supposed to bring? Do any of
liberalization's lofty implications have any immediate bearing on the
services customers get and pay for?

Despite the many hours of debate at the podium, the answer has to be
that there are as yet no telltale signs of this long-awaited, much-debated
deregulation. Change, however, is not totally absent. It emerges in
different ways, depending on the operator. Large, dominant incumbents
will stress that it is business as usual, but they are nevertheless preparing
new game plans. Newer and the so-called alternate operators may like to
believe that the free-for-all that was promised in Europe as of January 1
this year is theirs by right, and they will be more demanding when they
encounter resistance from their larger neighbors in arranging bandwidth
and interconnects. Tangible changes that are more relevant to end users
will take their time coming.

The European Commission, Europe's legislative body, passed a whole
range of directives aimed at achieving 1998 objectives. But the
Commission has not had a good record in enforcing directives within a
reasonable amount of time. According to the Yankee Group, "Despite the
best efforts of the Commission, liberalization of telecom services in
Europe will continue at a very uneven pace and the gap between most
and least liberal nations will continue to be very wide for the foreseeable
future."

In fact, for an overwhelming number of customers in Europe, January's
deadline made little difference: Many markets, such as the United
Kingdom, have already made significant advances toward the European
ideal; at the other end of the scale are those, such as Greece, that have
deferred deregulation for up to five years.

Estimates of what customers can reasonably expect this year would need
to be based on several key issues. The first is an assessment of what the
incumbents perceive to be their realistic positions and threats, and how
they may react. In addition, the plans and objectives of competitive
operators will indicate what markets can anticipate. Both sets of results
would also need to be considered objectively in order to make a
calculated guess about the future.

Incumbent, Bent Double

For operators such as British Telecom (BT), still nursing its
disappointment over the MCI misadventure, it would seem that the
options are limited, but Chief Executive Peter Bonfield disagreed. "More
opportunities do exist, but we are not going to rush out and spend all our
money," he told a conference in London in late 1997. Speculation is rife
about what BT will do with the $7.4 billion that it earned by bowing out
of the MCI deal. New partnerships are being forecast with likely
candidates ranging from Bell Atlantic (currently acknowledged as the
most powerful Bell operating company), AT&T (still the largest U.S.
long-distance operator and a brand name to be reckoned with), and
Japan's NTT (full of technology and potential, but far from a surefooted,
streetwise international telco).

To make matters worse, the banking and analyst communities believe
almost without exception that the incumbent community is in for several
rude shocks over the short to medium term. "Incumbents will be at a
fundamental disadvantage. They have old technology, narrow product
ranges, higher unit costs, as well as regulatory restrictions," said Andrew
Harrington, the London-based managing director of Salomon Brothers.

"The majority will tend to be too complacent and there are sure to be few
big winners and some big losers," he cautioned.

In many cases, deregulation has meant price caps and these could remain
in force in some regions through the rest of the century. This was geared
to give competing operators sufficient financial muscle to invest in
technology and establish themselves before price caps are removed
entirely. "Being an incumbent is not easy," said Bonfield. "On the one
hand, shareholder value and price-earnings ratios are becoming key
performance indicators and on the other, the only way for us to move
forward is to move into international markets while retaining the United
Kingdom as a core business."

The incumbents need to be considered in the historical context of
deregulation. By the end of 1998, 80 percent of the world's telecom
markets are scheduled to liberalize. "This is a fundamental restructuring of
the world's fifth biggest industry," said Harrington, "and it has the full
backing of governments around the world." The combined directives of
the EU and the World Trade Organization (WTO), when fully ratified,
will cover more than 90 percent of global markets.
While some
governments may drag their heels and attempt to extend protection of
incumbents through regulation, licensing, and various other measures,
they will have to ultimately open up. The political costs of resisting reform
are too high and as such, reformation is an irreversible process.

For instance, telecom deregulation is becoming a reality even in Japan,
which has resisted it for as long as possible. Given the free market status
of its dominating electronics industry, the protectionist telecom sector had
been a sad contrast to its more liberal competitors. Having posted fervent
opposition (on the grounds that it would dilute its ability to retain its
leadership in leading-edge technologies), the incumbent, NTT, is now
reconciled to the growing competition. In fact, now that the tables have
turned, NTT's stated opinion is that restrictions should have been lifted
sooner. "We have been slow in opening up the communications industry,"
said Jun-ichiro Miyazu, NTT's president, explaining the key difference
between Japan's electronics and telecom industries. "Where there has
been restriction, progress has been slow. Protectionism does not help
commercialization, and we understand that we can no longer survive in
that environment."

A New Paradigm?

Telecom analysts, awed by the ground-breaking changes, are at a loss
for words to describe what the outcome might be. The words "paradigm
change" have been offered and, whatever the possible interpretations,
there is general agreement on what tangible effects may be seen at an
industry level. For one, about 3000 new operators are expected to be
created over the next couple of years; all of these companies will try to
whittle away at incumbent market shares and create new markets.

The majority of these new operators will be investing in technology that is
more advanced, cost-efficient, and flexible than what incumbents own,
including higher level SDH systems, IP over ATM, and so on.
In
comparison, incumbents will have no option but to retain PDH, copper,
and 800-MHz technologies in some parts of their networks. The
comparative cost-ineffectiveness of these technologies will have to be
reflected in bottom-line financial performance.

On the other hand, most alternative networks would be well capitalized
and have powerful backers. This means that new operators could offer
broader product ranges, have much lower unit costs, and be in a better
position to offer bundled services. As price caps melt away, the new
industry structure will, no doubt, give rise to a new economics. "In five
years' time, the telecom industry will look like any other free-market
industry," said Harrington. "Markets will become highly segmented,
marketing costs will increase, approaching those of popular branded
products in the retail sector, and the industry in general will have low
predictability."

Market segmentation trends are already apparent. Large incumbents are
grouping together to form consortia whose sole aim seems to be to serve
business customers. As experience has shown, this is where the most
revenue is at stake and this is where being big and global
counts. Low-revenue residential markets, on the other hand, are better
served by companies similar to the United Kingdom's cable franchisees.
Increasingly, residential markets will be served by cable operators or
similar companies whose emphasis is on a mix of telephony and
entertainment/interactive services.

In the business services area, the emphasis on network reliability and
end-to-end manageability will lead to the increased importance of owning
infrastructure. Leasing and interconnect arrangements will be the choice
of the typical reseller or small operator serving domestic or regional
markets. This is borne out in the case of WorldCom, which, in its early
days as the long-distance operator LDDS, was essentially a reseller and
rose to a $1 billion revenue level without owning significant
infrastructure.

WorldCom's strategy took a U-turn with its acquisition of MFS, a
company dedicated to high-revenue, city-centric business markets. As a
result, the international operator is estimated to spend about $50 billion
acquiring other telcos and building new fiber-based infrastructure. "We
are in the process of building the first, global broadband network," said
John Sidgmore, WorldCom's chief operating officer, adding that there is
more to come.

In fact, monumental changes are forecast in global backbones. The
commissioning of submarine cables and terrestrial fiber backbones over
the next two years will effectively quadruple the bandwidth on some
international routes and increase by an even greater factor on others.
Together with the fact that domestic services will benefit from the use of
SDH and other cost-efficient technologies, this is expected to lead to the
commoditization of voice. "We expect basic voice service tariffs to fall by
about 80 percent over the next five years," said Harrington. The billions
of minutes of voice traffic that will be carried on these networks by then
will mean that per-minute costs of plain old telephone service (POTS)
will effectively be zero.
This will emphasize the value-add that service
providers will have to offer in addition to basic packages.

Strategic Choices

These prophecies only help strengthen the view that incumbents will bear
the brunt of the market rationalization that is to come. Conservatively, if
the market grew by 5 percent per year and incumbents lost 4 percent of
their market share and costs rose by 3 percent during the same period,
the scenario for 1997 to 2002 looks frightening for the dominant
operator. In this simplistic experiment, a telco's earnings could decrease
by about 4 percent by the year 2002. "Bear in mind that BT is losing
about 10 percent of its residential customers to cable operators in their
franchise areas and this estimate starts to look very cautious indeed," said
Harrington.
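The "simplistic experiment" above can be worked through numerically. This sketch takes one literal reading of the article's ambiguous parameters (treating them as annual rates) and assumes a starting cost base of 70 against revenue of 100; it illustrates the squeeze mechanism, not the article's exact 4 percent figure.

```python
# Incumbent scenario, 1997-2002: market grows 5%/yr, the incumbent's
# share shrinks 4%/yr, and costs rise 3%/yr. Starting revenue and cost
# levels are assumptions for illustration.
YEARS = 5
revenue, costs = 100.0, 70.0  # assumed starting revenue and cost base

for _ in range(YEARS):
    revenue *= 1.05 * 0.96  # net revenue growth of roughly +0.8%/yr
    costs *= 1.03           # costs compound at +3%/yr

earnings = revenue - costs
# Revenue is nearly flat (~104) while costs climb (~81), so earnings
# fall from 30 to about 23 -- a steep decline for the dominant operator.
print(round(revenue, 1), round(costs, 1), round(earnings, 1))
```

The point of the exercise is that even near-flat revenue combined with compounding costs erodes earnings quickly, which is why Harrington calls the estimate cautious.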

In the business sector, too, incumbents will fare perhaps only marginally
better. Operators such as WorldCom and Colt are already a formidable
challenge for them. The degree of their vulnerability becomes apparent
when it is considered that one of Europe's largest operators, Deutsche
Telekom, relies on business customers--who represent just 4.5 percent
of its customer base--for 41.5 percent of its revenues. In fact, 18 percent
of its customers generate about 70 percent of total revenues.

More bad news for the incumbents is in the shape of recent research
done by the France-based OECD, which has found that tariff falls do
lead to higher usage but not enough to compensate lost revenue. During
the period 1985 to '95, when telecom tariffs fell by about 40 percent or
more, telecom revenue as a percentage of GDP remained more or less
constant in OECD countries. For the dominant carrier used to high
margins and protected markets, revenue loss is inevitable--perhaps even
just--and incumbents can, at best, only attempt to minimize losses.

"An incumbent's survival strategy should include writing-off old
technology and assets, increasing investment in new technology, and
trying to manage market share loss by growing into new, international
markets,"
said Harrington. Today the problem with most strategies on the
drawing boards is the failure to recognize that such revenue loss and
other associated changes are inevitable.

The Global and the Local

In the short term, Europe is expecting a flood of new operators and
investment, primarily from the United States and the Asia-Pacific region,
originating from a combination of private and public operators.
Medium-size operators who have achieved significant success, such as
Ameritech in the United States, are eyeing Europe as an obvious market.
Bell operating companies have expressed their frustration with the U.S.
Telecom Act's seemingly empty promise of access to long-distance
markets. "Many state-owned and formerly monopolistic operators in
Europe are privatizing and they will need our marketing skills," said
Ameritech's chairman and CEO Richard Notebaert. "Our investment in
Europe already has a book value of more than $5 billion."

Over time, the distinction between public and private operators will blur
and perhaps in the period after 2002, the industry is expected to rise to a
new exalted status of equilibrium with four to five global supercarriers
and between 2000 and 4000 domestic and regional operators. The
so-called supercarriers are expected to be the world's backbone
networks and the primary carriers while the smaller, second-level
operators provide domestic and feeder services. Does that mean the
medium-size operators that are not tied-in with one of the larger
consortia are an endangered species? Ameritech's Notebaert feels
otherwise: "Even though Ameritech may not be part of a global consortium,
there's always room for us to participate. No matter how big an operator
is, everywhere in the world is just too big."

Bhawani Shankar is managing editor of Telecommunications
International.





To: djane who wrote (47013)5/16/1998 5:21:00 AM
From: djane  Respond to of 61433
 
Frame Relay: Trends to Watch. High-speed frame relay provides a viable alternative to end users who are not ready to commit to ATM services

Mark Kaplan [NN], May 1998

telecoms-mag.com

One of the truisms of data communications is that the need for speed is
ever-increasing. People want things to happen faster than they did
before, and that demand expands to the limit of the available
mechanism's capabilities--and then beyond.

From its initial public service offering in 1991, frame relay has become
the data technology of choice for organizations around the world that
need to implement networks at speeds of T1/E1 and below. According
to the 1997 Frame Relay Market Study by Distributed Networking
Associates in Greensboro, N.C., there are now approximately 25,000
frame relay users worldwide with a total of about 270,000 ports in
production. Close to 70 percent of these ports are at the rate of 64 kbps
or below (DS0).

Bandwidth Drivers

Demand for increased bandwidth is emerging from both the residential
and business communities. New-generation applications such as
audio/video clips and interactive multimedia shopping and gaming sites
appearing on the Internet are gaining rapid acceptance with consumers.
We are in the midst of growth in both the density and volume of
applications, as well as geometric growth in the sheer numbers of people
accessing these sites. This will continue to grow as the new generation of
high-performance, low-cost PCs are snapped up--each with a 33k or a
56k modem--and these new users get on-line.

Inside the Internet, service providers are deploying high-speed
backbones to reduce congestion. Increasingly, users are demanding
56-kbps, T1, and even T3 connectivity for their Web servers. To keep
pace with these pressures, Internet service providers are turning to frame
relay to provide high-performance, cost-effective solutions to their
customers.

At the enterprise, the growing demand on corporate information systems
is creating the need for yet more bandwidth, especially in high-traffic
areas of the network. The richness of the content within the enterprise
network continues to consume all available bandwidth. As corporations
become geographically dispersed, users are increasingly less content with
WAN-speed bottlenecks and are looking for near-LAN speeds to all
information servers within the enterprise network. Additionally, the sheer
volume of data exchanged daily between major computer centers within
the enterprise is driving many corporations to look at trunk speeds in
excess of 2 Mbps.

In recognition of this market-driven need for greater capacities and
speeds, the Frame Relay Forum recently amended the User-to-Network
Interface (UNI) FRF.1.1 and Network-to-Network Interface (NNI)
FRF.2.1 implementation agreements (IAs) to meet users' demands for
access speeds up to 45 Mbps (DS3). The Frame Relay Forum is
currently developing an implementation agreement for multilink frame
relay (MFR) to address needs for capacities between T1/E1 and T3/E3.
MFR is a software-defined means of inverse multiplexing several
low-speed links to act as a single higher speed link. For example, a site
requiring a 6-Mbps access link could tie together four T1 links (or three
E1s) to create a logical 6-Mbps line at a cost below that of installing a T3
access line, depending upon local tariff structures.

The major frame relay switch vendors currently support frame relay
access and trunk speeds up to and including DS3 (45 Mbps). 1997 saw
some vendors announce support for OC-3 frame relay, with higher
speed support expected to come to market in the near term. Multilink
support will rapidly follow the adoption of the standard.

ATM and Frame Relay

Most carriers are offering ATM (asynchronous transfer mode) services in
parallel with their frame relay services with connection points between the
two. In fact, some frame services are offered with a higher speed ATM
service as the core transport mechanism between the frame relay switch
points. It is important to understand the two types of interworking
available. The first is network interworking (NIW), which can be thought
of as encapsulation or tunneling frame relay frames through the ATM
network to interconnect two (or more) frame relay attached devices.
With NIW, the variable length frames are segmented and packaged into
the payload of the ATM cells without disturbing the frame header
information. The increase in overhead is offset by the higher switching
speeds and the larger trunks interconnecting the ATM switches.
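The overhead trade-off of NIW segmentation can be quantified with the cell format described later in this article (48-byte payload, 5-byte header). This is a rough sketch with hypothetical function names; AAL5 padding and trailer details are ignored.

```python
import math

CELL_PAYLOAD = 48  # bytes of frame data per ATM cell
CELL_HEADER = 5    # bytes of ATM header per cell

def cells_needed(frame_bytes):
    """Number of ATM cells required to carry one frame relay frame."""
    return math.ceil(frame_bytes / CELL_PAYLOAD)

def bytes_on_wire(frame_bytes):
    """Total bytes transmitted once the frame is segmented into cells."""
    return cells_needed(frame_bytes) * (CELL_PAYLOAD + CELL_HEADER)

# A 1500-byte frame segments into 32 cells, or 1696 bytes on the wire:
# roughly 13 percent more than the original frame.
print(cells_needed(1500), bytes_on_wire(1500))
```

That extra 10-plus percent is the overhead the article says is offset by the higher switching speeds and larger trunks of the ATM core.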

If NIW can be thought of as tunneling, service interworking (SIW) can
be described as a translation service between frame relay and ATM.
Here, by mapping the frame header information into the ATM header, a
frame relay device can establish communications with an ATM device.

Together, NIW and SIW foster the coexistence of frame relay and ATM
and allow users to choose the technology that best meets the traffic
requirements and budget allowance of each site.

ATM vs. Frame Relay

Frame relay was initially designed to provide transport for
delay-insensitive data sent by higher level applications capable of recovering
lost or dropped frames. Frame relay has no inherent frame correction
mechanism; errored frames are simply discarded and higher layer
protocols determine what frames need to be re-sent. Since the frames
vary in length and traverse the buffers and matrices of the switches, delay
across the network is of a non-determinative nature. While prioritization
mechanisms are sometimes employed to provide differential services, to
date there are no true quality of service (QoS) levels available from frame
relay networks based on standard metrics. The Frame Relay Forum and
the ITU-T (International Telecommunication Union-Telecommunication
Standardization Sector) are attempting to address this issue
during the coming year.

In contrast, ATM was built from the ground up to provide differential
QoS levels (e.g., constant bit rate, variable bit rate, available bit rate,
unspecified bit rate) with consistent characteristics (at least for the
constant bit rate and real-time variable bit rate service levels). Utilizing a
fixed-length payload of 48 bytes and a 5-byte header, ATM switches are
able to provide highly determinative service classes each optimized for
specific types of data and applications.

The question, then, is which technology to use for a given network.

Frame relay is historically very good at transporting data which is not
highly dependent upon precise delivery intervals. Examples of this are
typical client-server database queries, e-mail and file transfers, and
broadcast video applications. ATM has typically been tagged as the
transport of choice for delay-sensitive information such as interactive
video and voice. However, recent enhancements to the basic frame relay
service have tended to blur some of the distinctions between the two.
FRF.11 has defined standards for carrying voice over frame relay.
FRF.12 has defined fragmentation procedures that create a more
determinative delay pattern by chopping large data frames into
smaller pieces that better match the size of the voice frames, thus
minimizing frame delay variation through the network.
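Why fragmentation bounds delay variation comes down to serialization time: a voice frame queued behind a large data frame must wait for the whole data frame to clock onto the link. This sketch is illustrative (the 80-byte fragment size is an assumed example, not an FRF.12 requirement).

```python
def serialization_ms(frame_bytes, link_kbps):
    """Time in ms to clock one frame onto a link of the given speed."""
    return frame_bytes * 8 / link_kbps

# On a 64-kbps access line, a 1500-byte data frame occupies the link
# for 187.5 ms; fragmented into 80-byte pieces, the worst-case wait
# for a voice frame drops to 10 ms.
print(serialization_ms(1500, 64))  # 187.5
print(serialization_ms(80, 64))    # 10.0
```

Shrinking the worst-case wait from nearly 200 ms to about 10 ms is what makes voice over frame relay workable on low-speed access lines.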

So, the choice of which technology to use where depends on the
character of the predominant traffic. If your network will be transporting
a high degree of voice traffic and/or near-broadcast quality video, ATM
is most likely the better choice due to its ability to support constant bit
rate (CBR) traffic as well as inherent broader bandwidth (currently up to
OC-12, or 622 Mbps, with plans to extend to OC-48).

If the dominant traffic type is non-delay-sensitive data (which could be
voice, video, fax, imaging, multimedia, file transfer, or e-mail), frame relay
is a better choice even if bandwidth in the range of 45 Mbps is required.
Frame relay has less overhead than ATM, is more readily understood by
most networking professionals, and is easier to install; frame
equipment and services are also generally less expensive than ATM
devices and services.

Topology Drives Speed

Many frame relay adopters have deployed their networks in a star
topology similar to previous leased line networks. This has allowed users
to realize an immediate benefit of frame relay--primarily lower cost. In
this configuration, one location (typically the corporate computer site)
receives traffic from numerous branches or remote locations. Since a
large number of relatively low-speed locations are concentrated for
delivery to this site, the bandwidth required at the central site is set at
some percentage of the total bandwidth of all remote sites.
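As a rough illustration of that sizing rule, the sketch below sums the access rates of a set of hypothetical 64-kbps branch offices and provisions the hub port at an assumed 50% oversubscription factor; both the site count and the factor are made-up values for demonstration.

```python
# Sketch: sizing the central-site port in a frame relay star topology.
# The hub port is provisioned at some fraction of the sum of the remote
# sites' access rates, on the assumption that not all branches transmit
# at full rate simultaneously. All figures here are illustrative.

def hub_bandwidth_bps(remote_rates_bps, fraction: float) -> float:
    """Central-site bandwidth as a fraction of total remote bandwidth."""
    return sum(remote_rates_bps) * fraction

DS0 = 64_000
remotes = [DS0] * 40                     # forty 64-kbps branch offices
hub = hub_bandwidth_bps(remotes, 0.5)    # provision hub at 50% of aggregate

print(f"Aggregate remote bandwidth: {sum(remotes)/1e6:.2f} Mbps")  # 2.56 Mbps
print(f"Hub port at 50% fill:       {hub/1e6:.2f} Mbps")           # 1.28 Mbps
```

In this example a single DS1 (1.544 Mbps) hub port would comfortably cover the 1.28 Mbps requirement, which matches the DS0-remote/DS1-hub pattern described below.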

Initially, customers ordered 64-kbps access (DS0) for remote office
locations and 1.544 Mbps (DS1) for the headquarters site. When traffic
or the number of locations increased and signs of congestion became
apparent at the headquarters site, customers had to order another DS1
facility (either as a separate circuit or as part of an NxT1 service)
from their service providers, which increased access cost and
required incremental hardware (ports) and customer premises equipment
(CPE). Now these same customers can obtain service at speeds up to
45 Mbps without requiring incremental hardware. The individual CPE
may need replacing or upgrading, but additional devices are typically not
required.

Integration with or migration to ATM may not be required in this
type of network, depending on the mix of applications and services offered.
Little would be gained by moving traditional or legacy applications to
an ATM network, despite its promise of greater bandwidth than is
available via high-speed frame relay.

High-speed frame relay provides a viable alternative for end users who
are not ready to commit to ATM services. Standards-based solutions
exist today for access at DS0, DS1, and DS3 rates. The Frame Relay
Forum is close to agreement on an implementation agreement (IA) for
multilink frame relay (MFR) covering rates between T1 and DS3 in
increments of either 1.544 Mbps (T1) or 2 Mbps (E1). With support on
the horizon for SONET rates beyond 155 Mbps, frame relay ensures that
your investment in equipment and services can be maximized into the future.
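As a rough illustration of the access rates such an MFR bundle could offer, the sketch below enumerates aggregate rates for bundles of T1 members up to the DS3 ceiling. It uses the standard T1 and DS3 line rates and, for simplicity, ignores bundle framing overhead.

```python
# Sketch: access rates a multilink frame relay (MFR) bundle might offer
# between one member link and the DS3 ceiling, in increments of one
# member. Standard line rates; bundle overhead is ignored for simplicity.

T1_BPS = 1_544_000
E1_BPS = 2_048_000
DS3_BPS = 44_736_000

def mfr_bundle_rates(member_bps: int, max_bps: int = DS3_BPS):
    """Aggregate rates for bundles of 1..N member links, up to max_bps."""
    rates, n = [], 1
    while n * member_bps <= max_bps:
        rates.append(n * member_bps)
        n += 1
    return rates

t1_rates = mfr_bundle_rates(T1_BPS)
print(f"T1 bundle steps: {len(t1_rates)}")                 # 28 steps
print(f"Top T1 bundle:   {t1_rates[-1]/1e6:.3f} Mbps")     # 43.232 Mbps
```

With T1 members, the bundle fills the gap between a single T1 and DS3 in 28 even steps, which is precisely the range the Frame Relay Forum's proposed IA addresses.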

Mark Kaplan is the senior marketing manager for frame relay
products at Newbridge Networks Inc. He is also the chairman of the
Market Development and Education Committee of the Frame Relay
Forum. He can be reached at (703) 736-5792.
