Technology Stocks : Frank Coluccio Technology Forum - ASAP


To: Frank A. Coluccio who wrote (43)10/10/1999 8:58:00 PM
From: ftth
 
Network Pressure -- E-Commerce Has Made Network Availability The Highest Priority And Downtime More Costly Than Ever.

InformationWeek, August 16, 1999 p44

Author
Riggs, Brian; Thyfault, Mary E.

Summary
The rise in electronic commerce means system downtime can cause a company's stock price to fall, as
happened to online auction service eBay, or lead to a government investigation, as happened after
several online stock-trading systems failed. IT departments have many new tools and techniques at their
disposal to keep their organizations' critical systems running. Redundant servers, server farms and
communications links are combined with caching, load balancing and policy networking to protect
against system failures. News service CNN connects each of its two data centers to a different power
grid, and each center has connections through three local telephone companies and several
long-distance companies. CNN Internet Technologies VP Monty Mullig says the company could lose one
link and still get through an average peak. CNN's site has sufficient bandwidth and server capacity to
accommodate three times its average load, which is vital for handling peaks due to breaking news.
Barnesandnoble.com's use of multiple server farms, 1-800-Flowers.com's load balancing strategy, and
the approaches of other electronic commerce sites are described.

Full Text
Downtime is a dirty word for IT managers. Their goal in building systems and networks has always been
to ensure that bandwidth and computing resources are available when needed. But the growth of
electronic commerce has raised the stakes. The penalty for system crashes and network failures is
greater than ever, and the loss of sales and customers is just the beginning. Now, downtime can result in
falling stock prices, as happened when eBay Inc.'s online auction site crashed in June, or in a government
probe, as when the Securities and Exchange Commission launched an investigation into a series of
crashes involving several online stock-trading systems.

"In the past, if a company's computer system went down for 10 minutes, that would not affect the worth
of its stock," says Norman Dee, director of network services at 1-800-Flowers Inc., which sells floral
arrangements and gifts over the Internet and operates an extranet for business partners. "But
corporations can no longer afford the embarrassment of their commerce servers not being available.
Because of this extraordinary exposure to doing business over the Internet and extranets, companies
such as ours have to ensure that there is constant availability."

To meet that requirement, IT managers are employing a host of new devices, techniques, network
designs, and services to improve the performance and availability of their commerce systems, networks,
and applications and reduce the likelihood that their sites won't be working when a potential customer
comes calling.

Risks And Rewards

The risks and the rewards have never been greater. Companies spend an average of $1 million to launch
an E-commerce site, with many costing between $6 million and $20 million, according to Forrester
Research. Whether they're selling products, services, information, or advertising, these companies are all trying to grab a piece of a rapidly growing pie. More than $18 billion worth of merchandise will be sold
via E-commerce in the United States this year, a figure that's expected to jump to nearly $53 billion in
2003, according to Forrester.

An obvious tactic for ensuring availability is redundancy-redundant servers, server farms, and
communications links. When redundancy is combined with other products and techniques such as
caching, load balancing, and policy networking, a company can make great strides to ensure its systems
and networks are always available, even if pieces should fail.

A prime example is the Web site operated by CNN, its global operation for news gathering and
distribution. CNN operates two data centers, each connected to a different power grid. Three local
telephone companies-MCI Metro, MFS, and BellSouth-have separate connections into each data center.
Long-distance companies also have high-speed links into each data center: two 155-Mbps pipes from
UUnet, one from Sprint, one from AT&T, and one on order from Qwest Communications, as well as a
45-Mbps circuit from GTE Internetworking.

"We could lose one and get through an average peak," says Monty Mullig, VP of CNN Internet
Technologies in Atlanta. CNN has designed its site, which generates advertising revenue, to have enough
bandwidth and server capacity to handle three times its average load. That's important for a site that
experiences unexpected spikes in traffic because of breaking news events, yet must always be available,
because news breaks around-the-clock.

CNN makes its news available over the Internet in many formats, including Web pages and streaming
audio and video. To get its content to users more quickly, CNN recently contracted with startup Akamai
Technologies Inc. to cache-or store-copies of its content on up to 600 servers located on the networks
of 20 Internet service providers. Akamai also monitors traffic on the Net and routes traffic over
communications links and to servers that are experiencing the least amount of congestion.

Because many of its customers want to hear audio reports or see video clips, CNN also uses aggregators
of streaming media to ensure that content is available when traffic spikes. During President Clinton's
impeachment trial earlier this year, requests for video downloads soared 100-fold. "It's tough enough to
build capacity that is three times our normal mode," Mullig says. "We couldn't deliver the video and serve
up our regular content."

CNN sends its video content to Intervu Inc. and its audio content to Broadcast.com Inc. Intervu has
servers located on the networks of Level 3 Communications Inc. and DBN Corp., and Broadcast.com has
servers on the Level 3 network, providing alternative paths to stream CNN's audio and video news
reports over the Internet.

That may not be enough. CNN is considering setting up a third data center at a carrier's site to improve
the site's performance and availability even more. But Mullig vows to keep most functions in-house. "We
think it's a competitive necessity to know the latest and greatest technology," he says. "Network
providers just know how to provide facilities, but they don't know how to engineer really big sites."

Mullig won't discuss the specific costs of running a multimillion-dollar Web site such as CNN's, but
analysts say the site is one of the best in making use of technology and techniques to ensure availability.
"They understood right up front everything about reliability, scalability, and the handling of events," says
Al Lill, an analyst with Gartner Group. "You can never throw enough money at these things."

Another approach to redundancy is multiple server farms. Barnesandnoble.com LLC, the online arm of
the nationwide bookstore chain, operates a cluster of commerce servers in New York and maintains
replicated servers at hosting facilities operated by America Online and Cable & Wireless plc.

Barnesandnoble.com uses Cisco Systems' DistributedDirector, a load-balancing system that starts at
$17,000, to direct traffic to commerce servers that can best process transactions and serve up requested
information, says Robert Dykman, VP of technical infrastructure. Load-balancing devices intelligently
route traffic to servers or server farms that have the most resources available to process the
transactions and keep traffic from hitting servers that are overly busy or that have failed.
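
The core idea behind these devices can be shown in a few lines of code. The sketch below is not
Cisco's DistributedDirector or any vendor's actual algorithm, just a minimal "least connections"
balancer with made-up server names: it skips servers marked unhealthy and hands each new request
to the survivor with the fewest active connections.

# Minimal sketch of least-connections load balancing (illustrative only,
# not any vendor's implementation). Server names are hypothetical.
class Server:
    def __init__(self, name):
        self.name = name
        self.active_connections = 0
        self.healthy = True

class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.servers = servers

    def pick(self):
        # Ignore servers that failed their last health check.
        candidates = [s for s in self.servers if s.healthy]
        if not candidates:
            raise RuntimeError("no healthy servers available")
        # Route the request to the server with the fewest active connections.
        best = min(candidates, key=lambda s: s.active_connections)
        best.active_connections += 1
        return best

    def release(self, server):
        server.active_connections -= 1

farm = LeastConnectionsBalancer([Server("web1"), Server("web2"), Server("web3")])
s = farm.pick()          # route one request
print("routed to", s.name)
farm.release(s)          # connection finished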

Other load-balancing products work on the local level, routing traffic and maximizing server availability
within a server farm. The Motley Fool Inc., a provider of online investment information, installed a Big/ip
load-balancing appliance from F5 Inc. when it recently updated its commerce site. The $9,990 device
provides redundancy and fault tolerance and makes the commerce site easier to administer and
maintain, says Dwight Gibbs, Motley Fool's chief technologist. "If we want to do maintenance on one box,
we're able to take servers in and out of rotation easily," he adds.

Floral Balance

Load balancing can also help commerce sites that have different classes of customers. Take the
fast-growing 1-800-Flowers.com commerce site, which boosted the number of its commerce servers
from 30 to 80 last year to keep up with increased Internet traffic and transactions. In addition to
processing transactions from consumers, the servers also support an order-processing system called
BloomLink that lets affiliated florists process and fulfill orders as they come into the 1-800-Flowers.com
system. Business partners logged on to the BloomLink extranet can access their orders, download
information to their point-of-sale system, and process other database-intensive information.

To ensure that servers are available when business partners need them, 1-800-Flowers.com has
deployed load-balancing software from IPivot Inc. The software has improved the performance and
overall reliability of the extranet by balancing transactions and requests among multiple servers, says
director of network services Dee.

But the performance of servers accessed by consumers is more important, he says. "I can afford a few
minutes of downtime with my business partners because we're in business together-we're a family. But if
I'm not available for my consumers, they'll go somewhere else."

So Dee is trying a new tactic: offloading encryption processing from the commerce servers. Setting up a
secure, or encrypted, connection to process an E-commerce transaction is a common feature on Web
sites today. But establishing encrypted connections can hurt performance. Dee estimates that every time
a server uses HTTPS, a protocol for accessing a secure Web server, to process an encrypted transaction,
the server's processor is 10 times more active than when it's simply processing requests using HTTP.

To improve performance, 1-800-Flowers is using a Commerce Accelerator from IPivot to intercept
requests for secured connections before they reach the commerce server. The $12,995 device then
establishes the connection and encrypts the traffic using HTTPS or Secure Sockets Layer, another security
protocol, freeing up commerce server resources for processing transactions or providing requested
information.
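
A rough sketch of what this kind of SSL offload means in practice (illustrative only, not IPivot's
product; the certificate files, ports, and back-end address are assumptions): the device in front
terminates the encrypted connection and does the cryptography itself, so the commerce server behind
it only ever sees plain HTTP.

# Sketch of SSL/TLS offload: terminate the encrypted connection here and pass
# plain HTTP to the commerce server, so its CPU is not spent on crypto.
# Illustrative only; certificate paths, ports, and the back-end address are
# hypothetical.
import socket, ssl

BACKEND = ("10.0.0.10", 80)          # plain-HTTP commerce server (assumed)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # hypothetical cert files

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 443))
listener.listen(5)

while True:
    raw_client, _ = listener.accept()
    client = ctx.wrap_socket(raw_client, server_side=True)   # decrypt here
    backend = socket.create_connection(BACKEND)
    request = client.recv(65536)      # naive single-read proxying
    backend.sendall(request)          # forwarded in the clear on the LAN
    response = backend.recv(65536)
    client.sendall(response)
    backend.close()
    client.close()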

Another technique for ensuring that certain customers get good service, even when a site is overwhelmed
with traffic, is policy networking. "Until now, we've been able to afford outages-we haven't liked them,
but we could handle them," says John Dodds, a senior systems administrator at Financialweb.com Inc. in
Orlando, Fla., which provides a variety of financial information on its Web site. That attitude changed in
July when the company launched an E-commerce application that pushes real-time stock quotes and
other financial data to paying subscribers. "Now it's absolutely critical to be up 100% of the time," Dodds
says. "We have some real-time traders as customers, and for some of them, just a five-second delay can
amount to 15% profit or 15% loss."

Dodds' goal is to ensure that paying customers have server resources available to them regardless of
how many nonpaying visitors are accessing information from Financialweb.com's systems. To
accomplish that, the company has implemented a policy-based network that can prioritize certain
customer traffic over others using "application-aware" switches from Alteon Inc. that recognize specific
types of traffic and applications. The switches give priority to all traffic that's encrypted, which comes
from paying customers who have secure connections between their PCs and the Financialweb.com
servers. In addition, subscribers to its Premium Professional services will have priority over other paying
subscribers, letting Financialweb.com set up multiple levels of service for different customers.
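
A small sketch of the policy idea, not Alteon's implementation (the class names and the
port-based classification rule are assumptions): requests are sorted into priority queues,
encrypted traffic outranks unencrypted traffic, and premium subscribers outrank everyone else.

# Sketch of policy-based prioritization: classify each request into a service
# class and always drain higher-priority queues first. Illustrative only.
from collections import deque

PREMIUM, PAYING, FREE = 0, 1, 2        # lower number = higher priority
queues = {PREMIUM: deque(), PAYING: deque(), FREE: deque()}

def classify(request):
    if request.get("premium_subscriber"):
        return PREMIUM
    if request.get("dst_port") == 443:  # encrypted => paying customer (assumed rule)
        return PAYING
    return FREE

def enqueue(request):
    queues[classify(request)].append(request)

def next_request():
    for level in (PREMIUM, PAYING, FREE):
        if queues[level]:
            return queues[level].popleft()
    return None

enqueue({"dst_port": 80})
enqueue({"dst_port": 443})
enqueue({"dst_port": 443, "premium_subscriber": True})
print(next_request())    # the premium subscriber's request is served first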

For some E-commerce site operators, speed is just as important as availability. Losing a customer
because a page takes too long to load is no different than losing one because a commerce site is
unavailable. "We're always trying to increase the speed of delivery of our content," says Motley Fool's
Gibbs.

The traditional-and costly-solution to that problem has been to add more bandwidth to the network. If a
56-Kbps line is overloaded, upgrade to a T1 (1.5-Mbps) line; if the T1 fills up, add another one or two,
or jump to a T3 (45-Mbps) connection. However, that approach can be expensive; a T1 line typically
costs around $1,500 a month, while a T3 line can cost $20,000 a month.

But more bandwidth doesn't solve all problems when it comes to E-commerce because bigger pipes
can't overcome performance bottlenecks caused by the Internet itself. "Much of the delay in getting
content to our customers is latency in the Internet," Gibbs says. "It just takes a long time to get through
all the router hops."

That's why some companies are trying to put frequently accessed data closer to users, especially
bandwidth-intensive content. For example, Infoseek Corp., an Internet information, search, and
commerce site, found that graphical data stored on its commerce servers was slowing the
online-purchasing process.

"Anything that's offered for sale on our site is going to have some kind of visual element to it," says
Infoseek CIO David Chamberlain. "Whether it's a bouquet of flowers or an antique dresser, customers are
going to want to look at images of it, compare different sizes, and examine the front and back and
bottom."

Infoseek and companies such as the Motley Fool and CNN are putting graphical images on servers
distributed throughout a caching network maintained by Akamai or by competitors such as Sandpiper
Networks Inc. and Inktomi Corp. When an Infoseek customer is browsing merchandise, images of the
products will be provided by an Akamai server that's closest to the customer or by one that has the
clearest path to the customer. When the transaction is ready to be placed, the customer will be
connected directly to an Infoseek server. Chamberlain estimates that the technique boosts download
speeds by 20%.

"That way we can intelligently combine our in-house E-commerce application with Akamai's ability to
get images to people faster," he says. "Every time a customer wants something, our servers aren't going
to be tapped. By decentralizing part of the serving, we're making our bandwidth more available for other
things."

Motley Fool is using the same approach. Akamai's caching network stores and delivers to subscribers the
graphical and other bandwidth-intensive material while Motley Fool's servers provide customized
information and process transactions made on its FoolMart commerce site, which sells newsletters,
software, hats, T-shirts, golf balls, and other curios.

Trend Toward Outsourcing

Infoseek's and Motley Fool's use of a service provider to improve performance illustrates the growing
trend to outsource all or part of an E-commerce site. And a host of network operators offers services to
help companies build and manage commerce sites.

Many telecommunications carriers and service providers, such as DBN, Digex, Digital Island, Exodus
Communications, Frontier Communications, Level 3, Qwest, and USinternetworking, let companies
locate commerce servers on their networks. Many of these carriers will manage, and in some cases even
own, the servers and applications. They can also provide fail-safe facilities with redundant power
supplies and fireproof buildings.

Outsourcing E-commerce sites is starting to gain momentum. "We're getting more requests for
high-availability solutions than ever," says Mitch Ferro, director of product management for Internet
hosting at UUnet. "Customers are going to expect more and more high-end operations. They don't want
to be the next eBay."

For some companies, the main appeal of third-party service providers is convenience. Turnkey
commerce services "make a commerce server much easier to maintain," says Mark Barbier, director of
solutions development at ExecuTrain of Phoenix, a unit of nationwide software-training firm ExecuTrain
Corp. The Phoenix operation wants to sell its training classes and training books on the Internet while its
parent company is implementing a companywide E-commerce initiative. But Barbier doesn't want to set
up a server himself; he'd rather subscribe to Yahoo Shopping. "We have people here who could handle it,
but they're in classes eight hours a day," he says.

Doing More With Less

Companies with limited resources facing uncertain demand also find offerings from service providers
appealing. Liveprint.com Inc., a startup online printing company, just launched a Web site that lets
customers quickly create and print custom stationery, business cards, and other documents such as
restaurant menus.

Rick Steele, president and CEO of Liveprint.com, says he hired USinternetworking to operate the
company's commerce site for two reasons. He wants employees to focus on developing new content and
services, rather than on managing servers. Also, he was unsure how much traffic to expect. "I'm not
worried about getting two visitors-I'm worried about getting 20,000 visitors," Steele says.

That problem is in the hands of USinternetworking, which manages the site's hardware, software,
applications, and integration to back-end systems. And USinternetworking, which owns the hardware,
says it can double Liveprint.com's server capacity and triple its 1.5-Mbps network connection within four
hours without human intervention. In addition, USinternetworking stress-tested Liveprint.com's
applications, cutting two months off the company's time to market.

Other commerce-site managers echo Steele's comments. "Outsourcing frees up engineering resources to
improve the quality of the shopping experience on our site," says Tom Chow, VP of technical
development for Reel.com Inc., a movie information site that sells videotapes and DVDs. Reel.com
outsources its site to Exodus.

"We're not a technology company, we're a marketing company," says Steve Furst, CEO of NetGift Registry
Inc. In June, the Durham, N.C., company launched an online gift-registry service that connects customers
to more than 500 retail and charitable organizations. The site can handle 1.5 million members and
1,500 concurrent users.

NetGift signed a three-year, $2 million contract with USinternetworking that lets the service provider
take responsibility for systems hardware and the applications and databases running on six Sun Solaris
servers and EMC storage systems deployed in Annapolis, Md., and Milpitas, Calif. "This is the type of
agreement that enables us to say there is no finger-pointing here," Furst says.

One of the key advantages offered by service providers is experience-both good and bad. "They've been
burned, they've had bad experiences, and they've learned from those things," says Dermot Pope, systems
development manager for Talpx Inc., which created a lumber-exchange site for the $50 billion
softwood, lumber, and paneling industry.

The Chicago company's site uses a Web front end to bring together buyers and sellers and generate
purchase orders, invoices, and payments. But the company ran into problems getting its Web front end
to communicate with its database during the final phases of implementation. UUnet, Talpx's service
provider, brought in engineers who described similar problems other customers had experienced. That
information provided clues for Pope to find a solution. "They were doing more than fulfilling a contract,"
says David Adams, vice chairman of the lumber exchange. "They worked with us."

UUnet also manages Talpx's 10 servers, which are connected by 100-Mbps Fast Ethernet links, its
fault-tolerant infrastructure and load-balancing software, and its PeopleSoft application. Pope likes the
round-the-clock support provided by an outsourcer. "I don't want a million-dollar machine room sitting
in our Chicago offices," he says. "I want somebody watching those machines."

The Right Provider

While outsourcing can relieve many of the headaches involved in running an E-commerce site, the key is
finding the right service provider. BidCom Inc., which provides an online application for managing major
construction projects, says it once employed a carrier-which it would not identify-that was down 10% of
the time and lost packets of customer information. "They didn't see it as a big issue, and they wouldn't
elevate it," says Sal Chavez, co-founder and executive VP of the company.

When BidCom started to provide more services on its site, including the ability to buy materials, it turned
to the Web-hosting service offered by Digital Island. In addition to hosting and managing BidCom's
commerce site and data center, Digital Island helps the company's clients if they are having trouble
accessing the site. Digital Island can also provide detailed reports on who is accessing the site.

A bad service provider can be costly, says Todd Walrath, senior VP of online services for Weather.com,
the Weather Channel's Web site and one of the top-20 revenue-generating sites. Walrath says his
previous hosting service didn't focus enough on uptime, so he switched to Exodus.

"Exodus' entire company is organized to design better Internet connectivity and service to their hosting
customers," he says. Exodus offers a Java tool that lets site managers view the status of their servers from
any Web browser. Exodus also provides a caching service that can duplicate content on servers in
different regions, improving response time for a widely scattered group of users.

Exodus provides the Weather Channel with two to three times more capacity than it needs on an average
day, and can easily add more. The Weather Channel has about 6 million page views a day, but expects 30
million to 40 million views when hurricanes start hitting in the next few weeks. Says Walrath, "As more
people get on the Web, the price of being down for a day or even for a minute is becoming a really big
deal."



To: Frank A. Coluccio who wrote (43)10/10/1999 9:02:00 PM
From: ftth
 
Now's the Time to Warm Up to Web Switches.
Data Communications, August 7, 1999 p17

Author
Lippis, Nick

Summary
Competition among competitive local exchange carriers (CLEC) is making WAN bandwidth cheaper, and
web servers are increasing in value as more businesses rely on them. The convergence of these trends
makes web switches more valuable components of enterprise networks. Web server farms are often an
amalgam of Intel-based PCs, each performing a specific network function, but second-generation web
sites need more efficient server architectures to provide the performance electronic commerce
applications require. Web switches will likely evolve much as routers evolved: from single-function
devices to integrated distributed processors with higher performance and easier management. Allaire,
IBM and Resonate are among the vendors offering software-only solutions for connecting web server
farms to the Internet, but they do not scale to large enterprises. Cabletron Systems, Cisco Systems,
Extreme Networks, Nortel Networks and other Layer 3 switch vendors will likely implement web
switching in their products. Leading web switch vendors are Alteon Websystems, Arrowpoint
Communications and Foundry Networks, with Alteon's 700 web switch a standout.

Full Text
Let's start with something simple. The Internet isn't going to stop growing anytime soon; it represents a
fundamental economic shift for the U.S. and countries around the world. So what does that mean to
network architects? The 'Net needs a special piece of equipment: I call it a Web switch. And now's the
time to get familiar with its functions. Consider a couple other trends. WAN bandwidth is getting cheaper
and cheaper, thanks to the more than 1,000 CLECs (competitive local-exchange carriers) now competing
for new customers. What's more, the value of Web servers is skyrocketing as they become essential
business drivers. Now let's add one and one together: If the price of bandwidth is going through the floor
and the importance of Web servers is going through the ceiling, Internet traffic-and Web
transactions-have only one place to go: up. Savvy network planners will start reviewing where particular
Web tasks are performed, making sure their company's business is free to scale. If they concentrate on
the space between WAN routing and Web servers, they'll be able to see where Web switches will prove
their value.

E-Everything

Before I divulge the details of Web switching, let's take a look at what's spurring this market on.
E-commerce concerns such as amazon.com, e-bay, e-toys, e-trade, Ticketmaster Online, and the rest
are too hot to handle. The rest of the world can't wait to get into some sort of virtual business.

Have you ever seen the typical Web server farm? It makes the bedroom of a 14-year-old hacker look tidy
by comparison. Before a packet can even reach a server, it has to traverse a router, bandwidth manager,
global load balancer, firewall and VPN server, local load balancer, and Layer 2/3 switch-which finally
passes it along to a cluster of Web servers, application servers, data servers, and backup storage.

That's a lot of devices along the way, and these days most of them are standalone. So for virtually every
one of these network tasks, there is an Intel-based PC performing the function. This approach may have
been fine for first-generation Web sites, but second-generation sites must lower the interference and
eliminate these separate management points.

How can I be so sure? Let's look at performance. If a Web server farm is connected to the Internet via T3
(45 Mbit/s), it could be called upon to service some 16,700 sessions per second. Most standalone
devices will choke on the load. In fact, as the number of connection requests per second increases, the
total number of successfully established sessions actually falls because they're being handled by a
single-processor architecture. That's why I expect Web switches to repeat the evolution of routers:
Single-function devices will give way to integrated distributed processing. Performance will climb while
management will become less complex, reducing associated risks.
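
As a back-of-the-envelope check on that 16,700 figure (my own arithmetic, not the author's), it is
roughly what you get if you assume each session costs on the order of 340 bytes on a fully loaded
45-Mbit/s link; the per-session size is purely an assumption made to reverse the calculation.

# Back-of-the-envelope check on the 16,700 sessions/s figure. The average
# bytes-per-session value is an assumption, not from the article.
link_bps = 45_000_000              # T3 payload, bits per second
bytes_per_session = 337            # assumed average (small HTTP request)
sessions_per_second = link_bps / (bytes_per_session * 8)
print(round(sessions_per_second))  # roughly 16,700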

Memory Lane

Remember, routers started out running on Unix servers. It wasn't too long before everyone figured out
that this approach wouldn't scale and that servers should be focused on applications rather than
counting packets. As routers grew more sophisticated, software upgrades and hardier hardware boosted
their abilities and ultimately separated forwarding from route calculation.

We already have several schemes for connecting Web server farms to the Internet. The first is a
software-only solution that handles most of the tasks mentioned above, from vendors like Allaire Corp.
(Cambridge, Mass.), IBM, and Resonate Inc. (Mountain View, Calif.). While their products may be fine for
small sites with one or two Web servers, they clearly will not scale to the needs of large enterprises. Then
there are the IP appliances-bandwidth managers, load balancers, firewalls, and VPN terminators. This is
a very fragmented market, and there's no single way to manage all these devices, which both increases
complexity and adds risk.

I also expect all the established Layer 3 switch vendors to implement these tasks on their products:
Cabletron Systems Inc. (Rochester, N.H.), Cisco Systems Inc. (San Jose, Calif.), Extreme Networks Inc.
(Santa Clara, Calif.), and Nortel Networks Corp. (Brampton, Ontario). They're going to take the
management module approach, and I happen to believe their performance will not be any better than
that of IP appliances.

Web Switch Roundup

Finally, there are the Web switch companies: Alteon Websystems Inc. (San Jose, Calif.), Arrowpoint
Communications Inc. (Westford, Mass.), and Foundry Networks Inc. (Sunnyvale, Calif.).

So what exactly are these wonder-workers? Web switches are single high-performance devices that
handle all the network tasks I discussed above. They're server-aware, ensuring optimum load
distribution across farms, noting availability and helping with performance management. They're
application-aware, providing state management for Web sessions and apps alike-be they TCP, UDP
(user datagram protocol), SSL (secure sockets layer), or even shopping-cart transactions. They're
content-aware, parsing sessions and providing efficient and granular load distribution. And they're
network-aware, supporting key routing protocols and WAN and LAN interfaces.

Of all the companies in this space, Alteon Websystems is the standout.

It has built all the requisite networking tasks into its 700 Web switch from the ground up, rather than
placing them on a management module with questionable scalability. The 700 exploits ASICs
(application-specific integrated circuits) to cast into silicon repetitive networking tasks like address
filtering; session mapping and forwarding; TCP, UDP, and IP state tracking; load balancing; bandwidth
management; and much, much more. These ASICs are riding on every module and linked via a
high-speed switching fabric that can keep pace with the e-business of Web sites such as Exodus, Yahoo,
and Uunet's hosting service.

These days, everyone out there is eager to get in on the next big Internet thing, hoping to make a killing
on an IPO. Network architects looking for a really big deal should start checking out Web switches. You
may not get rich, but your company certainly stands to profit. Good hunting.



To: Frank A. Coluccio who wrote (43)10/10/1999 9:05:00 PM
From: ftth
 
Distributing the load: distributed load balancing in server farms

Network World, August 6, 1999 pNA

Author
Gibbs, Dwight


Since I wrote about local load balancing last week, it seems only natural that this week's column takes the
next step and focuses on distributed load balancing.

First, let's define the term. Distributed load balancing involves using two or more geographically
dispersed server farms to serve content. The idea is to have some or all of the same content at each
farm. Why would anyone do this? There are three main reasons:

Customer Experience: Ideally this will place the content closer to the customer. The goal is to route
customer requests to the server farm nearest to the customer. This will, in theory, provide quicker access
and, therefore, a better customer experience. Sounds great, no?

Redundancy: Another reason for doing the distributed load-balancing gig is redundancy. In the event of a
catastrophic failure at one server farm, all traffic would be rerouted to another farm.

Scalability: Using distributed load balancing, you can bring servers at different locations online to handle
increased loads.

In the old days (18 to 24 months ago), the tools to facilitate distributed load balancing were, quite
frankly, pretty bad. Cisco had its Distributed Director. It worked. The problem was that it relied solely
on Border Gateway Protocol (BGP), so it would send customers to the server farm that was the fewest hops
away from the customer's ISP. It did not factor congestion into the equation, which meant customers could
be sent over router hops that were totally overloaded when a longer but less-congested path would have
been faster.
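
A toy illustration of that distinction (the numbers are made up): a hop-count metric picks the
short but congested path, while a latency metric, which reflects congestion, picks the longer but
cleaner one.

# Hop-count-only selection vs. latency-aware selection. Farm data is invented.
farms = [
    {"name": "east", "hops": 4, "rtt_ms": 240},   # few hops, congested path
    {"name": "west", "hops": 9, "rtt_ms": 60},    # more hops, clean path
]

by_hops = min(farms, key=lambda f: f["hops"])
by_rtt  = min(farms, key=lambda f: f["rtt_ms"])
print(by_hops["name"], by_rtt["name"])   # "east" vs. "west"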

Another offering from the dark ages was Global Center's ExpressLane product. It was a good idea,
incorporating trace routes to measure congestion. The Motley Fool used it. When it worked, it worked
pretty well. When it did not work, our site was completely down. It was a good idea but was eventually
killed as Global Center is an ISP not a software shop.

In the past year, several companies have made great strides in the distributed load balancing market.
RadWare has its WSD-DS product (http://www.radware.com/www/wsd_ds.htm). F5 has a 3DNS product
(http://www.f5.com/3dns/index.html). Cisco still has Distributed Director
(http://www.cisco.com/warp/public/cc/cisco/mkt/scale/distr/index.shtml). GTE/BBN acquired Genuity
and the Hopscotch product (http://www.bbn.com/groups/ea/performance/traffic_dist.htm). Do these
products work? Probably. I have not used any of them. In fact, I think they are quickly becoming
completely irrelevant. Now before you tell me to put the crack pipe down, hear me out.

As I see it, there are two types of Web pages: static and customized. Static pages do not change after they
are published to a site, thus the name. The same page goes to every customer who requests it. As the
name suggests, customized pages can change for every single customer. A CGI or ASP script may be used
to grab information from a database and insert it into a page before sending it to a customer. What does
this have to do with anything?

If you host mostly static content, it does not make sense to use distributed server farms. I think it makes
much more sense to maintain a single server farm and use a caching service such as Akamai
(http://www.akamai.com/home.shtml) or Sandpiper (http://www.sandpiper.com/). These services have
LOTS of servers around the Internet. Their customers are essentially relying on their distributed load
balancing to achieve better performance, redundancy and scalability. This becomes even more attractive
when you consider that your hardware needs will be much lower than if you served every single page
yourself. Less hardware means fewer headaches. I don't know about you, but I could certainly do with
fewer hassles. It sounds good in theory. Does it work in practice? I think so.

We use Akamai to serve static content on the Fool site. The only completely static files we have are our
graphics. They are the same for every single customer. While we have seen some glitches with the Akamai
system, overall I have been pretty pleased. The load on our servers is reduced. Our bandwidth usage is
also reduced (graphics are the bulk of data transferred). And the site feels faster to our customers. The
cost savings from the decrease in bandwidth and server resources do not completely offset the cost of the
Akamai service. However, when I factor in the better customer experience and fewer technical headaches,
tough to quantify though they are, I think Akamai more than pays for itself. While I have not used
Sandpiper, I have talked to their CTO and several of their techs. It sounds pretty interesting. All that said,
the use of these caching services is not without problems.

The main problem with caching has been usage tracking. To get around that, you can put a tiny graphic
on the footer of every page that you will not cache. This graphic will be called for every page served. Since
your Web servers will cache the graphic, the load on the boxes should not be too great. Since the graphic
is small, the bandwidth requirements should be minimal. Ad serving should not be a problem, as the
graphic file download just described will tell you how many page views you received. Click-throughs on
the ad banners will still hit your ad server. There are other issues: expiration and honoring of TTLs and
maintaining control among them. However, I think the benefits far outweigh the costs.
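
For what it's worth, here is a minimal sketch of that page-view counting trick; the pixel path,
log format, and domain are hypothetical. Every page carries a reference to a tiny image that is
served from your own boxes rather than the caching network, so counting hits on that one URL
approximates pages served even when the bulk of the content comes from the cache.

# Count page views by counting requests for an uncached footer pixel.
# Illustrative only; the pixel URL and log format are assumptions.
import re

FOOTER_HTML = '<img src="http://www.example.com/count.gif" width="1" height="1">'

def count_page_views(access_log_lines):
    pixel = re.compile(r'"GET /count\.gif')
    return sum(1 for line in access_log_lines if pixel.search(line))

sample_log = [
    '1.2.3.4 - - [10/Oct/1999] "GET /count.gif HTTP/1.0" 200 43',
    '1.2.3.4 - - [10/Oct/1999] "GET /index.html HTTP/1.0" 200 5120',
]
print(count_page_views(sample_log))   # 1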

In a nutshell, if you are serving static content, I think it makes much more sense to forget about doing the
distributed load balancing yourself and let someone else worry about the distributed sites. Akamai and
Sandpiper have better distribution than you will be able to achieve anyway. By working with a service such
as these, you can achieve redundancy, scalability, and a better customer experience with minimal pain,
anguish and gnashing of teeth. The cost of this kind of caching is also significantly less than the cost of
maintaining your own servers in numerous networks. What about dynamic content?

Does it make sense to use distributed sites if you serve dynamic content? The answer is, "Maybe." If you
don't make extensive use of databases, distributed sites may make sense. If you can handle the site
management and costs, and if speed & reliability are crucial to your business, using distributed sites
makes sense. However, if you make extensive use of databases, particularly inserts and updates, you
probably do not want to use distributed sites. Why is this?

In one sentence: Two-way, near real-time database replication over the Internet is a pain in the butt, if
not impossible. Database replication can be a PITA as it is. Place one database cluster in San Francisco
and another in New York and replication REALLY gets painful.

We actually tried to make this work for the Fool. We had distributed sites and hoped to use a database
server at each site. After getting into the replication project, we decided it was not worth the effort. There
was one huge problem we could not get around. Suppose a customer changes information in a database
on a server in New York. Something happens and that customer is bumped to the servers in San
Francisco. She changes information there before any replication can happen. What do you do? You have
the same record in the same table on two different servers with different values in the fields. How do you
reconcile that? We could never come up with a satisfactory way to handle this. We came up with some
kludges, but nothing worth acting on. So we consolidated all database activity at one server farm. As our
site became more and more dynamic, the traffic to the nondatabase farm dropped off to nothing. Finally
it did not make good financial sense to maintain both sites. So we closed the nondatabase farm.
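
A toy illustration of that conflict (invented field names, not the Fool's actual schema): after a
failover, the same row has been edited at both sites before replication runs, and neither version
is obviously the one to keep.

# Two sites edit the same row between replication passes; version counters
# alone cannot say which write should win. Illustrative only.
ny = {"customer_42": {"email": "old@example.com", "version": 1}}
sf = dict(ny)   # last replication snapshot

ny["customer_42"] = {"email": "new-ny@example.com", "version": 2}   # edit in New York
sf["customer_42"] = {"email": "new-sf@example.com", "version": 2}   # edit in San Francisco after failover

def replicate(a, b, key):
    if a[key] != b[key] and a[key]["version"] == b[key]["version"]:
        # Same version number, different contents: a write-write conflict.
        raise ValueError("conflict on %s: %r vs %r" % (key, a[key], b[key]))

try:
    replicate(ny, sf, "customer_42")
except ValueError as err:
    print(err)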

If you use a database for your site and it is primarily read only, then it is much easier to do distributed
load balancing. In this model, you have one publisher and several subscribers. You can simply publish
the database every X hours to make this work. If speed and reliability are crucial, distributed load
balancing in this scenario may make sense.

I think that if you are serving static content, it makes much more sense to use a caching network like
Akamai or Sandpiper than to do distributed load balancing yourself.



To: Frank A. Coluccio who wrote (43)10/10/1999 9:11:00 PM
From: ftth
 
Web-based 'farms' let design teams plow ahead.
Electronic Engineering Times, June 21, 1999 p92

Author
Lee, Dwayne

Full Text
Network-centric computing farms -which are also known as server farms or ranches-have emerged as
an effective method for boosting productivity for large design teams. For example, Sun Microsystems
has created computing ranches for IC design that serve as scalable computational resources for true
24-hour, seven-day-a-week global engineering.

As design groups become more dispersed and more engineers work outside their offices, it is only
natural that those server ranches should be accessed from the Web. Recognizing the benefits of such a
situation, Sun has implemented a Web-based interface for its server ranches that enables both users and
support personnel to interact with the ranches through a Web-based browser.

To appreciate this marriage of Sun's server ranches with the Web, it is first important to understand what
a server ranch is and how it operates. Simply put, Sun's ranches deliver the computing muscle necessary
for creating the company's next-generation processor designs. They pool a staggering amount of
computer resources, making them available to hundreds of integrated-circuit designers within the
company.

For example, the computing ranch used for the microprocessor design group includes 750
multiprocessor UltraSparc systems, approximately 2,500 UltraSparc CPUs, more than 1 Tbyte of physical
RAM and around 26 Tbytes of disk space. That mind-boggling array of resources is linked with
100-Mbps Fast Ethernet switched networking throughout the ranch and to the desktops on Sun
campuses.

The ranch provides the advanced computational power needed by Sun's microprocessor and server
design teams. It performs more than a billion cycles of simulation in a given week. To maximize its
usage, it is simultaneously used by 15 to 20 projects at different points in the design cycle, supporting
roughly 600 to 800 designers and up to 200 different design and simulation applications, including
approximately 50 mainstream EDA applications. By leveraging network computing, centralized
administration and significant automation, the support staff for that ranch is small, averaging just one
administrator for every 100 systems.

Keeping data home

Besides providing formidable computing resources, the ranch also houses the most valuable asset Sun
has-the design data itself. Keeping the design database within the confines of the ranch considerably
simplifies version control and the tracking of changes. It also is much easier to back up and protect
critical design data from one central location and to provide effective security than if the information is
dispersed across a variety of computing resources residing in different locations.

To ensure the best resources are available to the design team, the latest, fastest computing hardware is
reserved for the ranch. Older equipment is rotated out onto the designers' desktops. That arrangement
works well, since the designers rely on the ranch for all their computational-intensive tasks and
therefore don't require as much computing muscle on the desktop.

To be effective, a ranch must be considerably more than the sum of the individual servers. Sophisticated
network computing techniques and significant amounts of automation are necessary to ensure that the
hundreds of users have easy access to the resources when they need them.

Designers should not be bothered with the details of what server to use. Nor should they have to chase
failed jobs and track system-administration matters. Such tasks should be executed in the background
so that designers can focus on their primary responsibility: designing.

The server ranches at Sun rely on custom resource-sharing software, called Dream (Distributed Resource
Allocation Manager), that creates a seamless interface between the ranch and the designers. The
software enables optimal use of the computing capabilities in the ranch without burdening the designer
with the task of managing the resources. The software ensures that if a job starts, it can finish-that the
job won't run out of memory, disk space, swap space or licenses.

The software also tries to meet the specific needs of the individual designers. Users may provide specific
criteria with their jobs, including priority and the required EDA software. The resource-sharing software
then matches submitted jobs with appropriate servers in the ranch and schedules the user's job for
execution.

The software continually tracks all jobs under its control and is even able to restart them automatically in
the event of failure. Users can check into the status of their jobs as they run. They are notified upon
completion.
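
A highly simplified sketch of the kind of matching such a resource manager performs (this is not
Sun's Dream code; the job fields, server names, and license counts are invented): each job declares
its memory and license needs, and the scheduler dispatches it only to a server that can satisfy
both, in priority order.

# Match queued jobs to servers by priority, free memory, and license
# availability. Illustrative only; all data is invented.
jobs = [
    {"id": "verify-clock", "mem_gb": 8, "license": "simulator", "priority": 1},
    {"id": "layout-check", "mem_gb": 32, "license": "drc", "priority": 2},
]
servers = [
    {"name": "ranch-001", "free_mem_gb": 16, "licenses": {"simulator": 2}},
    {"name": "ranch-002", "free_mem_gb": 64, "licenses": {"drc": 1}},
]

def dispatch(jobs, servers):
    placements = []
    for job in sorted(jobs, key=lambda j: j["priority"]):
        for srv in servers:
            if (srv["free_mem_gb"] >= job["mem_gb"]
                    and srv["licenses"].get(job["license"], 0) > 0):
                srv["free_mem_gb"] -= job["mem_gb"]
                srv["licenses"][job["license"]] -= 1
                placements.append((job["id"], srv["name"]))
                break
    return placements

print(dispatch(jobs, servers))
# [('verify-clock', 'ranch-001'), ('layout-check', 'ranch-002')]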

Thanks to that resource-sharing software, the server ranches at Sun approach 100 percent utilization.
That high utilization within the server ranch is achieved by a steady stream of medium-to-large batch
jobs 24 hours a day, organized and tracked by the resource-sharing software. As a result, in the past
year the Sun server ranch for microprocessor design has averaged close to a million jobs a month,
consuming approximately 800,000 CPU hours. At the same time, Sun has managed to keep average
queuing time for short jobs to around three minutes.

That efficient use of hardware and EDA software saves Sun millions of dollars a year, enabling the
company to continually expand and upgrade the ranch to meet next-generation design needs. Just as
important, it has resulted in better designs. Being able to easily submit jobs means that designers don't
think twice about running extra tests, which help them to more easily visualize trends and determine
design parameters.

Access is key to success

One of the critical factors in the success of Sun's server ranches is the high degree of access designers
have. But this was no easy accomplishment. Like most high-technology companies, Sun has
geographically dispersed design groups that work at different times around the clock. But those different
design groups don't have the usual localized computing resources; all share the same server ranch.

To justify that centralized resource and fully exploit its power 24 hours a day, seven days a week, it must
be available to any and all Sun designers at any time from anywhere. The best way to do that is with a
Web browser, which is exactly what Sun did.

A Web-based browser interface is a fast, intuitive approach that is immediately understood by most
users. With a browser interface via the Web, a designer can be anywhere in the corporation or halfway
around the world and use the ranch's resources as if via a dedicated line. What could be easier than
simply going into a Web browser, typing in the URL and being instantly connected to the server ranch?

That browser-based Web interface to the ranch was created several years ago for use within Sun
Microsystems' intranet. Just last year, in part due to the maturing state of Web security technology, it was
opened up to Internet use for one of Sun's next-generation microprocessor design groups. The major
impetus behind creating the Web interface was to give designers one place where they could go for the
latest information on their design.

Let's examine how a typical microprocessor designer would use the server ranch via the Web browser.
Say this designer is working from home late one evening and is worried about a troublesome race
condition in the clock distribution. After successfully entering the Web site with a valid password, she
pulls the latest release of the microprocessor design from the ranch. As this information is being made
available via the Web, the designer is assured that she is accessing the most current design data.

Now, let's say our designer examines the clock circuitry and suddenly realizes that her verification test
had some inappropriate timing variables that could have inadvertently injected the race condition. She
changes the variables and resubmits the verification of the clocking circuit to the ranch via the Web. The
site informs her that the job is accepted and that it will keep her posted on the status of the verification
task. The next morning, the Web page tells her that the verification job has passed. The results are
displayed and, to the designer's relief, the race condition has disappeared. But now there seems to be a
problem with the delay circuitry in one of the branches of the clock tree. With a sigh, she starts creating a
new suite of verification jobs that should help get to the source of that new dilemma.

As you can see, the Web interface to the server ranch greatly facilitates the use of this powerful resource.
Without direct access, our designer would have had to wait until she got to the office to explore the
problem.

Support made easy

The ease of access offered by the Web also greatly benefits the system administrator's support team in
maintaining, troubleshooting and monitoring the ranch. The ability, day or night, to observe all the
relevant details on server-ranch functioning enables support personnel to keep the ranch operating at an
optimal level all the time. And the Web-based interface is an important tool for keeping support
engineering informed, in real-time, concerning vital details about the server-ranch operation.

Most of the administrative tasks can be conducted over the Web. For example, system administrators
can install operating systems and administer patches through the Web site. They also can monitor and
manage activities and system status. They can even access database servers that assist in managing the
server ranch.

The administrators can monitor the ranch at a glance, seeing immediately when critical systems are
compromised or down. The graphic Web site gives multiple views into the network management
database so that various aspects of the ranch can be monitored simultaneously.

Besides keeping things running smoothly, the system administrators are responsible for the long-term
viability of that important resource. Automated tracking and reporting let them continually evaluate the
performance and configuration of the server ranch and justify current and future expenditures.

In Sun's ranches, important statistics and usage information are compiled and summarized. Much of this
data is presented dynamically on the Web site, both for administrators and designers and in monthly
reports that summarize the data graphically.

Internal and external security were of the utmost importance when this portal was devised. It is vital to
keep the details of these next-generation microprocessor designs completely protected. Sun took
aggressive steps to ensure that only authorized personnel could gain access.

For internal security, Sun built an access database that incorporates information both from human
resources on the division level and from the corporate-wide network access database. That way, people
from the microelectronics and other divisions can access information and the server ranch. This access
database is updated daily to ensure that any and all employee changes are immediately reflected.

To reach the server ranch from outside the corporate intranet, Sun issues token cards to employees granted
external access. The user activates the token card by typing in a password. If all is correct, he will be
connected to the server ranch via Sun.net, which provides a secure private connection over the Internet.
Sun.net builds a secure tunnel that encrypts all the data moving across the connection, in essence
creating a protected causeway from the browser to Sun's intranet.

Already, almost all Sun users exclusively utilize the internal Web interface to access the server ranch
when working on campus. External access will be expanded to accommodate all follow-on processor
designs.



To: Frank A. Coluccio who wrote (43)10/10/1999 9:12:00 PM
From: ftth
 
Balancing server loads globally.

Internet World, June 7, 1999 v5 i21 p27

Author
Phifer, Lisa

Summary
Distributing data worldwide for the Web typically relies on load balancing arrangements that involve
regional work centers. Content providers distribute their data repositories nationally and internationally.
Multinational organizations need to provide fast, consistent response times, without having to mirror entire
Web sites. Online configurations need to be transparent for users, and no amount of downtime is regarded as
acceptable.

Full Text
Bringing data closer to users means faster access times, less downtime, and greater manageability

DATA, LIKE POLITICS, is local, since it has to end up somewhere. And while the Internet's distributed nature
means that the physical location is hidden from end users, all things being equal, it still takes more time to
bring data that's farther away than it does to get at local data.

But distributing data worldwide for the World Wide Web is a way to provide faster service to targeted
populations, support localization, and deliver more robust service.

Server load balancing is nearly as old as the Web itself, with roots in the round-robin DNS once used to
distribute HTTP requests evenly across a pool of servers. By 1997, the first generation of load balancing
products began to emerge, offering algorithms to better utilize Web servers at a single location. Products like
Cisco's Local Director, F5 Networks' BIG/ip, and Radware's Web Server Director were deployed in front of
server farms, providing one "virtual IP" address for the entire site.

Packets arriving at this virtual address were forwarded to "best" destinations using such metrics as server
availability, response time, and user-defined weights. These solutions allowed server farms to scale
transparently and to become resilient to single-server outages.
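
The progression from round-robin DNS to metric-driven selection can be shown in a few lines
(illustrative only; the weights and server names are made up): round-robin hands out servers in
strict rotation, while a weighted scheme sends proportionally more traffic to the servers that can
best handle it.

# Plain round-robin rotation vs. weighted selection behind a virtual IP.
import itertools, random

servers = ["web1", "web2", "web3"]

# Round-robin: each answer simply rotates through the pool.
rotation = itertools.cycle(servers)
print([next(rotation) for _ in range(5)])   # web1 web2 web3 web1 web2

# Weighted: faster or bigger servers get proportionally more traffic.
weights = {"web1": 5, "web2": 3, "web3": 1}
choice = random.choices(list(weights), weights=list(weights.values()))[0]
print(choice)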

But load balancing from a single server farm still leaves a site at the mercy of every connection leading to it.
It's like building the perfect store served by a single road.

This is why real-time, transaction-intensive sites such as E*Trade now involve more than one regional work
center. Content providers like USA Today distribute data repositories nationally and internationally.
Companies that operate globally want to provide consistent response time to visitors anywhere from Bangkok
to Boston, without having to mirror entire Web sites.

"You've got to make it transparent for users to get to the closest content--a single site is no longer an
acceptable way of doing business, and the notion of acceptable downtime is going away," said John Stewart,
director of systems engineering and security at Digital Island, a high-speed overlay network service provider
with four international data centers.

Nearly every local load balancing product now sports a global counterpart or add-on. But considerable
diversity exists in this rapidly emerging market.

The granddaddy in this arena is Cisco Systems Inc.'s Distributed Director, which turns Cisco 2500 and 4700
routers into global load balancers. Companies like Digital Island deploy a pair of Distributed Directors for
redundancy, supported by a Local Director at every data center.

Stewart prefers Cisco's approach "because it understands the network layer, and is engineered from the
bottom up, instead of top-down." Distributed Director calculates network proximity by querying routers for
BGP and IGP route info, then combines round-trip latency, server up/down status, and administrative input to
select the "best" server. Two modes of operation can be used: HTTP redirection, by returning a "302
Temporarily Moved" response, or redirection of any application using DNS resolution.

Distributed Director is a relatively mature, stable product that leverages finely tuned network layer distance
metrics. But it requires Cisco routers at every site, plus BGP peering, and it doesn't take into account current
server workload.

Radware Inc.'s Web Server Director for Network Proximity takes the appliance approach, providing a
dedicated box for load balancing. WSD-NP, like Cisco's Distributed Director, also supports HTTP and DNS
redirection, but adds a third method, called Triangulation, whereby one box redirects traffic to another. The
second WSD acts as a proxy, returning responses directly to the requesting client. Radware designed
Triangulation as a high-throughput any-protocol alternative, because DNS redirection works well only if the
DNS server is geographically close to the client-a particularly bad assumption for road warriors.

"Our job as a vendor is to provide flexibility--no two clients think alike," said Hooman Beheshti, Radware's
chief technical officer. "We allow customers to choose the role of each WSD, redirection method, algorithm
metrics, and failover configuration." Any WSD Pro can be upgraded to an NP; NPs can perform both local and
geographic load balancing within a single box, and redundant NPs can share the balancing workload.

Radware and F5 Networks Inc. both offer local balancers that measure server workload using such metrics as
the number of open connections, fastest response time, number of successful requests, and packet
throughput. But F5's BIG/ip adds content awareness to the mix. For example, it can redirect around "404
Object Not Found" messages that might otherwise be interpreted as fast, successful responses. Extended
Content and Application Verification tools allow entire transactions to be emulated, with test results factored
into an "Internet quality of service" algorithm.

ENSO, a BIG/ip user that distributes audio clips to customers like Tower Records, deployed F5's 3DNS to
prevent unacceptably high packet loss over WAN links. F5 argues that dedicated, specialized hardware is
necessary to sustain reliably high throughput under stress. "Ninety-five percent of our sales involve
high-availability configurations; this underscores our belief that these solutions must not become a single
point of failure," said F5 director of product management Dan Matte.

Coyote Point Systems Inc.'s chief engineer, Bill Kish, agreed. "Disaster recovery is the primary issue pushing
geographic load balancing," he said. Coyote Point's Equalizer, he said, has attracted an e-commerce customer
base that will probably want Envoy, an inexpensive geographic balancing add-on that starts at just $2,500
per site.

Coyote Point customer IMDb (Internet Movie Database) has been using Envoy to serve images from the nearest
U.S. or U.K. site for six months. "Our sites had no way of covering for each other until Envoy came along," said
Jake Dias, IMDb's systems manager. "We are now able to offer quick service to all users, wherever they are.
Any site can go down and nobody will even notice."

Server load balancing modules are also available for "wire speed" switches like Alteon's ACEdirector, Foundry
Networks' ServerIron, and ArrowPoint's Content Smart Web Switch. These products are designed to move LAN
traffic via high-density Fast and Gigabit Ethernet ports and ASIC-based Layer 2/Layer 3 switching. Layer 4
redirection software has been added to support various applications, from load balancing to Web caching.

For example, Alteon's WebOS global server load balancing option allows ACEdirector switches to redirect
traffic based on server health, proximity, and response time. Foundry's Internet IronWare option supports
global server load balancing as well.

Where do these switching products fit? Mike Long, vice president of marketing and technology at Radware,
predicts that switches will eventually subsume the local load balancing market, while special-purpose
balancing products will reign in the distributed market and in LANs where intelligence takes precedence over
speed.

What other innovations are we likely to see in the next generation of load balancing products? Content
awareness will continue to grow, spurring products that understand how enterprise applications behave
end-to-end. An example of this trend can be seen in Resonate's Central Dispatch, a load balancing product
that evaluates the health of not only the target Web server, but also the back-end server required to satisfy an
incoming HTTP request.

Sri Chaganty, vice president of engineering at HolonTech, predicts that switch vendors will consolidate
value-added functions, such as quality of service rate-shaping, bandwidth management, and other
access-layer services, while embedding basic load balancing in ASICs. Some switches may become more
tightly coupled with the server farms they front-end, embracing new operating-system load balancing
features such as Microsoft's Clustering Services.

Today's two-tiered products rely on proprietary communication between global and local balancers to
determine proximity, network, server farm, and host performance. Best-of-breed multivendor combos
pairing high-speed LAN switches with intelligent software load balancers will require industry cooperation
and partnership. But this awaits a number of new ideas the players are still investigating.

Coyote Point's Kish thinks there's another step that will eventually become critical: to proactively push content
where it's needed, before it's requested. Radware's Long hypothesized that "reverse proxy caches" might bring
content closer to users by augmenting or replacing mirrored sites with cached content. To exploit these
resources, load balancers must become smart enough to differentiate between cachable and non-cachable
content requests.

If enterprises adopt global load balancing to provide bulletproof 24-by-7 network presence, the load
balancers must themselves be rock-solid and secure. As this market matures, high-availability configurations
deployed in redundant pairs may become the norm, and greater emphasis will be placed on security. The
more sophisticated customers will demand management tools that help them evaluate traffic, predict growth,
and tune policies for optimal performance, while customers at the lower end of the market will demand
self-tuning turnkey "appliances" that can be dropped into a network with minimal fuss.



To: Frank A. Coluccio who wrote (43)10/10/1999 9:18:00 PM
From: ftth
 
Sorry for the "data bombardment," but I posted them all within the 15 minute window so I couldn't chain the links.

They're all pretty long, so best to print them out and read them at your leisure.

dh