Technology Stocks : EMC How high can it go?


To: JRI who wrote (7642), 9/9/1999 11:08:00 PM
From: Jerryco
 
Racing Toward The Always-On Internet
By Todd Spangler, Inter@ctive Week
September 6, 1999 9:20 AM ET

Redundancy. Fault tolerance. Self-healing networks. Fail-over servers and hot-swappable disks.

This is the esoteric language of building an industrial-strength Internet: one that is reliable and able to ship huge amounts of data on demand at any time. Get used to these terms, because every business is becoming an Internet business, and they are terms executives must know as well as "cash flow" and "net loss."

There are two distinct considerations when talking about Net dependability: One is the quality of the systems that run individual sites and networks; the other is, more broadly, the underlying fabric of the Internet itself.

During the next four years, the Internet's infrastructure as a whole will grow increasingly secure, stable and efficient. Experts predict a number of new technologies will be common by 2003 to fortify Internet networks and services.

Countless research projects are under way, quietly building the rivets for the next-generation Internet, ranging from low-level protocols to caching services, and from security advances to making networking equipment more scalable. The technologies include things like the next-generation Internet Protocol, version 6 (IPv6), which eventually will allow trillions of devices to be connected to the Net and ensure that time-sensitive content like video clips and voice conversations get top-priority service.

"The network becomes more secure, the stability improves dramatically and it all gets a lot faster," says Fred McClimans, chief executive of research firm Current Analysis. It's not all wine and roses, though. McClimans adds that, as a result of its increasing reach and importance, the Internet also is much more likely to become a target for terrorist activities.

But, for most e-commerce sites, the systems most crucial to running their businesses are within their control. And be forewarned: Net companies that don't keep pace in the race to reliability will fall by the wayside.

Guess Who's Kaput.com?

Consider eBay. The star-crossed auction site lately has become synonymous with "Internet outages." In early August, eBay's site was down for 10 hours, and some portions were unusable for almost a full 24 hours. That was only the latest headache; in June, eBay suffered a 22-hour outage and then a 10-hour hiccup in mid-July.

EBay has felt the impact not just in potential revenue lost as customers drifted to competing auction sites, but also in its stock price. Since its most recent spate of problems began in June, eBay stock has fallen from $182 to about $75 in mid-August. Some of that decline can be attributed to the overall slump in Net stocks, but it also must reflect growing investor unease with eBay's chronic blackouts.

So, eBay didn't want to waste any more time resuscitating its servers every few weeks, and it finally got serious about exterminating the glitches. After its second blackout, eBay appointed Maynard Webb, previously chief information officer at Gateway, to head its operations.

There's a telling back-story to eBay's woes. None of eBay's competitors is gloating about the problems it's been having, not even in private, off-the-cuff jokes, because Internet engineers everywhere know they are one trip-up away from disaster themselves, according to B.V. Jagadeesh, chief technology officer of Exodus Communications. Exodus, which hosts more than a thousand companies' Web sites in its worldwide data centers, provides the facilities and bandwidth for eBay's servers.

"The CIOs of other companies aren't ridiculing eBay because they know it could happen to them at any time," he says.

The upshot is that Internet businesses need to treat staying online 24 hours per day as their top priority. But today, many Net companies are treating it as an afterthought, says David Cooperstein, research director, consumer e-commerce at Forrester Research.

"The infrastructure is what gets the short shrift at first. Companies think, 'If I'm successful at getting the business model off the ground and we become popular, I'll deal with it then,' " Cooperstein says. "But this isn't about the company file server being down. This is about the impact on revenues . . . and these are companies that are barely profitable."

It's expensive to stay "on" all the time. Forrester Research estimates that the cost of a "best effort" e-commerce site that provides 99.0 percent uptime would be roughly $3.2 million. For a "non-stop" e-commerce site that provides 99.999 percent uptime, the cost shoots up to $6.9 million.

Is it worth it to eliminate those few extra hours of dead air? Unquestionably, Cooperstein says: "Anybody who went out to build a factory would do everything they could to make sure it's safe. We should expect the same of e-commerce sites."
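The difference between those two uptime figures is easy to quantify; here is a minimal sketch of the downtime arithmetic (the dollar figures above are Forrester's, but the percentages translate directly into hours):

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

def downtime_hours(uptime_pct):
    """Annual downtime implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

# "Best effort" site: 99.0 percent uptime
print(round(downtime_hours(99.0), 1))         # about 87.6 hours of dead air per year

# "Non-stop" site: 99.999 percent uptime ("five nines")
print(round(downtime_hours(99.999) * 60, 1))  # about 5.3 minutes per year
```

Going from two nines to five nines buys back roughly 87 hours of dead air a year, which is what the extra $3.7 million pays for.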

One of the biggest problems in trying to achieve reliable Internet service is that it's like trying to put brand-new tires on a moving car. It's costly enough to build systems that provide close to 100 percent uptime. What skews the equation further is the fact that the Net is expanding exponentially, and the massive increases in users and bandwidth make it extremely difficult to plan what capacity you will need for your systems even in six months - let alone in five years.

Furthermore, even if you have some idea about what kind of raw hardware and bandwidth is required, you might not be able to adequately test the whole setup to find out where the weak spots are.

"As soon as there is enough bandwidth, tomorrow you don't have enough. And as soon as you fix one problem, there's another one," says Bobby Johnson, president and CEO of Foundry Networks, which sells a line of switches for distributing traffic in a data center. "I think the Internet is a lot better today than three years ago. But it's going to be this continuous cycle for quite some time."

Companies typically have dealt with the problem of keeping pace with huge increases in demand by overprovisioning, which means they build their systems to be able to handle significantly more traffic than they actually expect to receive. But this is an inefficient proposition, because a site is spending money on "for-a-rainy-day" infrastructure that won't see immediate use.

"People manage their Web infrastructure by overbuilding," says Esmerelda Silva, networking research director at International Data Corp. "It's an option people understand."

Take, for example, Critical Path, a services company that hosts e-mail applications for Internet service providers (ISPs).

Critical Path recently installed brand-new storage equipment from EMC that will give Critical Path the ability to handle "tens of millions of mailboxes," says Matt Hartwell-Herrero, infrastructure product manager.

Today, Critical Path hosts 4.2 million mailboxes; the company's technical planners decided to put in a data storage architecture that would not only increase raw capacity but also add the ability to provide "hot standby" service for each mailbox it hosts.

"We're never going to reach a point where we say, 'OK, we're done,' " says Hartwell-Herrero. "We're always going to be working to make it more redundant." The next steps for Critical Path will be more of the same: bringing additional distributed sites online and adding still more capacity.

Providing true reliability is a challenge that pokes up in different places with different faces. There are potentially hundreds of areas in which Internet services could foul up, and merely keeping the lights on is not good enough.

In addition to ensuring that you have backed up all your data and your servers are still running, you need to ensure the system is still functioning the way it's supposed to, says Jacob Stein, director of strategic planning at database vendor Sybase.

"Where people sometimes make a mistake is that they assume that hardware redundancy is enough," Stein says. "The problem is slightly more complex than that. If you write a bad series of bits and bytes, then you have two copies of bad data."
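A minimal sketch of Stein's point in Python: replication faithfully copies whatever it is given, so a checksum recorded at write time can detect later corruption, but it cannot catch data that was already bad when the application wrote it. (All names here are invented for illustration.)

```python
import hashlib

def write_replicated(replicas, data):
    """Store the payload plus a checksum on every replica.
    Redundancy alone copies whatever it is given -- including bad data --
    so each record carries a digest computed at write time."""
    digest = hashlib.sha256(data).hexdigest()
    for replica in replicas:
        replica["payload"] = data
        replica["checksum"] = digest

def read_verified(replica):
    """Return the payload only if it still matches its checksum."""
    if hashlib.sha256(replica["payload"]).hexdigest() != replica["checksum"]:
        raise ValueError("corrupt replica")
    return replica["payload"]

# Two replicas of the same record
a, b = {}, {}
write_replicated([a, b], b"order #1234: 3 widgets")
assert read_verified(a) == read_verified(b)

# If the *application* writes bad data, the checksum matches anyway --
# exactly Stein's warning. Validation must happen before the write, too.
```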

Are We There Already?

It's true that there are myriad obstacles in keeping an e-commerce site afloat, but could it be that the current robustness of the Internet is underappreciated?

Some experts, such as Vinton Cerf, often called one of the fathers of the Internet, argue that the Internet is an astonishingly scalable system that is already industrial-strength, capable of withstanding just about any kind of localized damage.

They point out that pundits used to predict the Internet would fail at some point along its astronomical growth curve, a doomsday most notoriously forecast by 3Com founder Bob Metcalfe, and that no such failure has ever happened on a wide scale.

"If we didn't have an industrial-strength Internet today, there would not be billions of dollars of commerce going over it," says David S. Isenberg, principal of Isen.com, a telecommunications consulting business.

The key to Internet reliability, Isenberg says, is redundancy. While expensive, it's something that can be achieved today by any service provider or Net-connected business.

"Everyone is starting to get this message," he says. "Those of us who really depend on the Internet have two ISPs; it's just starting to be a way of life."

Measuring Internet reliability also presumes that users expect a certain level of service. The point of comparison shouldn't be literally 100 percent uptime, but what people are willing to endure in order to use the service, says Andrew Greenfield, a marketing executive at Cisco Systems' service provider division.

"What's an acceptable level of service for the Internet?" Greenfield asks. "Look at the cellular phone industry - people pay $100 per month for service that occasionally drops their calls and isn't always crystal-clear. People will put up with a lot if it delivers something useful."

However, Greenfield adds, there is plenty of room for improvement in Internet technology itself or in Web sites improving their own reliability.

Robust.net: What's Next?

So, what are some of the important technologies that will be the steel I-beams of tomorrow's Internet?

More bandwidth and more intelligence in how to treat that bandwidth are the hallmarks of Internet 2003.

New security technologies are required for the Net, for two reasons. First, pervasive computing, in which every digital or electronic device on your person and in your home is connected to the Web, demands equally pervasive security. Second, increasing computing power is making traditional forms of security obsolete.

"Security has got to be there. People have to be confident that the Internet is secure," says Nev Zunic, program manager at IBM's cryptographic research center.

Zunic led the team at IBM that developed a new encryption technology, which was submitted to the National Institute of Standards and Technology as a contender to become the Advanced Encryption Standard (AES).

Products that use the 128-bit AES will be orders of magnitude stronger than today's security products built on the Data Encryption Standard (DES), whose effective key length is just 56 bits. Zunic offers this example: if DES' security is equivalent to the water in a teaspoon, AES is a whole swimming pool.

"I expect AES to last about 20 years," Zunic says.
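The teaspoon-versus-swimming-pool analogy can be made concrete: comparing brute-force keyspaces, the smallest AES key size (128 bits) against DES's effective 56-bit key gives a factor of 2^72.

```python
des_keys = 2 ** 56    # effective DES keyspace (56-bit key)
aes_keys = 2 ** 128   # smallest AES keyspace (128-bit key)

# How many times more keys an attacker must try against AES
ratio = aes_keys // des_keys   # 2**72
print(ratio)  # 4722366482869645213696, about 4.7 sextillion times larger
```

Every extra key bit doubles the work of an exhaustive search, which is why 72 additional bits dwarf any foreseeable gain in raw computing power.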

A much larger supply of bandwidth is another feature of the Internet in 2003. Hosting provider Exodus has seen its bandwidth demands increase 400 percent per year. At that rate, in four years' time, the Exodus network will be pumping roughly 2,700 billion bits per second (2.7 terabits), up from 4.3 billion bits per second today.
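That projection is plain compound growth; a quick sketch of the arithmetic, reading "400 percent per year" as a fivefold annual increase, lands in the low terabits per second:

```python
def project_bandwidth(current_bps, annual_growth_pct, years):
    """Compound a starting bandwidth forward at a fixed annual growth rate."""
    factor = 1 + annual_growth_pct / 100   # 400% growth -> x5 each year
    return current_bps * factor ** years

today = 4.3e9  # 4.3 billion bits per second
in_four_years = project_bandwidth(today, 400, 4)
print(f"{in_four_years / 1e12:.1f} terabits per second")  # about 2.7 Tbps
```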

Carriers are quickly plowing new fiber across the U.S. and around the world, and Dense Wavelength Division Multiplexing technology is letting them cram more raw bandwidth onto their existing lines, resulting in networks that will have grown, metaphorically, from the size of a straw to a 20-meter-diameter sewer main. Meanwhile, the demand will come from broadband cable, Digital Subscriber Line and wireless edge connections, which analysts expect to finally start reaching Main Street USA within four years.

To get there, the core of Internet backbone networks will require new "terabit-scale" routers, moving trillions of bits of data per second, with companies including Cisco and Nortel Networks, as well as start-ups like Juniper Networks, working on this problem. The equipment vendors are working without a net, so to speak: No networks anywhere have ever needed to run this fast.

But billions of additional bits per second of bandwidth alone won't necessarily provide better service if the access can't be parceled out more intelligently.

"Although the equipment is starting to get to the point where customers can turn on tons of bandwidth, throwing bandwidth at the problem is a very short-lived solution," says IDC's Silva. "You're going to run into congestion and packet loss."

By 2003, analysts expect Internet networks to have started providing guarantees on individual bandwidth flows, enabling applications like reliable voice and video to run across the Internet's public infrastructure. Some optimistic projections have data networks running entirely IP-based infrastructures by 2002, with technologies like Multiprotocol Label Switching (MPLS) providing traffic-management capabilities previously found only in less flexible technologies such as Asynchronous Transfer Mode (ATM).

In the crystal ball for the Internet, perhaps there's no technology less heralded today than caching servers, which store a copy of the desired content closer to the user.

Once Internet-connected networks widely deploy intelligent caching appliances, a class of products expected to explode in the next four years, the salubrious effects will be seen from both a user's perspective - faster response times - and from a bird's-eye view - smoothing out the Net's performance spikes.

"We believe the Internet as it's evolving isn't efficient for distributing content and that we need more layered solutions on top to distribute content intelligently," says Rangu Salgame, CEO of Edgix, a satellite-based intelligent content delivery start-up. "You can throw all the bandwidth in the world out there, but it won't improve response times. What intelligent caching does is take out the World Wide Wait as we know it - you're putting all your content one hop away from a user."
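The idea behind these caching appliances can be sketched in a few lines: serve repeat requests from a nearby copy and go back to the origin only when that copy is missing or stale. This is a toy model, not any vendor's product; the class and function names are invented.

```python
import time

class EdgeCache:
    """Minimal sketch of a caching proxy: keep a local copy of each
    fetched object for `ttl` seconds, and only contact the origin
    when the copy is missing or stale."""

    def __init__(self, fetch_from_origin, ttl=60):
        self.fetch_from_origin = fetch_from_origin  # the slow, far-away fetch
        self.ttl = ttl
        self.store = {}        # url -> (content, fetched_at)
        self.origin_hits = 0   # round trips actually made to the origin

    def get(self, url):
        entry = self.store.get(url)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]    # served from the copy "one hop away"
        self.origin_hits += 1
        content = self.fetch_from_origin(url)
        self.store[url] = (content, time.time())
        return content

# A thousand requests for the same page cost one origin round trip.
cache = EdgeCache(lambda url: f"<html>page at {url}</html>")
for _ in range(1000):
    cache.get("http://example.com/")
print(cache.origin_hits)  # 1
```

The win is twofold, matching the article's two perspectives: the user sees the nearby copy's response time, and the origin and backbone see a thousand requests collapsed into one.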

Further off in the distance is IPv6. The main benefit of the new protocol is that it expands the address space, allowing a virtually unlimited number of IP addresses, compared with a ceiling of about 4.3 billion (2^32) with today's IPv4. The new protocol is also smarter, with security built in from the start, unlike IPv4, where security is tacked on later, and with support for bandwidth guarantees.
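The address-space gap is simple to compute: IPv4 addresses are 32 bits wide, IPv6 addresses are 128.

```python
ipv4_addresses = 2 ** 32    # about 4.3 billion
ipv6_addresses = 2 ** 128   # about 3.4 x 10**38

print(f"{ipv4_addresses:,}")                     # 4,294,967,296
print(f"{ipv6_addresses / ipv4_addresses:.2e}")  # 7.92e+28 times more addresses
```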

Robert Hinden, an Internet engineer at Nokia, who is co-chairman of the Internet Engineering Task Force group developing the protocol, sees IPv6 adoption as inevitable, and says that in the next four years or so it will slowly make its way into the Internet backbone.

"You end up with this very stagnant Internet if you don't move up to IPv6," Hinden says.

The Internet works well today for those who build their services and structure their sites for redundancy. And, with new technologies and higher-capacity systems coming down the superhighway, the Internet itself will work even better and let users do even more.

Eventually, even our most demanding expectations of performance will be met with Internet technologies, says Ellen Hancock, president and CEO of Exodus.

"I do believe that the Internet is maturing, but I would not call it mature," Hancock says. "It's just a matter of time before the levels of service and reliability we were used to with mainframes come to the Internet."

- Joe McGarvey contributed to this article.