Technology Stocks : Frank Coluccio Technology Forum


To: Frank A. Coluccio who wrote (84), 10/24/1999 7:33:00 PM
From: CanynGirl
 
Sounds like they can either 'pre-populate' AKAM's servers or have them populated on a per-request basis.

October 1, 1999
A NETWORK'S ANATOMY: Akamai Technologies
By Sarah L. Roberts-Witt

If content is king, Akamai Technologies wants to become the official online court messenger.

Cambridge, Mass., newcomer Akamai--like competitors such as Sandpiper Networks and Mirror Image--makes a business of speeding content between server and user. Part of the emerging caching, or content-distribution, market [see "The Caching Question," Sept. 15, p. 72], Akamai's solution for balky Internet performance is simple: Put the content--particularly the bulky, slow-to-download content--closer to the end users who want it. Akamai does this by placing copies of large files like GIFs and JPEGs on servers scattered throughout the Internet in various provider networks and data centers. Internet traffic research firm the HTRC Group estimates that content distribution will grow to a $2.2 billion market by 2002.

"We've found that as much as 70 percent of a Web page's bytes are these embedded objects like image files," says Kieran Taylor, senior product manager at Akamai. "[We] put those files where they need to be, which is as close to the user as possible." Currently, Akamai has 900 servers in 25 service provider networks in 15 countries around the world.

Founded in September 1998 by a group of MIT scientists and mathematicians, Akamai and its FreeFlow service have garnered several heavy-duty endorsements--even if its financials look like a typical early-stage startup. Its customers include Yahoo, CNN, Apple Computer, and InfoSeek Corp. Akamai has received more than $43 million in funding from several venture capital firms, including Baker Communications Fund, the TCW Group, and Polaris Venture Partners. And in mid-August, Cisco Systems made a $49 million investment. Filings for Akamai's imminent IPO show revenue through June 30 was $404,000, while operating expenses were approximately $10 million.

The Network

When a business decides to use Akamai's FreeFlow service for its Web site, it gets a copy of the FreeFlow Launcher software. FreeFlow Launcher goes through a Web site, identifies objects that can be served via Akamai's network, and tags them with an Akamai resource locator, or "ARL." Later, when a user requests the page, the ARL tells the requesting browser to pull the tagged object from one of Akamai's servers instead of the original source server. This approach ensures that the first hit always goes to the origin server so that an accurate hit tally can be kept and cookies can be placed in the browser.
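The tagging step amounts to a URL rewrite over the page's embedded objects. Here is a minimal sketch of the idea; the edge hostname and ARL path scheme below are invented for illustration (the article does not describe the real ARL format):

```python
import re

# Hypothetical edge hostname -- the actual ARL naming scheme is not
# described in the article, so this is illustrative only.
EDGE_HOST = "a1.g.akamai.example"

def tag_embedded_objects(html: str, origin: str) -> str:
    """Rewrite <img> src attributes so embedded objects (GIFs, JPEGs)
    are fetched from an edge server instead of the origin.  The base
    page and its links are left untouched, so the first hit still goes
    to the origin server (preserving hit tallies and cookies)."""
    def rewrite(match):
        path = match.group(1)
        return f'src="http://{EDGE_HOST}/{origin}{path}"'
    return re.sub(r'src="(/[^"]+\.(?:gif|jpe?g))"', rewrite, html)

page = '<img src="/images/logo.gif"> <a href="/about.html">About</a>'
tagged = tag_embedded_objects(page, "www.example.com")
```

Note that only the image reference is rewritten; the anchor to /about.html still points at the origin.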

Web site operators can prepopulate Akamai's network with files of their choosing. Alternatively, and more commonly, the first request (after Akamai is on the job) for a tagged object pulls a copy of the file down to an Akamai server, and subsequent requests propagate copies of the file throughout the network. To update objects, site operators simply alter the already tagged files on their servers, and the propagation process starts again. In addition, as an object ages, it eventually is deleted from the Akamai servers.
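The pull-on-first-request behavior is a classic pull-through cache. A toy sketch of one edge node, with an invented TTL standing in for Akamai's (unspecified) aging policy:

```python
import time

class EdgeCache:
    """Toy pull-through cache: the first request for a tagged object
    fetches it from the origin; later requests are served locally
    until the copy ages out.  The TTL here is illustrative, not
    Akamai's actual expiry policy."""
    def __init__(self, fetch_from_origin, ttl_seconds=3600):
        self.fetch = fetch_from_origin
        self.ttl = ttl_seconds
        self.store = {}                      # url -> (object, time fetched)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(url)
        if entry and now - entry[1] < self.ttl:
            return entry[0], "edge hit"
        obj = self.fetch(url)                # miss or expired: go to origin
        self.store[url] = (obj, now)
        return obj, "origin fetch"
```

The first call for a URL reports "origin fetch"; repeats within the TTL report "edge hit"; once the copy ages out, the next request goes back to the origin, which is how altered files propagate again.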

But the real magic of FreeFlow lies in Akamai's dynamically generated network maps. These maps, updated as often as every 20 seconds via a proprietary form of DNS, help FreeFlow determine the most efficient path to fill user requests. Decisions are based on a user's location, the condition of the Internet, and the optimal FreeFlow server. Homegrown agents comb the Internet, assessing server and network availability, bottlenecks, and outages to generate this information.
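The mapping decision reduces to scoring candidate servers on measured conditions and picking the best one. A deliberately simplified stand-in (the server names, latency figures, and load-penalty weighting are all invented; Akamai's real maps and metrics are proprietary):

```python
def pick_server(servers, user_region):
    """Choose the 'optimal' edge server for a request: a toy stand-in
    for Akamai's map system.  Each live server is scored by measured
    latency to the user's region plus a load penalty; the agent data
    feeding these numbers is assumed, not real."""
    def score(s):
        return s["latency_ms"][user_region] + 100 * s["load"]
    candidates = [s for s in servers if s["up"]]   # skip known outages
    return min(candidates, key=score)

servers = [
    {"name": "bos-1", "up": True,  "load": 0.2, "latency_ms": {"east": 12, "west": 80}},
    {"name": "sfo-1", "up": True,  "load": 0.5, "latency_ms": {"east": 75, "west": 10}},
    {"name": "chi-1", "up": False, "load": 0.1, "latency_ms": {"east": 30, "west": 40}},
]
```

With these numbers, an East Coast user is sent to bos-1 and a West Coast user to sfo-1, while the downed chi-1 is never considered; refreshing the inputs every 20 seconds is what keeps the choice current.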

Each of Akamai's 900 servers has two 100Base-T interfaces--one for connectivity to the Internet and one to the local provider network. According to Taylor, Akamai's network currently has approximately 12 Gbps of capacity. "We like to keep 50 to 60 percent open capacity, which allows us to handle special events, bursts, and streaming media traffic." The servers themselves are all standard Intel hardware running Linux, with 450-MHz processors, 15-Gbyte hard disks, and 1 Gbyte of DRAM, which enables requests to be served out of RAM.

Customers and Pricing

Besides the customers mentioned above, Akamai is used by J.Crew.com to put the bulk of its mail-order catalog online. All the images are served from the Akamai network. Apple Computer, on the other hand, uses the service for its QuickTime TV. To date, the company has used FreeFlow to serve the "Star Wars Phantom Menace" trailer and two of Steve Jobs' recent keynote speeches.

Akamai is now gearing up for the Oct. 9 NetAid Foundation concerts to be held in London, the Meadowlands (New Jersey) Arena, and Geneva. Cisco Systems and the United Nations Development Program are the primary sponsors of the event, benefiting the Foundation's effort to eliminate extreme poverty throughout the world. Taylor said he expects Akamai's network to serve about 60 million hits per hour and somewhere in the neighborhood of 125,000 simultaneous live video streams during the events, which feature such performers as Quincy Jones, Bono, and David Bowie. Individuals will be able to pledge contributions online. Akamai plans to have 1,200 servers in place when the concerts air.

Akamai charges all its customers according to bandwidth consumed, measured in megabits per second. Maintaining an available flow of 1 Mbps costs $2,000 per month, and volume discounts are available. Customers can burst above their allocated bandwidth; burst traffic is billed at a 25 percent premium over the negotiated monthly rate.
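A back-of-the-envelope bill from those figures, assuming burst traffic is metered as peak Mbps above the commitment (the article gives only the headline numbers, so the exact billing mechanics here are an assumption, and volume discounts are ignored):

```python
def monthly_charge(committed_mbps, peak_burst_mbps,
                   rate_per_mbps=2000.0, burst_premium=0.25):
    """Sketch of a FreeFlow monthly bill: $2,000 per committed Mbps,
    with usage above the commitment billed at a 25% premium over the
    monthly rate.  Billing mechanics are assumed for illustration."""
    base = committed_mbps * rate_per_mbps
    burst_mbps = max(0.0, peak_burst_mbps - committed_mbps)
    burst = burst_mbps * rate_per_mbps * (1 + burst_premium)
    return base + burst
```

So a customer committed to 2 Mbps that bursts to 3 Mbps would pay $4,000 for the commitment plus $2,500 for the extra megabit.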

Among the challenges ahead for Akamai are to continue building out its network and to evangelize the virtues of content distribution.

Peter Christy of Collaborative Research, however, sees other, more difficult challenges ahead. "Putting stuff out on the edges of the network is a great idea--it's simple and not a very expensive proposition in Internet terms," says Christy. "But now Akamai has helped open the eyes of the market, and they're going to have lots of people coming after them. And there's a good chance their partners, the provider networks, will start demanding more from them, especially if the IPO goes well."




To: Frank A. Coluccio who wrote (84), 10/24/1999 8:09:00 PM
From: ftth
 
Can't say I've ever seen that either. Heck, if Alteon, the pharmaceutical company, got a little over a million from Alteon WebSystems for joint rights to the URL, that exceeded their last year's revenues!
----

>>Assuming that they had 100 data centers, then we're talking about inundating the 'net with 50 Mbits * 100 sites, or 5 Gbits of traffic

I suppose it could be multicast, but probably not.

In the scenario you outlined, what if it were something with huge pull-demand? Say it was a Bill, Monica, and Pamela-Sue video and 5 million people wanted it the instant it was released. In that scenario, the bandwidth used to perform the replication (and no, I don't mean Bill's bandwidth<gg>) is far less than the backbone bandwidth that the demand for the video would have caused if there were just one server.

I guess it wouldn't have to be anything close to 5 million "pullers" for the numbers to still work out in favor of the distributed replication, or am I missing something?
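To make that concrete, here's a quick back-of-the-envelope in Python, under two simplifying assumptions: replication pushes one full copy of the file to each site, and afterwards every request is served locally at zero backbone cost (multicast would make replication cheaper still):

```python
def backbone_traffic_bits(file_bits, n_sites, n_requests, replicate):
    """Compare backbone traffic for a single origin vs. pre-replication.
    Assumptions: replication sends one full copy to each replica site,
    and replicated requests are then served entirely locally."""
    if replicate:
        return file_bits * n_sites       # one copy pushed to each site
    return file_bits * n_requests        # every request crosses the backbone

FILE = 50_000_000                        # the 50-Mbit file from this thread
print(backbone_traffic_bits(FILE, 100, 5_000_000, replicate=True))   # 5 Gbits
print(backbone_traffic_bits(FILE, 100, 5_000_000, replicate=False))  # 250 Tbits
```

Under these assumptions, replication wins on backbone bytes as soon as requests exceed the number of sites -- here, just 100 pullers, nowhere near 5 million.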

dh



To: Frank A. Coluccio who wrote (84), 10/25/1999 12:25:00 AM
From: Frank A. Coluccio
 
I like the way these folks from Quantum Bridge Communications think. Thanks to signist on the MRVC thread.

Message 11691595

(REUTERS) Start-up firm aims to make fiber optics affordable

By Tony Munroe

BOSTON, Oct 25 (Reuters) - Aiming to shatter the data bottleneck that constrains many mid-sized firms with heavy bandwidth needs, a Massachusetts start-up will unveil Monday a technology it said will bring high-capacity fiber optic lines all the way to a company's premises.

Quantum Bridge Communications said its new technology can be used by local phone and cable television firms to link residential and small office users to existing fiber optic networks much less expensively than current options.

The technology would allow people to send and receive massive amounts of data, make voice calls, or watch movies without any of the delays most Internet users now experience.

"You can't shove lots of bits down a copper straw," Jeff Gwynne, co-founder and vice president for marketing of Quantum Bridge, said in an interview explaining the dilemma many companies face, even in areas with ample fiber optic networks.

To date, using fiber to connect the so-called "last mile" of phone networks -- the segment that reaches the customer -- has been so expensive that only large office buildings in urban areas have been hooked up to the networks.

Firms with high-bandwidth needs in smaller office parks or buildings have thus been forced to rely on costly T1 or T3 phone lines, or on cable modems or digital subscriber lines (DSL) that use old-fashioned copper wire, both of which have limited capacity.

The company, based in North Andover, Mass., has raised $22 million in venture capital and said it will deploy its technology to unnamed "beta" customers over the next few months, with general availability beginning in 2000.

Fiber optics are extremely hot, analysts noted. Optical networking firm Sycamore Networks Inc. <SCMR.O>, also based in Massachusetts, went public Friday and soared more than 600 percent from its $38 a share offering price, settling at an eye-popping $184.75. Cisco Systems Inc. <CSCO.O>, meanwhile, paid $6.9 billion in August to buy fiber optics firm Cerent Corp.

Several analysts said Quantum Bridge enters a welcoming marketplace. "The optical space is going crazy right now. Everyone's trying to figure out how they can offer fiber to businesses cost-effectively," said Andrew Cray of the Aberdeen Group. Said Hillary Mine of Probe Research, "What Quantum Bridge is doing in a nutshell is rapid improvement in the economics of fiber-to-the-curb, fiber-to-the-building."

Any would-be direct competitors to Quantum Bridge "are still probably in the labs," she added.

Quantum Bridge links a customer to a telecom carrier's central office with "passive optical technology," using cheap splitters and couplers at each fiber junction instead of "active" connectors with expensive power and maintenance needs. The technology can provide bandwidth ranging from 1 to 100 megabits per second, the firm said, compared with 1.5 megabits per second for T1 lines and 45 megabits for T3 lines.




To: Frank A. Coluccio who wrote (84), 10/31/1999 3:41:00 AM
From: Jay Lowe
 
>> Assuming that they had 100 data centers, then we're talking about inundating the 'net with 50 Mbits * 100 sites, or 5 Gbits of traffic. Hmm. Is this in some way self defeating? What we've done here effectively is to multiply a single T3 of data into something that would require an OC-96.

Consider how well this would scale if everyone began to follow suit. What am I missing here, that would lead me to come to such a drastic error in my thinking? <<

Frank ... this is a queuing systems problem ... some standard references are:

Queueing Systems, Volume 1: Theory, by Leonard Kleinrock (Wiley, 1975)

Queueing Systems, Volume 2: Computer Applications, by Leonard Kleinrock (Wiley, 1976)

It's basically the same problem as memory paging ... whether it's more efficient to prefetch the resource into a faster store depends entirely on the traffic pattern.

If 50 people in Kansas want to listen to the Doors, it's better to move the file to Kansas and serve it from there.

If only one person in Kansas cares about the Doors, then the prefetch saves no time globally, but may make one Doors fan happier, depending on the relative performance of the fan-to-Kansas and Kansas-to-global layers.

In practice, on the web as we know it, caching will only make sense in some cases ... it does not make sense to cache everything. The specific economics of any case cannot be quantified without knowing the traffic pattern and relative performance of the layers or domains ... and the cost and QoS options available.

The whole thing is actually quite hilarious ... the web caching guys seem to have completely forgotten the history of memory management.

Imagine that URLs are dynamically paged by layers of cache servers progressively "closer" to the edge based on least-recently-usedness. In this case, the web behaves like the disk, RAM, L2, and L1 layers in a PC. Since the resource in this case (web pages) has many readers and one writer, it makes sense to prefetch the changes to the layer closest to the readers.
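The memory-hierarchy analogy maps directly onto a least-recently-used cache. A minimal sketch of one such layer (a toy, not anyone's actual product):

```python
from collections import OrderedDict

class LRUEdgeCache:
    """URLs paged into a fixed-size edge store by recency of use,
    the way RAM pages disk blocks: when the store fills, the
    least-recently-requested page is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()           # url -> content, oldest first

    def get(self, url, fetch):
        if url in self.pages:
            self.pages.move_to_end(url)      # mark as most recently used
            return self.pages[url]
        content = fetch(url)                 # "page fault": go one layer up
        self.pages[url] = content
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict least recently used
        return content
```

With capacity 2, requesting /a, /b, /a, /c in order evicts /b -- /a survives because it was touched more recently, which is exactly the disk/RAM behavior being reinvented here.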

This whole game is just getting started, and the players are reinventing the same limited solutions that, say, DEC was forced to use in the PDP-11 (by the technology of the day).

The REAL money is in the automatic admin of the caching issue ... doubt it? Just look at a mask of a Pentium chip and see how the real estate is invested there.

Caching servers are cool but obvious ... what is non-obvious is how to get maximum pages near maximum readers at a cost minimum.

We'll see 3-5 years of silly solutions before the real deal kicks in.



To: Frank A. Coluccio who wrote (84), 11/3/1999 12:28:00 PM
From: Valueman
 
Surely, no one would propose such a solution to congestion on the 'net... by creating many times more traffic than necessary simply to satisfy an individual user's urge to listen to the Doors. Right? Even if multicast distribution schemes were used to refresh every site...

Sounds like a job for the GEO sat folks.