Technology Stocks : Frank Coluccio Technology Forum - ASAP


To: RTev who wrote (83), 10/24/1999 4:28:00 PM
From: Frank A. Coluccio
 
RTev, Thread, have you ever seen this before, where the same URL leads you to a choice of two sites maintained by different owners? Maybe I've been a bit too sheltered, but I haven't seen this before (or just haven't noticed):

alteon.com

-- Re: Caching and some recent Shortest Distance Download Alternatives

Here's some food for thought:

I began looking at some of the alleged competition for Akamai, and I came across F5, Alteon Web Systems, and Sandpiper. I was contemplating the advantages of using AKAM's 1200 servers - which they now claim to have in existence, whatever that means - making it possible for a more immediate "shortest distance" download from the closest site, based on their algorithm-based pointers. Distributed over how many different caching locations? This led me to wonder:

If a content provider used this vehicle of delivery, what are they doing to the rest of the 'net? Consider those audio or video files that range into the tens or hundreds of megabytes.

A 5 MB file, for example, which is roughly 50 Mbits [5 MB * 8 to 10 bits per byte, with overhead], or about a second's worth of a T-3, would need to be sent to all of the "data centers" or exchanges where these servers were housed.

In the case of an outfit like AKAM, that means that instead of a one-time shot of 50 Mbits emanating from their own, or an ASP's, location, the impact on the 'net would be:

50 Mbits * the number of server locations they are sending to on each transfer or update.

Assuming they had 100 data centers, we're talking about inundating the 'net with 50 Mbits * 100 sites, or 5 Gbits of traffic. Hmm. Is this in some way self-defeating? What we've effectively done here is to multiply a single T-3's worth of data into something that would require an OC-96.
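
To make the fan-out arithmetic concrete, here's a quick back-of-envelope sketch in Python. The inputs are just the figures used above (a 5 MB file, roughly 10 bits per byte once overhead is counted, and an assumed 100 cache sites) - illustrative numbers, not Akamai's actual deployment.

# Back-of-envelope sketch of the fan-out arithmetic above.
# Inputs mirror the figures in this post (5 MB file, ~10 bits per byte
# including overhead, 100 cache sites); they are assumptions for
# illustration, not Akamai's real numbers.

FILE_SIZE_MB = 5        # size of the audio/video file, in megabytes
BITS_PER_BYTE = 10      # 8 data bits plus a rough allowance for overhead
CACHE_SITES = 100       # assumed number of data centers / exchanges

single_copy_mbits = FILE_SIZE_MB * BITS_PER_BYTE      # ~50 Mbits, about a second of T-3
total_push_mbits = single_copy_mbits * CACHE_SITES    # ~5,000 Mbits per refresh

print(f"One copy:            {single_copy_mbits} Mbits")
print(f"Pushed to {CACHE_SITES} sites: {total_push_mbits} Mbits "
      f"(~{total_push_mbits / 1000:.0f} Gbits, roughly an OC-96's per-second capacity)")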

Consider how well this would scale if everyone began to follow suit. What am I missing here that would lead me to such a drastic error in my thinking? Anything? Surely, no one would propose such a solution to congestion on the 'net... by creating many times more traffic than necessary, simply to satisfy an individual user's urge to listen to the Doors. Right? Even if multicast distribution schemes were used to refresh every site...

I suppose for static files this is OK if the number of downloads is high enough to offset the initial load. But how about dynamic files, changing web page content, etc.? Like I said, food for thought.
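
On the break-even question for static files, here's a rough sketch of the same logic. The assumption is mine, not anything Akamai publishes: a download served from a nearby cache avoids some fraction of the long-haul path that a download from the origin would have crossed, so replication pays for itself once the long-haul traffic avoided exceeds the traffic spent pushing the file to every cache site.

# Rough break-even sketch for the static-file case above.
# Assumption (mine, not from this post or from Akamai): each cache hit
# avoids some fraction of the long-haul path that a download from the
# origin would otherwise cross. Replication pays off once the long-haul
# traffic avoided exceeds the traffic spent pushing the file out.

def break_even_downloads(cache_sites: int, long_haul_fraction_saved: float) -> float:
    """Number of downloads needed before caching saves net backbone traffic.

    Push cost: cache_sites copies of the file sent over the backbone.
    Saving per cached download: long_haul_fraction_saved of one copy.
    """
    return cache_sites / long_haul_fraction_saved

# Example: 100 cache sites, and assume a cache hit avoids ~80% of the
# long-haul path. About 125 downloads are needed to break even.
print(break_even_downloads(100, 0.8))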

Comments (and undoubtedly, some corrections) would be welcome.

Regards, Frank Coluccio