Strategies & Market Trends : Gorilla and King Portfolio Candidates


To: om3 who wrote (18416), 2/22/2000 10:53:00 PM
From: Dinesh
 
Steve, Buck

There are further reasons to be skeptical of the huge
latency calculated in the CacheFlow white paper:

cacheflow.com

One, most browsers can open multiple connections to retrieve
the referenced objects (images) in parallel; I think the
default is 4. Depending on your connection speed you can set
it higher, say 50, as long as your desktop can handle it.
Worth experimenting with. A sketch of the idea follows.
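
Here's a rough sketch (Python, with made-up URLs; none of
this is from the white paper). The point is that total time
tracks the slowest batch, not the sum of all fetches:

from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical URLs standing in for a page's 45 referenced images.
IMAGE_URLS = ["http://example.com/img%d.gif" % i for i in range(45)]

def fetch(url):
    # Download one object and report its size.
    with urlopen(url) as resp:
        return len(resp.read())

# 4 workers mimics the browser default; raise max_workers to
# mimic cranking the browser's parallel-connection setting up.
with ThreadPoolExecutor(max_workers=4) as pool:
    sizes = list(pool.map(fetch, IMAGE_URLS))

print("fetched %d objects, %d bytes total" % (len(sizes), sum(sizes)))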

Two, the diagram about latency is wrong. An 'Open TCP'
requires three one-way trips between the client and the
server (SYN, SYN-ACK, ACK). So even though the relative
savings are the same, the absolute numbers are 1.5 times
what the diagram shows. There are also socket teardown
costs, which the paper doesn't mention.
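
In milliseconds, with an assumed 50 ms one-way trip (the
number is made up; the trip counts are the point):

ONE_WAY = 0.050  # seconds, one-way client-to-server; assumed

open_2trip = 2 * ONE_WAY  # one round trip, as the diagram implies
open_3trip = 3 * ONE_WAY  # SYN, SYN-ACK, ACK: the real setup cost

print("setup per connection, diagram: %.0f ms" % (open_2trip * 1e3))  # 100 ms
print("setup per connection, actual:  %.0f ms" % (open_3trip * 1e3))  # 150 ms
print("ratio: %.1fx" % (open_3trip / open_2trip))                     # 1.5x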

Three, as Steve mentioned, there are keep-alive (persistent)
connections, which became the default with HTTP 1.1. The
server keeps the connection open for a certain length of
time to accept additional requests, and this cuts out the
repeated connection setup.
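
A sketch of what the reuse looks like (Python's http.client
keeps the HTTP 1.1 connection open between requests; the host
and paths are hypothetical):

import http.client

# One TCP handshake here; later requests reuse the same socket.
conn = http.client.HTTPConnection("example.com")
for path in ["/", "/img1.gif", "/img2.gif"]:
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()  # drain the response before reusing the connection
    print(path, resp.status, len(body))
conn.close()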

Finally, the biggest cause of delay at the user end is the
on-ramp speed. At 56 kbps, a 100KB page will take about
16 seconds; I am adding up the page plus the 45 images
to reach 100KB. It doesn't matter what speed the data
travels on the backbone. Against that, an extra 3-4
seconds doesn't seem like such a big deal.
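
The arithmetic (nominal line rate; real modem throughput is
lower, which is where the ~16 seconds comes from):

# Time to move a 100KB page through a 56 kbps modem.
page_bytes = 100 * 1024
link_bps = 56_000  # nominal; effective throughput is usually lower

print("%.1f s at nominal 56 kbps" % (page_bytes * 8 / link_bps))  # ~14.6 s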

Caching is good for many reasons, but none of these. Akamai
is better at deploying close to the user, and standard proxy
servers already offer caching.

So: technical errors and a tendency to exaggerate in the
white paper. I suppose it's just PR.

BTW, who gives a hoot about those CNN users in London...

Regards
Dinesh



To: om3 who wrote (18416), 2/23/2000 1:09:00 AM
From: buck
 
om3, I'll have to read up on HTTP 1.1. My understanding of HTTP is consistent with what is presented in the white paper. It would make sense that a new version of HTTP would maintain a persistent connection.

I'll take your word for it...that only one TCP session has to be opened. Let's also assume that your browser is making multiple threaded requests for the web page's objects. You still must suffer through the 0.1-second delay for coast-to-coast delivery...if you have a direct fiber connection from your browser to the web server. Which you don't have.
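
Rough numbers behind that 0.1 second, with the distance and trip count as assumptions (roughly a New York to San Francisco fiber path, and the three-trip TCP open plus request and response from upthread):

# Propagation delay across a coast-to-coast fiber path (assumed length).
km = 4_500            # assumed route length, NYC to SF
fiber_kms = 200_000   # light in fiber moves ~200,000 km/s, about 2/3 of c

one_way = km / fiber_kms
total = 5 * one_way   # 3 trips to open TCP + request out + response back

print("one-way trip:  %.1f ms" % (one_way * 1e3))  # ~22.5 ms
print("open + fetch: %.1f ms" % (total * 1e3))     # ~112.5 ms, i.e. ~0.1 s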

Not to mention that you now have to have big fat pipes to stuff that content down. A DS-3 becomes the bottleneck if you're putting more than 5K bytes down that pipe, and an OC-3 if there are more than 19K bytes to stuff into it; those are roughly the amounts each can serialize in a millisecond. Guess how big that SI logo is in the corner. 3K bytes!
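
The pipe math, for reference (the line rates are the standard DS-3 and OC-3 figures; the per-millisecond framing is mine):

# Bytes each pipe can serialize in one millisecond.
DS3_BPS = 44_736_000   # DS-3 line rate
OC3_BPS = 155_520_000  # OC-3 line rate

for name, bps in [("DS-3", DS3_BPS), ("OC-3", OC3_BPS)]:
    per_ms = bps / 8 / 1000  # bytes the pipe clocks out in 1 ms
    print("%s: ~%.0f KB per millisecond" % (name, per_ms / 1024))
# DS-3: ~5 KB/ms, OC-3: ~19 KB/ms, matching the thresholds above.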

Now, fetch an object from your local intranet. This is what you are doing with an "Akamaized" web. Akamaized web content latency is, for all practical purposes, negligible compared to external network object fetches.

My point, and Akamai's point, is that no external server is going to serve content faster than a localized one. There should be no contention on this point. And 1/10th of a second, as your example gives us, is noticeable to an end user. That may be hard to believe for some, but personal and observed experience bear this out.

buck