Steve, Buck
There are further reasons to be skeptical of the huge latency calculated in the CacheFlow white paper:
cacheflow.com
One, most browsers can open multiple connections to retrieve the referenced objects (images) in parallel. I think 4 is the default. Depending on your connection speed, you can set it higher - like 50 - as long as your desktop can handle it. Worth experimenting with.
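Quick back-of-the-envelope on that point. The numbers here (100 ms round trip, objects fetched in waves, transfer time ignored) are my own assumptions for illustration, not anything from the white paper:

```python
# Rough model: n small objects fetched over connections that each cost
# one round trip to set up (simplified) and one round trip per
# request/response. Transfer time and bandwidth are ignored on purpose.

def page_latency(n_objects, parallel, rtt=0.1):
    """Total round-trip latency in seconds for n_objects fetched
    `parallel` at a time, at an assumed `rtt` seconds per round trip."""
    waves = -(-n_objects // parallel)  # ceiling division
    return waves * 2 * rtt             # setup RTT + request/response RTT per wave

for p in (1, 4, 8):
    print(f"{p:2d} parallel connections: {page_latency(45, p):.1f} s")
```

With 45 images the waves shrink fast as you raise the parallelism, which is why bumping the browser setting is worth trying.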
Two, the latency diagram is wrong. Opening a TCP connection requires three one-way trips between the client and the server (SYN, SYN-ACK, ACK) - 1.5 round trips, not one. So even though the relative savings are the same, the absolute cost is 1.5 times higher than shown. There are also socket teardown costs, which the paper doesn't mention at all.
Three, as Steve mentioned, there is keep-alive, which became the default behavior with HTTP 1.1 (persistent connections). The server keeps the connection open for a certain length of time to accept additional requests, and this cuts down on the connection setup time.
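A sketch of what keep-alive saves, again with an assumed 100 ms RTT and counting only the 1.5-RTT handshake per connection (teardown and transfer time left out to keep it simple):

```python
# Connection-setup overhead for a page of 1 HTML file + 45 images,
# with and without a persistent connection. Numbers are illustrative.

def total_setup(n_requests, rtt, keep_alive):
    """Handshake cost only: 1.5 RTT per connection opened."""
    connections = 1 if keep_alive else n_requests
    return connections * 1.5 * rtt

rtt = 0.1  # assumed
print(f"without keep-alive: {total_setup(46, rtt, False):.2f} s")
print(f"with keep-alive:    {total_setup(46, rtt, True):.2f} s")
```

That's the saving the white paper attributes to caching, when a compliant HTTP 1.1 server gets you most of it for free.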
Finally, the biggest cause of delay at the user end is the on-ramp speed. At 56 kbps, a 100 KB page takes about 15 seconds just to transfer - I am adding up the page plus the 45 images to reach 100 KB - and it doesn't matter how fast the data travels on the backbone. Against that, an extra 3-4 seconds doesn't seem so bad.
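The arithmetic, for the record (raw serialization time only - real modems add protocol overhead on top, so measured times run a bit longer):

```python
# Time to push a payload through the last-mile link, ignoring
# latency and modem/protocol overhead.

def transfer_seconds(size_kb, link_kbps):
    """Seconds to serialize size_kb kilobytes onto a link_kbps link."""
    bits = size_kb * 1024 * 8
    return bits / (link_kbps * 1000)

print(f"{transfer_seconds(100, 56):.1f} s")  # ~14.6 s for 100 KB at 56 kbps
```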
Caching is good for many reasons, but none of these. Akamai does a better job of deploying content close to the user, and standard proxy servers already offer caching.
So: technical errors and a tendency to exaggerate in the white paper. I suppose it's just PR.
BTW, who gives a hoot about those CNN users in London...
Regards, Dinesh