To: Frank A. Coluccio who wrote (630) 12/4/1999 3:37:00 PM From: Jay Lowe
>> such algorithmic horsepower would be working on a billion users' preferences

Yep ... well, relative to actually shipping the data, the processing time and storage required on the client side is epsilonic. Consider the adaptive anticipatory paging scheme mentioned above. That would take <<1% of CPU and disk to implement as client-side PC code, and driving a 56K modem 80% of the time instead of 2% of the time moves 40x the bytes ... and if most of those bytes turn out to be pages you actually want, average browsing speed goes up by something like 1000%. (A rough sketch of such a prefetcher is at the end of this post.)

Now, does the ISP freak at your 4000% increase in demand? Today, he does. Tomorrow, if his headend and your client are in conspiracy, maybe he doesn't, since he's ahead of you, using lower QoS to pre-fetch. Does the backbone freak? Not according to the fiber-is-free bandwidth argument. The cost is in the outer edge and the last mile ... and the user's wait ... the ultimate cost.

I haven't read Gilder on storewidth ... link? My intuition, vis-a-vis PCs, is that the intelligence cost is insignificant. A very generous intelligent pre-fetch cache might use 2MB of disk ... whereas my IE5 demand-paged browser cache is currently set to 20MB ... see the difference? And on my 20GB drive both are in the noise.

>> not having these capabilities embedded at some point be seen as a competitive disadvantage?

Suppose ATHM builds such intelligent pre-fetch into its client side or head end ... suppose AOL builds it into their client ... in this context, whoever doesn't is a major loser. As I mentioned, I like the idea that ATHM has a structural edge here ... technically and organizationally. On the other hand, AOL has a big attitudinal edge ... this is the sort of thing they go after. However, because of their dependence on the least-cost model, they probably can't afford to trade increased load for faster response.

=======================================================

Uh, oh ... cool idea coming ... HTTP 4 Feature Request!

A page, when accessed, tells the client how to anticipate pages which may not be explicitly linked, or how to prefer pages within the set of all those linked from the given page. I.e., the page explicitly declares its predictive locality. (One imaginable wire format is sketched below, after the prefetcher.)

"Hi, I am the FCTF Index Page you requested. As you look through me you will find whatever links you happen to find. What I know, as the server-side process, is that you are probably going to want URL1, URL2, and URL3 next, either because I'm designed that way or because previous traffic suggests this is true. So fold that into what you, as the client-side process, know, ok?"

All this is TOO obvious ... it must already be in the works. Anyone know about this? HTTP gurus?
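
To make the anticipatory-paging half concrete, here's a minimal sketch in Python (all class and method names are mine ... no shipping browser exposes anything like this): a predictor hands the client ranked URL guesses, and the client burns idle modem time pulling the best guess into a small, bounded cache ... the ~2MB budget from above.

    # Sketch of client-side adaptive anticipatory paging: while the link
    # is idle, fetch the most probable next page into a small cache.
    # Hypothetical design: names and structure are illustrative only.
    import heapq
    import urllib.request

    CACHE_BUDGET = 2 * 1024 * 1024   # the ~2MB prefetch cache from above

    class Prefetcher:
        def __init__(self):
            self.queue = []      # max-heap of (-probability, url)
            self.cache = {}      # url -> page bytes, oldest first
            self.cache_bytes = 0

        def predict(self, url, probability):
            # Record a guess that the user will want `url` next.
            heapq.heappush(self.queue, (-probability, url))

        def link_is_idle(self):
            # Stand-in for "is the 56K modem doing nothing right now?"
            # A real client would ask the network stack.
            return True

        def run_once(self):
            # Fetch the single most likely page, if the link is idle.
            if not self.queue or not self.link_is_idle():
                return
            _, url = heapq.heappop(self.queue)
            if url in self.cache:
                return
            try:
                body = urllib.request.urlopen(url, timeout=10).read()
            except OSError:
                return           # prefetch is strictly best-effort
            if len(body) > CACHE_BUDGET:
                return
            # Evict oldest entries until the new page fits the budget.
            while self.cache_bytes + len(body) > CACHE_BUDGET:
                old_url = next(iter(self.cache))
                self.cache_bytes -= len(self.cache.pop(old_url))
            self.cache[url] = body
            self.cache_bytes += len(body)

The point is how little machinery this is ... the hard part is the predictor feeding predict(), not the plumbing.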
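
And to make the feature request concrete: one imaginable wire format is a response header listing the server's guesses with probabilities. The X-Predicted-Next name and its syntax are invented here ... nothing like it is in HTTP/1.1 ... it just shows the shape of the declaration, and how a client might fold it into what it already knows.

    # Invented header carrying the page's "predictive locality":
    #
    #   X-Predicted-Next: /url1;p=0.60, /url2;p=0.25, /url3;p=0.10
    #
    # Header name and syntax are hypothetical, not part of any HTTP spec.

    def parse_predicted_next(header_value):
        # Parse the header into (url, probability) pairs.
        hints = []
        for item in header_value.split(","):
            url, _, p = item.strip().partition(";p=")
            hints.append((url, float(p) if p else 0.0))
        return hints

    def fold_hints(prefetcher, hints, local_history):
        # Blend the server's declared probabilities with whatever the
        # client's own history (a url -> probability dict) says ... the
        # "fold that into what you know" step. Uses the Prefetcher above.
        for url, server_p in hints:
            client_p = local_history.get(url, 0.0)
            prefetcher.predict(url, (server_p + client_p) / 2)

    # e.g. fold_hints(Prefetcher(),
    #                 parse_predicted_next("/url1;p=0.6, /url2;p=0.25"),
    #                 {"/url1": 0.9})

Averaging is just one dumb way to combine the two signals ... the point is that the client, not the server, makes the final call about what to prefetch.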