Technology Stocks : Frank Coluccio Technology Forum - ASAP

To: JayPC who wrote (625)12/4/1999 1:43:00 PM
From: Jay Lowe  Read Replies (1) of 1782
 
>> Would type of browser congest the net too much?

What you're talking about is called anticipatory paging ... or anticipatory caching in the pseudo-new net jargon.

See:
Operating System Concepts, Fifth Edition, by Abraham Silberschatz and Peter Baer Galvin, Addison-Wesley, 1998
Or more briefly:
cs.uiowa.edu
moscow.cityu.edu.hk
cogs.susx.ac.uk

From scis.nova.edu

Anticipatory Paging
==============================================================================
o Predict and preload pages in advance
o Spatial locality can be used to load clusters of virtual pages
o Advantages
- Total process run time is minimized significantly
- Accurate low overhead guesses can be made in many cases based on locality
- As hardware becomes more economical, the effects of a bad decision are less serious
o Disadvantages
- Wasted storage when you guess wrong
- How long do we keep them there before we give up on them being used?
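To make the contrast concrete, here is a minimal sketch of demand vs. anticipatory paging against a toy backing store addressed by page number. All names (`PagedStore`, `cluster_size`) are hypothetical, and the "I/O" is just set membership:

```python
class PagedStore:
    def __init__(self, cluster_size=4):
        self.cluster_size = cluster_size  # pages preloaded around a fault
        self.resident = set()             # pages currently in memory
        self.faults = 0

    def _load(self, page):
        self.resident.add(page)           # stand-in for real disk I/O

    def demand_read(self, page):
        # Lazy: load only on a fault.
        if page not in self.resident:
            self.faults += 1
            self._load(page)

    def anticipatory_read(self, page):
        # Eager: on a fault, also preload a cluster of spatially
        # adjacent pages, betting on locality.
        if page not in self.resident:
            self.faults += 1
            for p in range(page, page + self.cluster_size):
                self._load(p)

demand = PagedStore()
eager = PagedStore()
for p in range(16):          # a sequential scan: ideal spatial locality
    demand.demand_read(p)
    eager.anticipatory_read(p)
print(demand.faults, eager.faults)   # prints "16 4"
```

On a sequential scan the eager reader takes one fault per cluster; with a random access pattern the preloaded neighbors would mostly be wasted, which is exactly the trade-off the slide describes.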

In the web URL context, "spatial locality" is very well-defined and easy to process. Locality instabilities are also easy to predict from typical browsing behavior ... pages could be arbitrarily aged out of the working set when the user clicks past them ... they are replaced or not on a LRU basis depending on the cache size.
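The LRU aging described above can be sketched in a few lines; this is an illustrative toy, not any browser's actual cache, and `LRUPageCache` is a hypothetical name:

```python
from collections import OrderedDict

class LRUPageCache:
    """Fixed-size page cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()        # url -> content, oldest first

    def get(self, url):
        if url in self.pages:
            self.pages.move_to_end(url)   # mark as most recently used
            return self.pages[url]
        return None

    def put(self, url, content):
        if url in self.pages:
            self.pages.move_to_end(url)
        self.pages[url] = content
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # age out the LRU page

cache = LRUPageCache(capacity=2)
cache.put("/a", "page A")
cache.put("/b", "page B")
cache.get("/a")               # touch /a so /b becomes least recent
cache.put("/c", "page C")     # over capacity: /b is aged out
```

Prefetched pages would enter the same structure; if the user never clicks through to them, they simply drift to the LRU end and are replaced.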

One can easily imagine a combination of heuristics and learning which could keep a local cache full of spatially local pages using a predefined set of resources (local disk and bandwidth). Imagine keeping a digraph of URLs on the client side ... a structure of URLs with traversal probabilities. The digraph is continuously and incrementally learned based on experience. From any point on the graph, one could look-ahead traverse the tree of most probable pages and pre-fill the cache subject to the resource constraints. Pages could be marked in the cache according to the strategy used to access them.
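The digraph idea above can be sketched directly: count observed click transitions, turn counts into traversal probabilities, and greedily expand the most probable pages until a resource budget is spent. Everything here (`ClickGraph`, the greedy expansion, the page-count budget standing in for disk and bandwidth) is a hypothetical illustration:

```python
from collections import defaultdict

class ClickGraph:
    """Directed graph of URL -> URL click counts, learned incrementally."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, src, dst):
        # Incrementally learn from one observed navigation.
        self.counts[src][dst] += 1

    def probability(self, src, dst):
        total = sum(self.counts[src].values())
        return self.counts[src][dst] / total if total else 0.0

    def prefetch_plan(self, start, budget):
        # Greedy look-ahead: expand the most probable reachable pages
        # until the page budget (disk/bandwidth stand-in) is spent.
        frontier = [(1.0, start)]
        plan, seen = [], {start}
        while frontier and len(plan) < budget:
            frontier.sort(reverse=True)
            prob, url = frontier.pop(0)
            for nxt in self.counts[url]:
                if nxt not in seen:
                    seen.add(nxt)
                    p = prob * self.probability(url, nxt)
                    plan.append((nxt, p))
                    frontier.append((p, nxt))
        plan.sort(key=lambda t: -t[1])
        return [url for url, _ in plan[:budget]]

g = ClickGraph()
for _ in range(3):
    g.observe("/home", "/news")     # user usually goes home -> news
g.observe("/home", "/sports")
plan = g.prefetch_plan("/home", budget=2)   # ["/news", "/sports"]
```

A real implementation would decay old counts so the graph tracks shifting interests, but the incremental-learning shape is the same.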

This would be the L2 paging service, the L1 service being simple demand paging. Additional intelligence is easily available by watching the performance of L1 and L2 ... so an L3 (or "policy level") service could adjust the L2 parameters based on its success in pre-fetching. Various policies might be imagined ... "I see the Java VM process is starting (DLL or COM load) ... the user will not be driving for a while ... so shift the L1/L2 cache allocation toward preference of demand paging" or "I see more than X% TCP/IP traffic without HTTP requests ... I am being bypassed ... reduce my bandwidth goal by Y%".
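A toy version of such an L3 policy: watch how often prefetched (L2) pages are actually hit, and shift the cache allocation accordingly. The class name, thresholds, and 10% step size are all invented for illustration:

```python
class PrefetchPolicy:
    """Toy L3 policy: grow or shrink the prefetch (L2) share of the
    cache based on how often prefetched pages were actually used."""

    def __init__(self, prefetch_share=0.5):
        self.prefetch_share = prefetch_share  # fraction of cache for L2
        self.prefetched = 0
        self.prefetch_hits = 0

    def record(self, was_prefetched, was_hit):
        # Feed the policy one cache-access observation.
        if was_prefetched:
            self.prefetched += 1
            if was_hit:
                self.prefetch_hits += 1

    def adjust(self):
        if not self.prefetched:
            return self.prefetch_share
        hit_rate = self.prefetch_hits / self.prefetched
        # Shift allocation toward whichever level is earning its keep.
        if hit_rate > 0.5:
            self.prefetch_share = min(0.9, self.prefetch_share + 0.1)
        else:
            self.prefetch_share = max(0.1, self.prefetch_share - 0.1)
        return self.prefetch_share

policy = PrefetchPolicy()
for _ in range(10):
    policy.record(was_prefetched=True, was_hit=True)
policy.adjust()   # guesses are paying off: grow the L2 share toward 0.6
```

The event-driven rules quoted above ("Java VM is starting", "X% non-HTTP traffic") would simply be additional inputs to `record`-style observations.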

In the L3 policy arena, heuristics and experimentation rule.

Paging strategy strongly shapes a node's traffic behavior. "Lazy" demand paging has very bursty behavior and is fair among multiple requesters ... it prioritizes minimized I/O over total process execution speed. It has no inherent QoS policy issues. Anticipatory paging is more aggressive ... it seeks to convert a more continuous I/O demand pattern into a shorter process span ... and carries intrinsic QoS policy implications.

There's an interesting commercial implication here. ATHM could deploy anticipatory paging which scales to segment demand since they know their architecture and can know demand incestuously. AOL and others cannot. Hence, ATHM can deploy a "smart browsing" experience while containing the QoS implications ... and can scale the smarts as the resource balance shifts. In fact, they can even migrate the cache from the client to the head-end ... they already do demand paging there ... they could add proxy anticipatory caching to the headend box. This would be an interesting area for them to look into ... another unique advantage of their architecture.

I'll betcha there are already browser accelerators which do this sort of thing, and that the functionality will migrate toward the core. The Akamai and Sandpiper guys live this sort of thing ... this is the chapter on OS fundamentals that they have taken to IPO. ;-)