Technology Stocks : All About Sun Microsystems

To: Win Smith who wrote (48965), 5/13/2002 12:33:30 PM
From: rudedog  Read Replies (3) of 64865
 
Win - For sure the segmented memory model is an order of magnitude faster than disk. The reason it can be an order of magnitude slower than a flat memory model is not that the base memory itself is that much slower, but that the context switch needed to hit the page table forces the CPU to flush its I&D caches - and, because of the size of the context, often L2 as well - which can increase memory access times by a factor of 16 or more. So even if the raw cost of going to RAM is only twice that of the flat model, in practice the shift out of application space, plus the cache flush, makes that a moot point.
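The arithmetic behind that claim can be sketched with a toy average-access-time model. All the cycle counts and the hit rate below are illustrative assumptions, not measurements from any real Sun box:

```python
# Illustrative model of why a cache flush dominates raw RAM latency.
# Cycle counts and hit rate are assumed round numbers, not measurements.

L1_HIT = 2       # cycles for a cache hit (assumed)
RAM = 32         # cycles for a trip to RAM (assumed)
WARM_HIT_RATE = 0.95  # typical warm-cache hit rate (assumed)

def effective_latency(hit_rate):
    """Average memory access time for a given cache hit rate."""
    return hit_rate * L1_HIT + (1 - hit_rate) * RAM

warm = effective_latency(WARM_HIT_RATE)  # caches warm
cold = effective_latency(0.0)            # caches just flushed: every access misses

print(round(cold / warm, 1))  # -> 9.1: the cold path is ~9x slower in this model
```

The point of the sketch is that the slowdown comes from the hit rate collapsing after the flush, not from RAM itself getting slower - push the assumed numbers around and the ratio lands anywhere in the "factor of 16 or more" neighborhood.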

On the disk front, if the array controllers support duplexed striping across controllers, the locality of the data has less impact on the subsystem's ability to deliver low effective latency.
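A minimal sketch of why duplexing helps, assuming each stripe block is mirrored on a second controller and the driver dispatches to whichever of the pair is less busy (the pairing scheme and names here are illustrative, not any particular array's firmware):

```python
# Sketch of duplexed striping: each stripe block lives on two controllers,
# so a read can be sent to the less-loaded one regardless of data locality.
# The mirror-pairing policy is an assumption for illustration.

NUM_CONTROLLERS = 4

def mirror_pair(block):
    """Primary and duplexed controller for a given stripe block."""
    primary = block % NUM_CONTROLLERS
    mirror = (primary + NUM_CONTROLLERS // 2) % NUM_CONTROLLERS
    return primary, mirror

def pick_controller(block, queue_depth):
    """Dispatch the read to the less-loaded controller of the pair."""
    a, b = mirror_pair(block)
    return a if queue_depth[a] <= queue_depth[b] else b

# Block 5 maps to controllers (1, 3); controller 1 has the shorter queue.
print(pick_controller(5, {0: 3, 1: 1, 2: 0, 3: 2}))  # -> 1
```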

But there is another arcane point to consider - on a single LRU fault, the subsystem cannot fetch just one page, since the effective block size of the stripe is much larger than the 2K LRU page. Asynchronous drivers therefore try to aggregate fetches to avoid pulling data that gets thrown away, but that does not always pan out.
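The aggregation idea can be sketched as coalescing faulting 2K pages into whole stripe-block reads - the 64K stripe size below is an assumed figure, and real drivers do this with heuristics far beyond this toy:

```python
# Sketch of fetch aggregation: map faulting 2K pages onto stripe blocks so
# the driver issues one large read per block instead of one read per page.
# STRIPE_SIZE is an assumed value for illustration.

PAGE_SIZE = 2 * 1024       # 2K LRU page
STRIPE_SIZE = 64 * 1024    # effective block size of the stripe (assumed)
PAGES_PER_STRIPE = STRIPE_SIZE // PAGE_SIZE  # 32 pages per block

def aggregate(page_numbers):
    """Coalesce faulting page numbers into the stripe blocks to read."""
    return sorted({p // PAGES_PER_STRIPE for p in page_numbers})

# Three faults in stripe block 0 and one in block 2 become two reads, not four.
print(aggregate([1, 5, 30, 70]))  # -> [0, 2]
```

The downside the post mentions shows up when the faults are scattered: each 64K read still drags in 31 pages that may simply be thrown away.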