Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Boplicity who wrote (46149), 6/27/2000 1:25:00 AM
From: Tenchusatsu
 
Greg, <they moved L2 on to the brick we call pent III now and I doubt INTC is going to let IBM move on to their daughter card anytime soon...>

It doesn't work that way. IBM's "memory compression" trick requires a large cache on the motherboard. This cache would function as a sort of "L3 cache," and it would not replace the L1 and L2 caches on the Pentium III (or Xeon).

Furthermore, I don't think this L3 cache will necessarily reduce memory-bandwidth requirements. Rather, I believe it will serve as a kind of "workspace" for compression and decompression. For example, data is written from the processor to the L3 cache, gets compressed, and is then written out to DRAM. Or data is read from DRAM, gets decompressed in the L3 cache, and is then sent to the processor.
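The write and read paths described above can be sketched roughly as follows. This is a hypothetical illustration, not IBM's actual design: the names (write_block, read_block, the use of zlib as a stand-in compressor) are my own, and real hardware would do this transparently in the memory controller.

```python
# Hypothetical sketch of the compressed-memory data path.
# zlib stands in for whatever hardware compressor IBM actually uses.
import zlib

DRAM = {}  # "physical memory" holding compressed blocks, keyed by address


def write_block(addr: int, data: bytes) -> None:
    """Processor write: data is staged in the L3 workspace,
    compressed there, then stored in DRAM in compressed form."""
    l3_workspace = data                      # staged in the on-board cache
    DRAM[addr] = zlib.compress(l3_workspace)


def read_block(addr: int) -> bytes:
    """Processor read: compressed data comes out of DRAM, is
    decompressed in the L3 workspace, then handed to the processor."""
    compressed = DRAM[addr]
    l3_workspace = zlib.decompress(compressed)  # the extra latency step
    return l3_workspace


block = b"A" * 4096  # highly compressible data
write_block(0x1000, block)
assert read_block(0x1000) == block
assert len(DRAM[0x1000]) < len(block)  # the "perceived" capacity gain
```

The point of the sketch is that every read passes through a decompression step, which is where the extra latency discussed below comes from.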

The odd thing about this memory compression trick is that it will likely increase the latency of reads from memory, because of the extra decompression step after data is read from DRAM. But I think IBM is betting that the "perceived 2x increase in DRAM" will reduce accesses to the hard disk, and those accesses are orders of magnitude slower than DRAM accesses. So it's a trade-off: increase average access times to DRAM, but decrease accesses to storage devices.
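A back-of-the-envelope model shows why the trade-off can pay off. All the numbers here are made up for illustration; only the rough ratios (disk being millions of nanoseconds, DRAM hundreds) matter.

```python
# Illustrative average-access-time model with assumed latencies.
DRAM_NS = 100          # assumed uncompressed DRAM access time
DECOMP_NS = 60         # assumed added decompression latency
DISK_NS = 5_000_000    # disk access: orders of magnitude slower


def avg_access_ns(disk_rate: float, dram_ns: float) -> float:
    """Average access time given the fraction of memory accesses
    that fall through to disk."""
    return (1 - disk_rate) * dram_ns + disk_rate * DISK_NS


# Baseline: 1 access in 1000 misses memory and hits the disk.
baseline = avg_access_ns(0.001, DRAM_NS)
# With compression: DRAM is slower per access, but the "2x" perceived
# capacity (assumed here to halve disk accesses) cuts the miss rate.
compressed = avg_access_ns(0.0005, DRAM_NS + DECOMP_NS)

assert compressed < baseline  # slower DRAM, yet faster on average
```

Under these assumed numbers the average access time drops roughly in half, because the disk term dominates the total even at a 0.1% miss rate.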

Tenchusatsu