Politics : Formerly About Advanced Micro Devices

To: Joe NYC who wrote (101291), 3/31/2000 12:07:00 PM
From: Scumbria
 
Joe,

The question is: what is the improvement in the hit ratio in going to 192K of L1 + L2? Doesn't the CPU have to (on an L1 miss) look up the address in the L2 before going to main memory?

The way a victim cache works is that it allocates a new line only when one is cast out from the L1. This guarantees (at least temporary) exclusivity from the L1. A victim L2 cache behaves as a slower extension of the L1.

As such, the L1/L2 hit rate can be approximated from the sum of the L1 and L2 cache sizes, which makes a victim L2 a very effective solution. The only reason victim L2s have not been used in the past is that the L2 was off-chip and had no visibility into the L1. Victim caches can only be used in designs with an integrated L2.
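
To make the mechanism concrete, here is a minimal C sketch of an exclusive (victim) L2. It assumes two tiny, fully associative, LRU-replaced caches and a made-up address trace; the sizes, helper names, and numbers are hypothetical illustrations of the allocate-on-castout rule, not a description of any real CPU.

/*
 * Minimal sketch of an exclusive ("victim") L2.  Everything here is
 * hypothetical: the cache sizes, the helper names, and the trace.
 *
 * Build: cc -o victim victim.c
 */
#include <stdio.h>
#include <string.h>

#define L1_LINES 4            /* hypothetical 4-line L1        */
#define L2_LINES 8            /* hypothetical 8-line victim L2 */

typedef struct {
    long lines[16];           /* lines[0] is most recently used */
    int  n, cap;
} cache_t;

/* If addr is present, remove it and return 1; the caller decides
 * where the line goes next.  Removing on a hit is what keeps the
 * victim L2 exclusive of the L1. */
static int extract(cache_t *c, long addr)
{
    for (int i = 0; i < c->n; i++) {
        if (c->lines[i] == addr) {
            memmove(&c->lines[i], &c->lines[i + 1],
                    (size_t)(c->n - 1 - i) * sizeof(long));
            c->n--;
            return 1;
        }
    }
    return 0;
}

/* Insert addr at the MRU position; return the LRU line that was cast
 * out if the cache was full, otherwise -1. */
static long insert(cache_t *c, long addr)
{
    long castout = -1;
    if (c->n == c->cap)
        castout = c->lines[--c->n];
    memmove(&c->lines[1], &c->lines[0], (size_t)c->n * sizeof(long));
    c->lines[0] = addr;
    c->n++;
    return castout;
}

int main(void)
{
    cache_t l1 = { .cap = L1_LINES }, l2 = { .cap = L2_LINES };
    /* Hypothetical trace: 10 distinct lines touched twice.  This
     * working set does not fit in the L1 alone but does fit in the
     * combined L1 + L2 (12 lines). */
    long trace[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
                     1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    int refs = (int)(sizeof trace / sizeof trace[0]);
    int hits = 0;

    for (int i = 0; i < refs; i++) {
        /* Probe the L1 first, then the victim L2. */
        if (extract(&l1, trace[i]) || extract(&l2, trace[i]))
            hits++;
        /* The referenced line always lands in the L1.  Whatever the
         * L1 casts out is installed in the L2: the L2 is filled only
         * by L1 castouts, never directly from memory. */
        long castout = insert(&l1, trace[i]);
        if (castout != -1)
            insert(&l2, castout);
    }
    printf("combined L1+L2 hits: %d of %d references\n", hits, refs);
    return 0;
}

With that hypothetical trace, the 4-line L1 alone would hit on nothing in the second pass, while the exclusive L1+L2 pair (12 lines combined) hits on all of it, which is the sense in which the hit rate tracks the sum of the two cache sizes.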

Answering your other question: there is no reason why you have to delay the DRAM access pending an L2 lookup (although you may want to).
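
As a rough illustration of what overlapping buys, here is a toy average-latency comparison. The cycle counts and the L2 hit fraction are hypothetical round numbers, not measurements of any real part.

/*
 * Toy comparison of serial vs. speculative DRAM access on an L1
 * miss.  All numbers are hypothetical placeholders.
 */
#include <stdio.h>

int main(void)
{
    double l2_lookup = 10.0;   /* cycles to probe the on-chip L2           */
    double dram      = 100.0;  /* cycles for a DRAM access                 */
    double l2_hit    = 0.6;    /* fraction of L1 misses that hit in the L2 */

    /* Serial: the DRAM access starts only after the L2 lookup misses. */
    double serial = l2_hit * l2_lookup
                  + (1.0 - l2_hit) * (l2_lookup + dram);

    /* Speculative: the DRAM access is launched in parallel with the
     * L2 lookup and its result is simply dropped on an L2 hit.  The
     * price is extra memory traffic, which is the "although you may
     * want to" caveat above. */
    double speculative = l2_hit * l2_lookup
                       + (1.0 - l2_hit) * dram;

    printf("average L1-miss penalty, serial:      %.1f cycles\n", serial);
    printf("average L1-miss penalty, speculative: %.1f cycles\n", speculative);
    return 0;
}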

Scumbria