Politics : Formerly About Advanced Micro Devices

To: pgerassi who wrote (107345)4/22/2000 10:06:00 AM
From: Scumbria  Read Replies (1) of 1573124
 
Pete,

An LFU, fully associative 64K data cache will have a higher average hit rate than a direct-mapped 512K data cache with simple replacement when dealing with programs having a working set of 1M or more. It will also take less die area. This is where I think most current designers are falling down.
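The hit-rate tradeoff here is easy to play with in a toy simulator. This is a minimal sketch, not a hardware model: it works on line addresses rather than byte addresses, and uses a naive scan for LFU eviction (the `simulate` helper and the traces are assumptions for illustration, not anything from the post):

```python
def simulate(accesses, num_lines, policy):
    """Return the hit rate of a toy cache over a trace of line addresses.

    policy: 'direct' -> direct-mapped (line maps to address mod num_lines)
            'lfu'    -> fully associative, evict the least-frequently-used line
    """
    hits = 0
    if policy == 'direct':
        slots = {}                      # slot index -> resident line address
        for a in accesses:
            idx = a % num_lines
            if slots.get(idx) == a:
                hits += 1
            else:
                slots[idx] = a          # simple replacement: just overwrite
    else:
        freq = {}                       # line address -> lifetime access count
        resident = set()
        for a in accesses:
            freq[a] = freq.get(a, 0) + 1
            if a in resident:
                hits += 1
            else:
                if len(resident) >= num_lines:
                    # evict the least-frequently-used resident line
                    resident.remove(min(resident, key=freq.get))
                resident.add(a)
    return hits / len(accesses)

# Two lines that collide in a direct-mapped cache thrash forever,
# while a fully associative cache of any capacity >= 2 keeps them both.
trace = [0, 8, 0, 8, 0, 8]
print(simulate(trace, 8, 'direct'))   # lines 0 and 8 share slot 0: every access misses
print(simulate(trace, 8, 'lfu'))      # both lines stay resident after the first pass
```

The conflict-miss pathology in the trace above is exactly why a smaller but fully associative cache can beat a larger direct-mapped one once the working set outgrows the cache.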

The problem with implementing highly associative caches is the impact on clock speed, power, and area. A fully associative 64K cache with a 64-byte line size would require 1024 parallel comparators for tag comparison. This would cause tremendous loading on the address bus (with serious timing consequences), burn a lot of power, and occupy a lot of area.

There are very good reasons why cache associativities are usually limited to 4-way.
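The comparator counts fall straight out of the cache geometry. A back-of-the-envelope helper (an assumed function name; it counts tag comparators only and ignores the tag-width and wiring costs that matter in real silicon):

```python
def tag_comparators(cache_bytes, line_bytes, ways=None):
    """Number of parallel tag comparators one lookup needs.

    ways=None means fully associative: every line's tag is compared at once.
    For an N-way set-associative cache, only the N ways of one set are compared.
    """
    lines = cache_bytes // line_bytes
    return lines if ways is None else ways

print(tag_comparators(64 * 1024, 64))       # fully associative 64K / 64B lines: 1024 comparators
print(tag_comparators(512 * 1024, 64, 4))   # 4-way 512K: only 4 comparators per lookup
print(tag_comparators(512 * 1024, 64, 1))   # direct-mapped: a single comparator
```

That 1024-vs-4 gap is the reason the 4-way limit mentioned above shows up so often in practice.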

Scumbria