To: Tenchusatsu who wrote (39433), 10/15/1998 8:19:00 PM
From: Scumbria
 
AMD went to a simpler 2-way set-associative design for its L1 cache, instead of the 4-way implementation that Intel chose for its L1 cache. It's kind of a compromise between a very fast direct-mapped cache (which is prone to cache-thrashing) and a more sophisticated, but slower 4-way set-associative cache.

Ten,

There are certain problems associated with building a 128K 2-way cache. The page size in Windows is 4K, and the implication of the large cache is that several bits of the cache index will have to come out of the TLB: a 128K 2-way cache has 64K per way, so the index plus line offset spans 16 address bits, but only the low 12 bits are untranslated. This will require either 1) several extra muxing stages at the data output to select from all the possible indexes, or 2) holding off on the data access until the physical address is available from the TLB.
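
To put numbers on that, here's a minimal sketch of mine (not from the post), assuming 32-byte lines and 4K pages alongside the 128K/2-way figures above; the line size is an assumption, not something stated in the thread.

#include <stdio.h>

/* Integer log2 for powers of two. */
static int log2i(unsigned long x) {
    int n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    /* Assumed parameters -- the post only gives 128K and 2-way;
       the 32-byte line size and 4K page are illustrative. */
    unsigned long cache_bytes = 128 * 1024;
    unsigned long line_bytes  = 32;
    unsigned long page_bytes  = 4 * 1024;
    int ways = 2;

    unsigned long way_bytes = cache_bytes / ways;      /* 64K per way */
    int offset_bits = log2i(line_bytes);               /* 5           */
    int index_bits  = log2i(way_bytes / line_bytes);   /* 11          */
    int page_bits   = log2i(page_bytes);               /* 12          */

    /* Index+offset span; anything above the page offset needs translation. */
    int span = offset_bits + index_bits;               /* 16          */
    int translated_index_bits = span > page_bits ? span - page_bits : 0;

    printf("per way: %luK, index bits: %d, offset bits: %d\n",
           way_bytes / 1024, index_bits, offset_bits);
    printf("index bits above the 4K page offset: %d "
           "(=> %d candidate sets to mux between or wait on the TLB for)\n",
           translated_index_bits, 1 << translated_index_bits);
    return 0;
}

With these numbers, four index bits depend on the translation, so option 1 amounts to reading 16 candidate sets and late-selecting among them, and option 2 amounts to serializing the TLB lookup ahead of the data array access.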

Either one carries a performance penalty, which may or may not be smaller than the penalty associated with the large parallel tag compare in a higher-associativity cache.
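
As a rough illustration of that trade-off (again a sketch with the same assumed 128K size, 32-byte lines, and 4K pages), raising the associativity shrinks each way, which pulls index bits back under the page offset, but every added way is another tag compare running in parallel:

#include <stdio.h>

static int log2i(unsigned long x) { int n = 0; while (x > 1) { x >>= 1; n++; } return n; }

int main(void) {
    unsigned long cache_bytes = 128 * 1024;   /* size from the post      */
    unsigned long line_bytes  = 32;           /* assumed line size       */
    int page_bits = 12;                       /* 4K pages                */

    printf("ways  way-size  index-bits-from-TLB  parallel-tag-compares\n");
    for (int ways = 1; ways <= 8; ways <<= 1) {
        unsigned long way_bytes = cache_bytes / ways;
        int span = log2i(way_bytes);          /* index + offset bits     */
        int from_tlb = span > page_bits ? span - page_bits : 0;
        printf("%4d  %6luK  %19d  %21d\n",
               ways, way_bytes / 1024, from_tlb, ways);
    }
    return 0;
}

The table this prints is just the arithmetic behind the argument: going from 2-way to 4-way trades one fewer TLB-dependent index bit for twice as many tag comparators.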

The performance of a direct-mapped L1 cache would be a disaster. I'm not sure that a 2-way cache was a good idea either.
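
A toy simulation of mine (not from the thread, sizes and LRU policy assumed) shows the thrashing case: two hot addresses that land in the same set evict each other on every access in a direct-mapped cache, while a 2-way set holds both.

#include <stdio.h>

#define SETS 4096          /* 128K direct-mapped with 32-byte lines (assumed) */
#define LINE 32

/* Direct-mapped: one tag per set. 2-way: two tags per set plus an LRU bit.
   Tags start at 0; the test addresses below avoid tag 0, so no false hits. */
static unsigned long dm_tag[SETS];
static unsigned long tw_tag[SETS / 2][2];
static int tw_lru[SETS / 2];

static int access_dm(unsigned long addr) {
    unsigned long set = (addr / LINE) % SETS;
    unsigned long tag = addr / (LINE * SETS);
    if (dm_tag[set] == tag) return 1;           /* hit          */
    dm_tag[set] = tag;                          /* miss: refill */
    return 0;
}

static int access_tw(unsigned long addr) {
    unsigned long set = (addr / LINE) % (SETS / 2);
    unsigned long tag = addr / (LINE * (SETS / 2));
    for (int w = 0; w < 2; w++)
        if (tw_tag[set][w] == tag) { tw_lru[set] = !w; return 1; }  /* hit */
    tw_tag[set][tw_lru[set]] = tag;             /* miss: fill the LRU way  */
    tw_lru[set] = !tw_lru[set];
    return 0;
}

int main(void) {
    /* Two addresses 128K apart: same set in both organizations. */
    unsigned long a = 0x20000, b = 0x40000;
    int dm_miss = 0, tw_miss = 0;

    for (int i = 0; i < 1000; i++) {
        dm_miss += !access_dm(i & 1 ? a : b);
        tw_miss += !access_tw(i & 1 ? a : b);
    }
    printf("direct-mapped misses: %d / 1000\n", dm_miss);  /* every access  */
    printf("2-way misses:         %d / 1000\n", tw_miss);  /* 2 cold misses */
    return 0;
}

The direct-mapped cache misses on all 1000 accesses of this pattern; the 2-way cache takes two cold misses and then hits forever, which is the whole argument for paying at least some associativity in an L1.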

Scumbria