Politics : Formerly About Advanced Micro Devices

To: Tenchusatsu who wrote (114260), 6/5/2000 2:23:00 AM
From: Scumbria
 
Ten,

1) T-bird has higher L2 associativity (16-way) than Coppermine (8-way). The benefit is slightly fewer L2 cache misses. The trade-off, however, is latency: higher associativity slows down the cache, which is why the longer latency is required.

There is no reason why higher associativity should slow down an on-die L2 cache. Tag comparison can begin as soon as the TLB lookup completes, data array access is independent of associativity, and the output mux select can be set up as soon as tag comparison completes.
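To make that concrete, here is a rough C sketch of a set-associative lookup. The set count, line size, and the l2_lookup routine itself are illustrative assumptions, not Athlon or Coppermine internals; the point is that every way's data array is read while its tag is compared, and the compare result only drives the final way-select mux, so the data-path latency does not have to grow with associativity.

/* Illustrative set-associative L2 lookup.  In hardware, every way's
 * data array is read while its tag is compared; the loop below is just
 * a software stand-in for that parallel operation. */
#include <stdbool.h>
#include <stdint.h>

#define WAYS       16      /* T-bird-like associativity (assumed)   */
#define SETS       256     /* hypothetical set count                */
#define LINE_BYTES 64      /* hypothetical line size                */

typedef struct {
    bool     valid[WAYS];
    uint32_t tag[WAYS];                 /* tag array, one entry per way */
    uint8_t  data[WAYS][LINE_BYTES];    /* data array, read in parallel */
} cache_set_t;

static cache_set_t l2[SETS];            /* zero-initialized: all invalid */

/* Returns a pointer to the hit line, or NULL on a miss. */
static const uint8_t *l2_lookup(uint32_t phys_addr)
{
    uint32_t set = (phys_addr / LINE_BYTES) % SETS;   /* index bits */
    uint32_t tag = (phys_addr / LINE_BYTES) / SETS;   /* tag bits   */
    const cache_set_t *s = &l2[set];

    /* The tag compares only steer the output mux; they do not gate the
     * data array reads, which proceed regardless of associativity. */
    for (int way = 0; way < WAYS; way++)
        if (s->valid[way] && s->tag[way] == tag)
            return s->data[way];        /* way-select mux output */

    return NULL;                        /* miss */
}

int main(void)
{
    return l2_lookup(0x1000) == NULL ? 0 : 1;  /* cold cache: expect a miss */
}

Going from 8 to 16 ways only widens the compare fan-in and the output mux; it does not add a serial step to the access.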

2) The 64-bit BSB width is surprising to me. With the "exclusive" L2 cache (a.k.a. Victim Cache), I figure T-bird will need the BSB bandwidth a LOT more than Coppermine because of all the swapping of data between L1 and L2. Maybe AMD will increase the BSB width on Mustang.

BSB (back-side bus) is an Intel term.

There will be much less swapping between the T-Bird caches than between the PIII caches, because the PIII L1 is only 1/4 the size of the T-Bird L1 and has a much higher miss rate.
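A back-of-the-envelope comparison, with invented miss rates (these are not measured numbers for either chip), shows the shape of the argument: even though an exclusive L2 moves two lines per L1 miss (the fill plus the evicted victim), a much lower L1 miss rate more than makes up for it.

/* Rough traffic estimate with invented miss rates -- purely illustrative. */
#include <stdio.h>

int main(void)
{
    const double line_bytes = 64.0;     /* assumed line size               */
    const double accesses   = 1e6;      /* arbitrary interval of accesses  */

    /* Invented miss rates; the only premise is that the 4x larger L1
     * misses noticeably less often. */
    const double miss_small_l1 = 0.08;  /* 32 KB PIII-class L1             */
    const double miss_large_l1 = 0.03;  /* 128 KB T-bird-class L1          */

    /* Exclusive (victim) L2: fill + victim move = 2 line transfers per miss.
     * Inclusive L2: roughly 1 line transfer per miss, ignoring writebacks. */
    double tbird_bytes = accesses * miss_large_l1 * 2.0 * line_bytes;
    double piii_bytes  = accesses * miss_small_l1 * 1.0 * line_bytes;

    printf("T-bird-style L1<->L2 traffic: %.0f bytes\n", tbird_bytes);
    printf("PIII-style   L1<->L2 traffic: %.0f bytes\n", piii_bytes);
    return 0;
}

With those (invented) numbers the exclusive hierarchy still moves fewer bytes between L1 and L2: the swap traffic scales with the L1 miss rate, and the big L1 keeps that rate low.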

T-Bird does not need as much bandwidth between caches as PIII does. The only really important beat of data is the first one, and a 64-bit bus is more than adequate to fill that need.
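A quick sketch of the latency side, with made-up cycle counts (not real T-bird numbers): with critical-word-first fills, the load that missed is satisfied by the first beat, and the bus width only changes how many trailing beats it takes to finish the line.

/* Critical-word-first fill over a 64-bit back-side bus; cycle numbers
 * are invented for illustration. */
#include <stdio.h>

int main(void)
{
    const int line_bytes      = 64;  /* assumed line size               */
    const int bus_bytes       = 8;   /* 64-bit bus, 8 bytes per beat    */
    const int cycles_to_beat1 = 11;  /* invented L2 hit latency, cycles */

    int beats            = line_bytes / bus_bytes;        /* 8 beats     */
    int load_use_cycles  = cycles_to_beat1;               /* word needed */
    int full_fill_cycles = cycles_to_beat1 + beats - 1;   /* whole line  */

    printf("beats per line fill   : %d\n", beats);
    printf("cycles to needed word : %d\n", load_use_cycles);
    printf("cycles to full line   : %d\n", full_fill_cycles);
    return 0;
}

Doubling the bus width would roughly halve the trailing beats, but it would not change the cycles to the needed word, which is what the pipeline is actually waiting for.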

Scumbria