Technology Stocks : Advanced Micro Devices - Moderated (AMD)

To: Scumbria who wrote (7161), 8/31/2000 5:31:56 PM
From: TimF
 
Scumbria, I'm interested in your take on this (from Paul DeMone) -

The L1 cache is a small part of the overall memory system in an MPU. In a chip like Wilma, the L1 serves primarily as a bandwidth-to-latency "transformer" to impedance-match the CPU core to the L2 cache and reduce the effective latency of L2 and memory accesses. The big performance loser is going off-chip to main memory, and the 256 KB L2 in Wilma is what is relevant to that, not the 8 KB dcache. An 8 KB cache is insignificant compared to the scale of the Wilma die and could easily have been made larger. I think the reason it isn't larger is that Intel wanted to hit a 2-cycle load-use penalty, and at the clock rate Wilma targets, a larger cache would be a speed path. An 8 KB dcache has a hit rate of around 92% and a 32 KB cache around 96%. A two-cycle 8 KB dcache beats a much larger three-cycle dcache for the vast majority of applications, given the rest of the Wilma memory system design.
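
To make that last point concrete, here is a back-of-the-envelope sketch (my own arithmetic, not DeMone's): take his hit rates at face value and assume, purely for illustration, a 10-cycle penalty for going to the L2 on an L1 miss (not a real Willamette figure; main-memory misses are ignored). The Python below computes the average load latency for each design.

# Back-of-the-envelope average load latency, using DeMone's hit rates.
# The 10-cycle L1-miss penalty is an assumed, illustrative number only.

def avg_load_latency(hit_cycles, hit_rate, miss_penalty_cycles):
    # Expected cycles per load: hit cost plus miss-rate-weighted miss penalty.
    return hit_cycles + (1.0 - hit_rate) * miss_penalty_cycles

small_fast = avg_load_latency(2, 0.92, 10)   # 2 + 0.08 * 10 = 2.8 cycles
large_slow = avg_load_latency(3, 0.96, 10)   # 3 + 0.04 * 10 = 3.4 cycles

print("2-cycle 8 KB dcache :", small_fast, "cycles per load on average")
print("3-cycle 32 KB dcache:", large_slow, "cycles per load on average")

With those assumptions the smaller, faster cache wins. The larger cache only pulls ahead if the L1-miss penalty exceeds the extra hit cycle divided by the difference in miss rates (here, more than 1 / 0.04 = 25 cycles), which is why the rest of the Wilma memory system design matters to his conclusion.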