To: mas_ who wrote (252810) 6/5/2008 12:27:41 PM
From: dougSF30
 
The same way intelligent prefetching hides latency to main memory on Core 2: recognize access patterns, and speculatively fetch anticipated data ahead of time.
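Purely for illustration (none of this code is from the post, and the function name, stride, and prefetch distance are all made up): the hardware prefetcher does this transparently, but a software analogue in C makes the idea concrete. When the access pattern is predictable, you can request data a few iterations ahead so it is already in cache by the time you need it.

#include <stddef.h>

/* Illustrative sketch only: walk an array with a fixed stride and ask the
 * hardware for the element we expect to need a few iterations from now.
 * stride must be nonzero; the distance of 8 strides ahead is an arbitrary
 * tuning parameter, not anything from the original post. */
long sum_strided(const long *data, size_t n, size_t stride)
{
    long sum = 0;
    for (size_t i = 0; i < n; i += stride) {
        size_t ahead = i + 8 * stride;
        if (ahead < n)
            __builtin_prefetch(&data[ahead], 0 /* read */, 3 /* keep in cache */);
        sum += data[i];
    }
    return sum;
}

A real hardware prefetcher does the equivalent on its own by spotting the stride in the miss stream, which is exactly why predictable access patterns rarely see main-memory latency.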

Clearly one could in theory construct some random access scheme, tuned with the appropriate parameters, to make a smaller, faster L2 + much slower L3 + faster main memory (Nehalem) perform worse than a larger, slower L2 + much slower main memory (Core 2).
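A hypothetical sketch of that kind of adversarial access pattern (the sizes, names, and approach are mine, not from the post): chase a randomly permuted chain of indices through a working set sized to fit a multi-MB Core 2 style L2 but overflow a 256 KB Nehalem-style per-core L2. Every load depends on the previous one, so no prefetcher can run ahead, and the run time is set by wherever the chain happens to live: big L2, slower L3, or main memory.

#include <stdlib.h>
#include <stddef.h>

/* Build a random single-cycle permutation over n slots using Sattolo's
 * algorithm, so chasing next[idx] visits every element in random order. */
size_t *make_random_cycle(size_t n)
{
    size_t *next = malloc(n * sizeof *next);
    if (!next)
        return NULL;
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* j < i guarantees one big cycle */
        size_t tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }
    return next;
}

/* Each load depends on the result of the previous one, so prefetching cannot
 * hide the latency; the value is returned so the loop is not optimized away. */
size_t chase(const size_t *next, size_t steps)
{
    size_t idx = 0;
    while (steps--)
        idx = next[idx];
    return idx;
}

With n chosen so the array is, say, 4 MB, this pattern would hit in a big Core 2 L2 but spill into L3 (or memory) on a 256 KB L2, which is the contrived case the paragraph above is describing.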

But good luck finding any application in which the benefit of the big (yet slower) L2 outweighs the benefit of Nehalem's hierarchy, overall.

You realize that Intel does, you know, simulate performance of a wide range of applications before they commit to a new cache design, right?