Politics : Formerly About Advanced Micro Devices


To: Scumbria who wrote (86165)1/9/2000 4:12:00 PM
From: Saturn V
 
Scumbria, <Ref - Cache size impact>

You are right that the law of diminishing returns sets in with cache size.

The rule of thumb I have used is that the incidence of cache misses varies inversely with the square root of the cache size. However, this is a "general rule," and the exact numbers will change with each benchmark.
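
To put rough numbers on that rule of thumb, here is a quick sketch in C. The 10% miss rate assumed for a 32KB baseline is an invented figure, used only to show the shape of the curve:

    /* Back-of-the-envelope version of the square-root rule of thumb above.
     * The 10% miss rate at 32KB is an assumed baseline, not measured data. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double base_kb = 32.0, base_miss = 0.10;   /* assumed baseline */
        const int sizes_kb[] = { 32, 64, 128, 256, 512, 1024 };

        for (int i = 0; i < 6; i++) {
            double miss = base_miss * sqrt(base_kb / sizes_kb[i]);
            printf("%5d KB cache -> ~%4.1f%% miss rate\n", sizes_kb[i], 100.0 * miss);
        }
        return 0;
    }

Each doubling of the cache trims the miss rate by only about 30% under this rule, while the extra cache costs roughly twice the die area, which is where the diminishing returns come from.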

Thus there will be an improvement in performance with cache size, but the exact amount will change with each specific application. Servers, I believe, typically show greater improvement than most "consumer type" applications, like word processing, etc., because servers have very large data sets, which leads to more frequent cache misses.
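
For illustration, a toy direct-mapped cache model (all sizes and the access pattern below are made up) shows the hit rate collapsing once the data set outgrows the cache:

    /* Toy direct-mapped cache model: a working set that fits in the cache hits
     * almost every time; one much larger than the cache mostly misses.
     * Cache size, working-set sizes, and the random access pattern are invented. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CACHE_LINES 8192                 /* pretend cache of 8K lines */

    static double hit_rate(int working_set_lines, int accesses)
    {
        int *tags = malloc(CACHE_LINES * sizeof *tags);
        int hits = 0;
        for (int i = 0; i < CACHE_LINES; i++)
            tags[i] = -1;                    /* cache starts empty */

        for (int i = 0; i < accesses; i++) {
            int line = rand() % working_set_lines;   /* random touch in the data set */
            int set  = line % CACHE_LINES;           /* direct-mapped index */
            if (tags[set] == line)
                hits++;
            else
                tags[set] = line;                    /* miss: fill the line */
        }
        free(tags);
        return (double)hits / accesses;
    }

    int main(void)
    {
        /* "Word processor"-sized vs. "server"-sized working sets. */
        printf("working set  2000 lines: %.1f%% hits\n", 100 * hit_rate(2000, 200000));
        printf("working set 50000 lines: %.1f%% hits\n", 100 * hit_rate(50000, 200000));
        return 0;
    }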



To: Scumbria who wrote (86165)1/9/2000 5:57:00 PM
From: Tenchusatsu
 
Scumbria, <I've seen data for Winstone which shows that performance improvements diminish rapidly after 256KB of cache. Going from 32KB to 256KB cache improves Winstone scores by about 50%. Going from 256KB to 512KB of cache only produces 5-10% improvement.>

That's probably why Intel chose 256K as the sweet spot for the P6 architecture. The Pentium Pro started with 256K. The Pentium II went to a half-speed off-die cache, so they had to increase the size to 512K. Now with Coppermine's full-speed on-die cache, Intel felt free to go back to the original 256K.

I think the big question on people's minds is how well Coppermine-128 (the Celerons running faster than 533, currently unreleased) compares to Coppermine-256, all else being equal. And then how Coppermine-256 compares to Coppermine-512, etc.

One thing is for sure: Intel's on-die cache technology has been highly successful. We're bound to see on-die caches in almost all of Intel's future processors (hint, hint), except for Merced. I'm sure AMD will also stick to on-die cache exclusively in the future, provided they can shake off the demons of the K6-III.

Tenchusatsu



To: Scumbria who wrote (86165)1/9/2000 9:12:00 PM
From: Charles R
 
Scumbria,

<Nevertheless, larger caches are inevitable. Improving compiler technology and instruction sets will make better use of caches in the future.>

I presume you are talking about prefetch here. Given how sensitive prefetch is to any given architecture/implementation, people do not seem to expect the OS/application folks to move there anytime soon.

I hear software guys saying that this is more for benchmarking than real-life applications. Do you disagree? Or are you referring to something other than prefetching?
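
For what it's worth, here is roughly what a software prefetch looks like from the programmer's side. This sketch assumes GCC's __builtin_prefetch intrinsic, and the prefetch distance of 16 elements is an arbitrary illustration value (that distance is exactly the architecture-sensitive knob I mean):

    /* Sketch of a software-prefetched loop.  The __builtin_prefetch intrinsic
     * and the 16-element distance are assumptions for illustration only; the
     * right distance depends on memory latency and the loop body. */
    #include <stddef.h>

    double sum_with_prefetch(const double *a, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&a[i + 16], 0, 1);  /* read, low temporal locality */
            sum += a[i];
        }
        return sum;
    }

Tune that distance for one chip and it can be useless, or even harmful, on the next, which is presumably why the OS/application folks are in no hurry.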

Chuck



To: Scumbria who wrote (86165)1/9/2000 11:33:00 PM
From: Process Boy
 
Scumbria - <The problem is that caches are only valuable for instructions/data that are used more than once, and not for the first time they are used. A certain percentage of the data does not fall into that category, and even an infinitely large cache will not improve hit rates beyond a certain value.

Nevertheless, larger caches are inevitable. Improving compiler technology and instruction sets will make better use of caches in the future.>

Thank you for your analysis.
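
As an aside, that first-touch point can be made concrete with a toy counter: even a cache that never evicts anything (effectively infinite) still misses once per distinct line. Every number below is invented for illustration.

    /* Toy counter for compulsory misses: an "infinite" cache that never evicts
     * still misses on the first touch of every distinct line.  The line count
     * and access pattern are invented for illustration. */
    #include <stdio.h>

    #define LINES    1024        /* distinct cache lines the program touches */
    #define ACCESSES 4096        /* total memory accesses */

    int main(void)
    {
        static int present[LINES];      /* once loaded, a line stays forever */
        int misses = 0;

        for (int i = 0; i < ACCESSES; i++) {
            int line = (i * 7) % LINES; /* arbitrary access pattern with reuse */
            if (!present[line]) {       /* first touch: compulsory miss */
                present[line] = 1;
                misses++;
            }
        }
        printf("%d misses in %d accesses (%.1f%%); no cache size can remove them\n",
               misses, ACCESSES, 100.0 * misses / ACCESSES);
        return 0;
    }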

PB