Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: pgerassi who wrote (215096), 10/26/2006 4:43:17 PM
From: eracer
 
Re: That isn't a problem for AMD, as DCA makes multiple cores just as easy a route to higher performance. Your clock rate BS is just plain irrelevant.

Oh, so AMD isn't releasing an 8.8GHz single-core Athlon 64 in a few months by choice, because it would be slower than a 65-nm dual-core 2.6GHz A64 X2 5000+, and AMD simply prefers larger dual-core dies to smaller single cores.

ATI and nVidia had to work with bulk foundry processes that were optimized for yield, not speed or power.

Good thing yields can go out the window once AMD starts producing integrated CPU-GPUs. That'll make them cheap to produce for those $100-$200 systems.
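To put rough numbers on that: on a yield-limited process, cost per good die grows much faster than die area. A back-of-the-envelope sketch using the textbook Poisson yield model; every number below is made up for illustration:

```python
# Back-of-the-envelope die cost vs. die area, using the simple
# Poisson yield model Y = exp(-A * D0). All numbers are
# hypothetical, for illustration only.
import math

WAFER_COST = 5000.0    # $ per 300mm wafer (assumed)
WAFER_AREA = 70685.0   # mm^2 usable area of a 300mm wafer
D0 = 0.005             # defects per mm^2 (assumed)

def cost_per_good_die(die_area_mm2):
    dies_per_wafer = WAFER_AREA / die_area_mm2    # ignores edge loss
    yield_frac = math.exp(-die_area_mm2 * D0)     # Poisson yield
    return WAFER_COST / (dies_per_wafer * yield_frac)

for area in (100, 200, 300):  # CPU-only vs. CPU+GPU sized dies
    print(f"{area} mm^2: ${cost_per_good_die(area):.2f} per good die")
```

Under those made-up numbers a 300 mm^2 integrated die costs roughly 8x what a 100 mm^2 die does, not 3x. That's exactly the problem for $100-$200 systems.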

They had to work with higher-bandwidth memory because they needed to eke out that last few % of performance.

A few %? LOL! More like a few dozen percent, and even more when cranking up the resolution, AA and AF.
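Here's a crude sketch of why, with made-up numbers, ignoring texture and Z traffic entirely; framebuffer traffic alone scales with resolution and AA samples:

```python
# Rough framebuffer-bandwidth scaling with resolution and AA.
# Ignores texturing, Z, and compression: illustrative only.
def gb_per_s(width, height, aa_samples, fps, bytes_per_sample=4, overdraw=3):
    samples = width * height * aa_samples
    return samples * bytes_per_sample * overdraw * fps / 1e9

low = gb_per_s(1024, 768, 1, 60)     # 1024x768, no AA
high = gb_per_s(1600, 1200, 4, 60)   # 1600x1200, 4x AA
print(f"{low:.1f} GB/s vs {high:.1f} GB/s -> {high/low:.1f}x the traffic")
```

Roughly a 10x swing in memory traffic between those two settings, before textures even enter the picture. That's not chasing a last few %.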

You are the one in error, using larger bulk, high-yield-process GPUs to estimate the die area that similar-performance GPU cores would need on a smaller, speed- and power-optimized CPU ssSOI process.

The only statement I made was what die sizes and power consumption would be in a straight shrink from 90-nm to 45-nm SOI. I never claimed that a straight shrink is what AMD had planned, or that it was even practical. I'm quite sure the clock speeds would increase significantly, but going from 500MHz to 2GHz for what AMD claims will be a low-power-consumption, low-performance system is a very big stretch in my opinion.
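(For reference, an ideal "straight" shrink scales die area with the square of the feature-size ratio:

\[ A_{45\,\mathrm{nm}} \approx A_{90\,\mathrm{nm}} \cdot \left(\frac{45}{90}\right)^2 = \frac{1}{4} A_{90\,\mathrm{nm}} \]

and that is the best case; real shrinks recover less than the full 4x because pads and analog blocks don't scale.)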

AMD and Intel had to push clock rate because of the serial nature of most programs. They had to lower memory latency because of the significant hit it dealt to the performance of their fast cores. They used OOO execution and branch prediction to help performance across the wide gulf between memory speed and CPU speed.

GPU makers were limited in speed, so they were pushed into multiple pipelines. They didn't have branching or latency problems in those early years; their tasks had a lot of inherent parallelism. Only with shader programs are they running into the same issues that CPU makers have had to wrestle with for a long time. They use the high-BW path to memory to hide latency now that they too are outstripping memory. CPU makers solved these problems with OOO, branch prediction, multi-level caching, and prefetching. All of this is now available to ATI. The process is available to nVidia through both IBM and Chartered, if they want to pay for the better processes.
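On the "serial nature of most programs" point: that's just Amdahl's law, which is worth spelling out, because it's exactly why clock rate was never "irrelevant". A quick sketch with hypothetical parallel fractions:

```python
# Amdahl's law: speedup from n cores when a fraction p of the
# work is parallelizable. The serial part sees no benefit from
# extra cores, which is why clock rate and IPC had to carry it.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):      # hypothetical parallel fractions
    for n in (2, 4, 8):
        print(f"p={p:.2f}, {n} cores: {amdahl_speedup(p, n):.2f}x")
```

Even at 90% parallel code, 8 cores only buy you about 4.7x; the serial remainder still wants clock and IPC.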


Based on AMD's comments so far and the rather short timeline, it is a reasonable assumption that AMD/ATI have no plans to completely reinvent the GPU in the likeness of a CPU in the next two years, if ever. If AMD were serious about going that route, they wouldn't have needed to buy ATI to do it.

If you continue with the clock rate BS, you should remember what happened to Prescott. Higher clocks mean more leakage and higher heat output.

So why would AMD/ATI want an integrated 2GHz 3-pipeline GPU for a low-power-consumption, low-performance platform when they could cut power by using 6 pipes @ 1GHz instead?
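The arithmetic behind that, assuming dynamic power goes as N*C*V^2*f and voltage has to scale roughly with frequency, so P ~ N*f^3 (a rough model, not any specific chip):

```python
# Crude dynamic-power comparison. P ~ N * C * V^2 * f, and if
# voltage scales roughly with frequency, P ~ N * f^3. Both
# configurations deliver the same pipes-times-GHz throughput.
def relative_power(pipes, ghz):
    return pipes * ghz**3

narrow_fast = relative_power(3, 2.0)  # 3 pipes @ 2 GHz
wide_slow = relative_power(6, 1.0)    # 6 pipes @ 1 GHz
print(f"3 pipes @ 2GHz: {narrow_fast}, 6 pipes @ 1GHz: {wide_slow}")
```

24 vs. 6 in relative units: same throughput, roughly a quarter of the dynamic power for the wide, slow design.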

If you don't believe me, take it from the overclocking crowd. They know that to speed up a CPU, you need better cooling. That is the hallmark of a thermally limited CPU core; a speed-path-limited core doesn't speed up much when cooled. It means that power consumption is now the problem for clock rate increases. That is also why twice the cores at 80% of the clock rate, at the same power, perform better in a multiprocessing environment.
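For the record, that last claim checks out under the same crude P ∝ f³ (voltage tracking frequency) model:

\[ \frac{P_{2 \times 0.8f}}{P_{1 \times f}} = 2 \times 0.8^3 \approx 1.02, \qquad \frac{T_{2 \times 0.8f}}{T_{1 \times f}} = 2 \times 0.8 = 1.6 \]

about the same power, 1.6x the throughput, assuming the workload actually threads. Which is the case for wide-and-slow designs, not for a 2GHz GPU.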

Well, isn't it interesting that those same overclockers also resort to extreme cooling to keep temperatures down when overclocking GPUs? How many GPUs do you know of that have been overclocked to 2GHz? How about 1.5GHz?