Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: eracer who wrote (215047), 10/26/2006 8:56:27 AM
From: j3pflynn
 
eracer - I believe that would warrant a "Touché!" ;)



To: eracer who wrote (215047), 10/26/2006 4:00:23 PM
From: pgerassi
 
Dear Eracer:

That isn't a problem for AMD, as DCA (Direct Connect Architecture) makes multiple cores just as viable a route to higher performance. Your clock rate argument is simply irrelevant. ATI and nVidia had to work with bulk foundry processes that were optimized for yield, not speed or power. They had to work with higher-bandwidth memory because they needed to eke out the last few percent of performance. ATI, now a division of AMD, has access to the superior CPU-grade processes optimized for speed and power. These are available for the Fusion family of CPU/GPU dies.

You are the one in error: you are using GPUs built on larger, high-yield bulk processes to estimate the die area that GPU cores of similar performance would need on a smaller, speed- and power-optimized CPU ssSOI process. AMD and Intel had to push clock rate because of the serial nature of most programs. They had to lower memory latency because of the large performance hit their fast cores take waiting on memory. They used out-of-order (OOO) execution and branch prediction to help performance across the wide gulf between memory speed and CPU speed.
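
As a back-of-the-envelope illustration of the die-area point: under an ideal linear shrink, area scales roughly with the square of the feature size, so the same core ported from a 90 nm bulk process to a 65 nm CPU-grade process needs roughly half the area. The figures below are made-up assumptions for illustration, not actual die sizes (real ports shrink less than ideally):

    # Rough die-area scaling under an ideal linear shrink (Python).
    # All numbers are illustrative assumptions, not real die sizes.
    old_node_nm = 90.0     # larger bulk foundry process
    new_node_nm = 65.0     # smaller CPU-grade process
    old_area_mm2 = 300.0   # hypothetical GPU die on the old process

    scale = (new_node_nm / old_node_nm) ** 2   # area ~ (feature size)^2
    new_area_mm2 = old_area_mm2 * scale
    print(f"area scale factor: {scale:.2f}")             # ~0.52
    print(f"estimated new die: {new_area_mm2:.0f} mm^2") # ~156 mm^2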

GPU makers were limited in clock speed, which pushed them toward multiple pipelines. They didn't have branching or latency problems in those early years; their tasks had a lot of inherent parallelism. Only with shader programs are they running into the same issues that CPU makers have had to wrestle with for a long time. They now use the high-bandwidth path to memory to hide latency, since they too are outstripping memory. CPU makers solved these problems with OOO execution, branch prediction, multi-level caching, and prefetching. All of this is now available to ATI. The process is available to nVidia through both IBM and Chartered, if they want to pay for the better processes.
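
A toy average-memory-access-time (AMAT) model shows why multi-level caching tames the CPU/memory gap; the hit rates and latencies below are made-up illustrative values, not figures from this post:

    # Toy AMAT model for a two-level cache (Python).
    # Hit rates and latencies are made-up illustrative values.
    l1_hit, l1_lat = 0.90, 3    # 90% of accesses hit L1 at 3 cycles
    l2_hit, l2_lat = 0.95, 15   # 95% of L1 misses hit L2 at 15 cycles
    mem_lat = 200               # cycles to DRAM on a full miss

    amat = l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * mem_lat)
    print(f"effective latency with caches: {amat:.1f} cycles")  # ~5.5
    print(f"latency with no caching:       {mem_lat} cycles")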

If you continue with the clock rate argument, you should remember what happened to Prescott. Higher clocks, and the higher voltage they require, mean more leakage and more heat output. If you don't believe me, take it from the overclocking crowd: they know that to speed up a CPU, you need better cooling. That is the hallmark of a thermally limited CPU core; a speed-path-limited core doesn't speed up much when cooled. It means power consumption, not circuit speed, is now the barrier to clock rate increases. That is also why twice the cores at 80% of the clock rate, at the same power, perform better in a multiprocessing environment.
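
To put rough numbers on that last claim, here is a sketch assuming dynamic power scales as P ~ V^2 * f with supply voltage scaling roughly linearly with frequency (so per-core power ~ f^3), leakage and shared logic ignored, and a workload parallel enough to keep both cores busy:

    # Sketch: two cores at 80% clock vs. one core at full clock (Python).
    # Assumes per-core dynamic power ~ f^3 (P ~ V^2 * f, with V ~ f);
    # leakage and shared uncore power are ignored.
    f = 0.8                          # relative clock of each slower core
    power_two_cores = 2 * f ** 3     # ~1.02x the single fast core's power
    speedup_two_cores = 2 * f        # 1.6x, for a well-parallelized load

    print(f"relative power:      {power_two_cores:.3f}")
    print(f"relative throughput: {speedup_two_cores:.2f}")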

Pete