Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: eracer who wrote (215133), 10/26/2006 10:28:34 PM
From: pgerassi
 
Eracer:

There was no doubling of frequency and yields were poor on what GPUs IBM did produce. There is no evidence that AMD's SOI manufacturing could deliver anywhere near a 100% frequency increase over bulk at TSMC.

IBM didn't get good yields; that's what did them in. Chartered and AMD don't have any problems like that. Fab 30 is world class per Sematech. And how fast are the latest CPUs at TSMC? They run at just 400MHz on 150nm: ieeexplore.ieee.org

AMD K7 Tbirds were at 1.4GHz using 180nm copper. 130nm SOI pushed that to 2.6GHz with the A64 4000+ (Socket 939). There is plenty of evidence that at 65nm a GPU pipeline could run at 2GHz. Top end dual core K8 AM2 CPUs run at 2.8GHz on 90nm using at most 120W. Assuming power scales roughly with the cube of clock speed (voltage scaling along with frequency), a 2GHz pipeline would use about a third of the power of a 2.8GHz one on the same 90nm process, since (2.0/2.8)^3 ≈ 0.36. As far as power goes, the K8 can be seen as 3 pipelines running in parallel.
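Here is that scaling rule as a quick Python sketch (the cube-law exponent is an assumption inferred from the figures used throughout this post, not something stated outright):

```python
# Assumed cube-law scaling: dynamic power ~ C * V^2 * f, and with voltage
# scaled roughly in proportion to frequency, P ends up ~ f^3.
def scaled_power(p_ref, f_ref, f):
    """Estimate power at clock f from a reference point (f_ref, p_ref)."""
    return p_ref * (f / f_ref) ** 3

# 120W for a 2.8GHz dual core K8 -> 60W per core -> 20W per "pipeline",
# if a K8 core counts as 3 pipelines for power purposes.
per_pipeline_28ghz = 120 / 2 / 3
ratio = scaled_power(per_pipeline_28ghz, 2.8, 2.0) / per_pipeline_28ghz
print(ratio)  # ~0.36, i.e. roughly a third of the 2.8GHz power
```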

Let's use transistor counts to figure power usage. An RD600 has 500+ million transistors and has 64 pipelines. That's about 8 million transistors per pipeline, and that includes memory controllers, PCI-E interface and some cache. The K8L core uses about 24 million transistors sans the L1 caches. So using 1/3 of a K8 is about right for an RD600-type unified shader pipeline.
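The transistor budget argument is short enough to check in a few lines (all figures are the estimates above):

```python
# Transistors per unified shader pipeline, using the post's estimates.
rd600_transistors = 500e6   # 500+ million for the whole GPU
rd600_pipelines = 64
per_pipeline = rd600_transistors / rd600_pipelines
print(per_pipeline / 1e6)   # ~7.8 million, call it 8M per pipeline

k8l_core_transistors = 24e6                 # K8L core sans L1 caches
print(per_pipeline / k8l_core_transistors)  # ~0.33, so ~1/3 of a K8 core
```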

So a 90nm RD600 pipeline would use about 20W at 2.8GHz, 2.5W at 1.4GHz and 0.3W at 700MHz, the speed of the bulk 80nm RD600, which is estimated to be 383mm2 in size. 64 of them would use 19.2W at 700MHz, where the 80nm bulk part uses 250+W. That is a savings of 13 times on AMD's mature 90nm process. The part would likely also be smaller on AMD's 90nm than on TSMC's 80nm because AMD's 90nm uses more metal layers and has local interconnect. So each 90nm RD600 pipeline would occupy about 5mm2.
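Running the cube-law model over those clocks reproduces the figures above (the 250+W for the bulk part is this post's estimate; small differences come from rounding):

```python
def scaled_power(p_ref, f_ref, f):
    return p_ref * (f / f_ref) ** 3  # assumed cube-law clock scaling

P_PIPE_28GHZ = 20.0  # W per pipeline at 2.8GHz on 90nm SOI, derived above

for f in (2.8, 1.4, 0.7):
    print(f, scaled_power(P_PIPE_28GHZ, 2.8, f))
# 2.8GHz -> 20.0W, 1.4GHz -> 2.5W, 700MHz -> 0.3125W (~0.3W)

total_64 = 64 * scaled_power(P_PIPE_28GHZ, 2.8, 0.7)
print(total_64)        # 20W; the 19.2W above rounds 0.3125W to 0.3W first
print(250 / total_64)  # ~12.5x savings vs the 250+W bulk part, call it 13x
```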

Eight RD600 pipelines would fit in a Rev F core, so you would get 1/8th of an RD600 GPU using 2.5W for a 1x1 Fusion (with 128KB of L1 and 512KB of L2 for the GPU). Pushing it to 1.4GHz would net you 1/4 of an RD600 using 20W. Going to 2.1GHz gets you 3/8ths of an RD600 using 67.5W. Running it at 2.8GHz would net you 1/2 of an RD600 using 160W. Shrinking all of these to 65nm takes us to 1.25W, 10W, 33.8W and 80W respectively. The 2.1GHz flavor matches about half of a 65W dual core's power budget and delivers a total of 16.8GHz of aggregate GPU pipeline clock. Taking that down to 2GHz to match a 65nm AM2 A64 X2 3800+, the GPU uses 28.8W. On a per-MHz basis against the X1K Radeons, that equals 32 pipelines at 500MHz. If the RD600 design gets more done per clock, then even that simple 65nm 1x1 Fusion would beat a single core 2GHz AM2 A64 3200+ paired with a Radeon X1900XT.
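The same model, scaled to eight pipelines and halved for the 65nm shrink (that halving per full node is this post's assumption), gives the 1x1 Fusion numbers above:

```python
def scaled_power(p_ref, f_ref, f):
    return p_ref * (f / f_ref) ** 3  # assumed cube-law clock scaling

P_PIPE_28GHZ = 20.0  # W per pipeline at 2.8GHz on 90nm SOI
PIPES = 8            # eight RD600-style pipelines per Rev F sized core

for f in (0.7, 1.4, 2.1, 2.8):
    w90 = PIPES * scaled_power(P_PIPE_28GHZ, 2.8, f)
    print(f, w90, w90 / 2)  # 90nm figure, then 65nm at half power
# 0.7GHz: 2.5W -> 1.25W    1.4GHz: 20W -> 10W
# 2.1GHz: 67.5W -> 33.75W  2.8GHz: 160W -> 80W

# At 2GHz on 65nm the model gives ~29.2W; the 28.8W above comes from
# rounding (2/2.8)^3 to 0.36 before multiplying.
print(PIPES * scaled_power(P_PIPE_28GHZ, 2.8, 2.0) / 2)
```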

Taking that down to 45nm gets us to a 35W TDP for the 1x1 Fusion. That would be mainstream today and value in two years. A 1GHz Fusion 1x1 would definitely work for low end value PCs and use just 4.5W. PICs, OLPCs and palmtops could use a 500MHz version drawing a scant 0.5W and still have the equal of an 8 pipeline DX-10 mainstream GPU of today, akin to a Radeon X1600 or GeForce 7600. The 1GHz 1x1 Fusion would do justice to most of today's games. Most of us would be OK with a 2x2 2GHz Fusion at 70W with as much power as an X2 3800+ plus GeForce 7950 SLI setup. Enthusiasts would want dual 2GHz 2x6 Fusions at 140W each, with as much performance as today's top setups. Extreme gamers will add external discrete GPUs and use the internal ones for physics calculations, carving up their virtual environments with plenty of blasted off bits. Or they can watch with glee as their opponent's fighter buys the farm, turning a town square into rubble in the process.
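One more halving takes the GPU side to 45nm (this sketch covers only the GPU pipelines; the TDP figures above also fold in the CPU core and some headroom):

```python
def scaled_power(p_ref, f_ref, f):
    return p_ref * (f / f_ref) ** 3  # assumed cube-law clock scaling

P_PIPE_28GHZ, PIPES = 20.0, 8  # per-pipeline W at 2.8GHz/90nm, pipes per core

# 45nm GPU-side power: two full-node halvings from the 90nm baseline.
for f in (2.0, 1.0, 0.5):
    print(f, PIPES * scaled_power(P_PIPE_28GHZ, 2.8, f) / 4)
# 2GHz -> ~14.6W, 1GHz -> ~1.8W, 500MHz -> ~0.23W on the GPU side alone
```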

Is that good enough for a guesstimate?

Pete



To: eracer who wrote (215133), 10/27/2006 1:54:59 AM
From: NicoV
 
NVIDIA had the SOI option with IBM. It didn't happen. There was no doubling of frequency and yields were poor on what GPUs IBM did produce. There is no evidence that AMD's SOI manufacturing could deliver anywhere near a 100% frequency increase over bulk at TSMC.
Did Nvidia ever build a GPU on SOI? I guess not. Yesterday, Soitec announced that they would collaborate with ARM to build libraries for SOI-based processes. That means that in the past, no such libraries were available. IIRC, NVidia is a customer of ARM (actually, of Artisan Components, which was bought a couple of years ago by ARM).
Is it a coincidence that one day after the ATI-AMD merger, an announcement is made that makes SOI-based GPUs possible?