Technology Stocks: Advanced Micro Devices - Moderated (AMD)


To: BUGGI-WO who wrote (236951), 7/24/2007 12:04:20 PM
From: wbmw
 
Re: why should a CPU+GPU (2 dies, 1 package) be faster than 1 CPU + 1 GPU card with the same GPU die?

It won't be faster than any discrete graphics solution, but it will be faster than chipset graphics. Thanks to the integrated memory controller, a GPU on the CPU package sits a shorter distance from main memory, and since integrated graphics solutions stream from main memory, that shorter path improves performance.

Re: AMD has a working CPU; someone places a (working) GPU on the package and the whole thing is broken -> both broken. How do you allocate different CPU speeds with the GPU? How many CPUs at xyz MHz, with how many GPU cores at zyx MHz?


These are all issues that AMD will have to solve, but they are not insurmountable. AMD has already backed off Fusion twice from its original lofty goal of a 2008 merged CPU/GPU design: first the schedule slipped to 2009, and then the talk shifted to MCM packages. They evidently realized that "fusing" the two designs will not be easy. IMO, a true single-die design is still years away from being feasible, and the first generation will be very crude.

Re: In the end, the whole theme is way more difficult and brings many EXTRA headaches which could go wrong, and you end up with -> nothing extra for the customer.


The customer should enjoy better performance and lower power, even with AMD's reduced goals of an MCM solution.

Re: Keep in mind that such a solution will ALWAYS be LOW-END, because if the CPU burns 65W as one example, with how much more heat could the package work -> a reasonable simple guess should be an additional 60-70W, which would bring the "one package" into the 120-130W range with 2 dies.


Maybe in the discrete world, but Fusion is more an effort to bring integrated graphics into the CPU package. If the 965G is any guide, integrated graphics is about a 10W adder today. In the Fusion time frame, I can see 65W for the CPU and as much as 30W for the GPU, giving a 95W TDP chip, much like AMD's current designs. In practice it will probably be lower than this, because a worst-case "power virus" workload that keeps the CPU pegged will be too busy to also peg the graphics core: when the CPU is close to 100%, the GPU will be below 100%, and vice versa. Knock a few watts off for that, and suddenly you get the 89W TDP that AMD uses today. Imagine the same thing with a 45W TDP CPU and a 20W TDP graphics core: you get a 65W package, minus a few watts.
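
A quick back-of-the-envelope sketch of that TDP budgeting (a minimal illustration; the 6W concurrency derate is my own assumption, not an AMD figure):

    # Hypothetical MCM package TDP: the two dies share one budget, and a
    # worst-case workload cannot peg the CPU and GPU at the same time.
    def package_tdp(cpu_tdp_w, gpu_tdp_w, concurrency_derate_w=6):
        # Naive sum minus an assumed derate for non-overlapping peak load.
        return cpu_tdp_w + gpu_tdp_w - concurrency_derate_w

    print(package_tdp(65, 30))  # 89 W, the desktop TDP class AMD uses today
    print(package_tdp(45, 20))  # 59 W, a shade under a 65 W package rating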

Re: What do you do if the GPU changes -> new layouts?


Yes. Fusion means the CPU and GPU have to come out on the same cadence; AMD cannot double its number of tapeouts to produce Fusion chips whose CPUs are updated at different times than their GPUs. Think of the floating point unit: years ago it was a separate chip, but once Intel integrated it onto the CPU, it had to live within the constraints of an integrated design. Integrating the GPU will be similar, IMO.

Of course, none of this invalidates the need for discrete graphics. There is no way you can combine a top-of-the-line discrete design with the CPU: you would end up with a reticle-sized die that is expensive to make and yields near zero. (G80, for example, is a 484 mm^2 die; add a dual-core CPU to that and you can barely manufacture it, let alone handle the power requirements.)
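
To see why die size kills yield, a standard Poisson defect-density model makes the point; the defect density and CPU die area below are illustrative assumptions, not foundry figures:

    import math

    def poisson_yield(die_area_mm2, defects_per_cm2=0.5):
        # Classic Poisson yield model: Y = exp(-D0 * A).
        return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

    print(poisson_yield(484))        # G80 alone: ~9% of dies defect-free
    print(poisson_yield(484 + 150))  # plus a ~150 mm^2 dual-core CPU: ~4%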

So as you said, Fusion is a mainstream play, not a high-end play. It will enable better integrated graphics and compete with Intel's Nehalem design, which essentially uses the same MCM approach.

Re: The whole Fusion concept in this environment doesn't look very appealing to me.


It's not supposed to be appealing to enthusiasts, but it does improve the mainstream designs.



To: BUGGI-WO who wrote (236951), 7/24/2007 12:46:04 PM
From: combjelly
 
"Keep in mind, that such solution will ALWAYS be LOW-END, be-
cause if the CPU burns 65W as one example, with how much more
heat could the package work -> a reasonable simple guess
should be additional 60-70W, which would bring the "one package" in the 120-130W range with 2 DIEs. What do you do, if the GPU
will change -> new layouts? "

Not necessarily. Rumor has it the next-generation ATI parts will be modular: low-end graphics will use a couple of cores, and higher-end graphics will use more, which may or may not be on the same chip. If Fusion uses one of those cores, then, at least in principle, better graphics performance can be achieved by adding another chip or two.

Fusion is supposed to be aimed at the mobile market, at least at first. It makes more sense there anyway.



To: BUGGI-WO who wrote (236951), 7/24/2007 1:41:43 PM
From: romus
 
Keep in mind that such a solution will ALWAYS be LOW-END, because if the CPU burns 65W as one example, with how much more heat could the package work -> a reasonable simple guess should be an additional 60-70W, which would bring the "one package" into the 120-130W range with 2 dies. What do you do if the GPU changes -> new layouts?


Why do you think AMD will have a 65W CPU in Fusion? Today they have 45W parts whose performance is more than enough for mainstream markets. For web surfing, email, word processing, and movie watching, you don't need a top CPU.

Fusion is a modular design; graphics cores can be changed without much redesign.



To: BUGGI-WO who wrote (236951), 7/24/2007 4:21:52 PM
From: pgerassi
 
Dear Buggi:

On an MCM, the HT links could easily run at max speed and even use 32/32 cHT links over such a short distance. The older dies had three 16/16 HT links, so one 16/16 HT link can still go to the Socket S1g interface while the other two combine to form a 32/32 HT link to the GPU. At 5.2 GT/s, a 32/32 HT3 link carries 20.8 GB/s each way. Two DDR3/1600 channels have 25.6 GB/s of bandwidth available, so it would be a good match for stream processing as well; that is more bandwidth than the current RV630 (Radeon HD 2600 XT) gets. Eighty stream processors at the 1 GHz clocks likely on 45nm would yield 160 GFLOPS single precision, or 80 GFLOPS double precision. The GPU's TDP would be around 35W or so; couple that with a 35W Griffin core and you get a 70W max TDP with all three cores (two CPU, one GPU) going full blast. The DTR market used to carry 65W and 89W TDP CPUs.
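
The link and memory bandwidth arithmetic above is easy to check; here is a minimal sketch using the post's speculative parameters (nothing here is a shipping spec):

    # HyperTransport bandwidth per direction, in GB/s.
    def ht_bandwidth_gbs(transfers_gts, link_width_bits):
        return transfers_gts * link_width_bits / 8

    # DDR bandwidth in GB/s, assuming 64-bit (8-byte) channels.
    def ddr_bandwidth_gbs(mts, channels):
        return mts * 8 * channels / 1000

    print(ht_bandwidth_gbs(5.2, 32))   # 32/32 HT3 link: 20.8 GB/s each way
    print(ddr_bandwidth_gbs(1600, 2))  # two DDR3-1600 channels: 25.6 GB/s
    print(80 * 2 * 1.0)                # 80 SPs * 2 FLOPs (MADD) * 1 GHz = 160 GFLOPS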

The next generation in 2009 would then likely put both on the same die, with a northbridge that moves 64-byte messages at one per clock. A 2 GHz Fusion would thus move 128 GB/s through the northbridge and use two DDR4/3200 channels for 51.2 GB/s, all in a 35W TDP. It might be built on AMD's 32nm SOI with Z-RAM for extra-large L3 caches (64-96 MB), or on TSMC's 32nm bulk with ordinary SRAM L3 caches (14 MB).
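
The same sanity check applies to the speculated on-die northbridge (again, these are the poster's parameters, not announced specs):

    # Throughput of a northbridge moving one 64-byte message per clock.
    def nb_bandwidth_gbs(clock_ghz, bytes_per_clock=64):
        return clock_ghz * bytes_per_clock

    print(nb_bandwidth_gbs(2.0))  # 2 GHz * 64 B/clock = 128 GB/s
    print(3200 * 8 * 2 / 1000)    # two 64-bit DDR4-3200 channels: 51.2 GB/s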

Pete