Technology Stocks : AMD, ARMH, INTC, NVDA


To: fastpathguru who wrote (15940) | 5/10/2016 4:29:11 PM
From: neolib
 
I think this is a very conscious decision on AMD's part

Yes, the article more or less stated that was the case for both AMD and Nvidia with this generation. But I'm simply looking at the decline in PCs over the past 5+ years, which nicely mirrors the period where each new CPU generation brought only modest performance improvements, and perhaps a bit better than that on power efficiency. And the result has been a steady market decline (yes, and of course the Fad has been ravaging PCs as well, so...)

So, what fraction will upgrade GPUs if:

1) +20% performance @ 50% power (the claimed case here)
2) +0% performance @ 50% power
3) +50% performance @ 100% power

Make up some more as needed. It's simply a question of what motivates people to open their wallets. For servers/HPC, perf/watt is obviously important, and in mobile as well. But these GPU cards are going into desktops, so what does it take to get those users to upgrade? I'd guess that VERY few DT users would upgrade for 2) above, i.e. a 0% increase in performance even if the power were cut in half.

I think the last 4-5 generations of Intel/AMD CPUs have shown this to be the case.
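
A quick back-of-the-envelope tally of those three cases, as a rough Python sketch (the scenario numbers come from the list above; everything else is just illustration):

# Relative to the card being replaced (1.0 perf at 1.0 power).
scenarios = {
    "1) +20% perf @ 50% power":  (1.20, 0.50),
    "2) +0% perf @ 50% power":   (1.00, 0.50),
    "3) +50% perf @ 100% power": (1.50, 1.00),
}
for name, (perf, power) in scenarios.items():
    print(f"{name}: perf/W = {perf / power:.1f}x, raw speedup = {perf:.2f}x")
# Case 2) still doubles perf/W but offers zero raw speedup, which is
# exactly why a desktop user has little reason to open the wallet for it.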



To: fastpathguru who wrote (15940) | 5/10/2016 4:32:04 PM
From: neolib
 
Find a good wafer and cut out a block of 16.

Would there not be some serious thermal expansion/packaging issues with this? There must be a limit to the die size you can bond to the interposer, plus how would you attach the heatsink? The heatsink would need to be segmented for thermal reasons too, I would think.



To: fastpathguru who wrote (15940) | 5/10/2016 8:07:46 PM
From: pgerassi
 
AMD has already told us how they will create scalable GPUs. There will be a core GPU die of, say, 32 GCN CUs, 128 TMUs, 64 ROPs, and 4 ACEs, plus all of the uncore stuff like encoders/decoders, DisplayPort, HDMI, PCIe, etc. In each attached HBM2 stack, the logic die will contain an additional 16 CUs, 2 ACEs, 64 TMUs, and 32 ROPs. And there will be HBM2 stacks without logic for cases where one just needs more memory.

So a single base die and an HBM2 logic die allow GPUs with anywhere from 32 to 96 CUs and memory to match. The smallest, with 1 HBM2 stack and no GPU logic, would take care of entry-level VR, around what a 290 gets. With 1 stack that includes the GPU logic die, you get 48 CUs and go above a 290X/390X. 2 HBM2-with-logic (HBM2wL) stacks get better than Fury X performance, 3 HBM2wL stacks probably take out the top Pascal with plenty to spare, and 4 HBM2wL is king dGPU. A larger core die with 64 CUs could attach 8 HBM2wL stacks and have 192 CUs total, 20 ACEs, 768 TMUs, 384 ROPs, and 64GB of VRAM, and handle 6-10 VR headsets simultaneously.

Pete
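
For reference, the per-stack arithmetic above is easy to tally. Here is a rough Python sketch that reproduces Pete's totals, assuming the base-die figures he gives, 8 GB per HBM2 stack, and larger-die TMU/ROP counts inferred from his totals (the helper itself is purely illustrative, not anything AMD has published):

def gpu_config(base_cus, base_aces, base_tmus, base_rops,
               hbm2wl_stacks, plain_hbm2_stacks=0, gb_per_stack=8):
    """Total resources for a base GPU die plus HBM2-with-logic stacks."""
    return {
        "CUs":  base_cus  + 16 * hbm2wl_stacks,
        "ACEs": base_aces +  2 * hbm2wl_stacks,
        "TMUs": base_tmus + 64 * hbm2wl_stacks,
        "ROPs": base_rops + 32 * hbm2wl_stacks,
        "VRAM_GB": gb_per_stack * (hbm2wl_stacks + plain_hbm2_stacks),
    }

# 32 CU base die with 4 logic-bearing stacks -> the 96 CU top configuration
print(gpu_config(32, 4, 128, 64, hbm2wl_stacks=4))

# Larger 64 CU base die with 8 logic-bearing stacks -> 192 CUs, 20 ACEs,
# 768 TMUs, 384 ROPs, 64 GB of VRAM, matching the figures quoted above
# (the 256 TMU / 128 ROP base counts for this die are my inference).
print(gpu_config(64, 4, 256, 128, hbm2wl_stacks=8))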



To: fastpathguru who wrote (15940) | 5/10/2016 11:15:36 PM
From: THE WATSONYOUTH
 
IMHO AMD wants to put R9-level performance in every near-entry-level PC, and you can't (and now don't need to) do that with a 200-300W GPU.

...yes, that seemed to be what Papermaster was implying in his pitch. HBM allows much smaller form factors at much reduced power... sounded like they wanted to take it down as low in the stack as possible... hope they price it aggressively.