Technology Stocks : AMD, ARMH, INTC, NVDA

To: fastpathguru who wrote (15940), 5/10/2016 8:07:46 PM
From: pgerassi
 
AMD has already told us how they will create scalable GPUs. There will be a core GPU die of, say, 32 GCN CUs, 128 TMUs, 64 ROPs, and 4 ACEs, plus all of the uncore hardware: encoders/decoders, DisplayPort, HDMI, PCIe, etc. In each attached HBM2 stack, the logic die will contain an additional 16 CUs, 2 ACEs, 64 TMUs, and 32 ROPs. And there will be HBM2 stacks without logic for cases where one just needs more memory.

So a single base die and an HBM2 logic die allow GPUs ranging from 32 to 96 CUs, with memory to match. The smallest, with 1 HBM2 stack and no GPU logic, would take care of entry-level VR at roughly what a 290 delivers. With 1 stack including the GPU logic die, you get 48 CUs and go above a 290X/390X. 2 HBM2-with-logic (HBM2wL) stacks get better than Fury X performance, 3 HBM2wL stacks probably take out the top Pascal with plenty to spare, and 4 HBM2wL is king dGPU. A larger core die with 64 CUs could attach 8 HBM2wL stacks for 192 CUs total, 20 ACEs, 768 TMUs, 384 ROPs, and 64GB of VRAM, and handle 6-10 VR headsets simultaneously.
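The scaling arithmetic above can be sketched in a few lines. This is purely illustrative: the `BASE` and `PER_STACK` figures are the speculative numbers from this post, not confirmed AMD specs, and the function name is made up.

```python
# Hypothetical unit counts from the post: a base die plus N HBM2-with-logic
# (HBM2wL) stacks, each logic die adding its own CUs/TMUs/ROPs/ACEs.
BASE = {"CUs": 32, "TMUs": 128, "ROPs": 64, "ACEs": 4}
PER_STACK = {"CUs": 16, "TMUs": 64, "ROPs": 32, "ACEs": 2}

def gpu_config(logic_stacks):
    """Total shader resources for the base die plus N HBM2wL stacks."""
    return {k: BASE[k] + PER_STACK[k] * logic_stacks for k in BASE}

# 1 HBM2wL stack gives the 48-CU part; 4 stacks give the 96-CU top part.
print(gpu_config(1)["CUs"])  # 48
print(gpu_config(4)["CUs"])  # 96
```

With 0 logic stacks (memory-only HBM2) the totals collapse to the bare 32-CU base die, matching the entry-level VR part described above.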

Pete