Technology Stocks : AMD, ARMH, INTC, NVDA


To: neolib who wrote (33769)10/31/2019 9:17:12 PM
From: Vattila
 
Aye, lots of interesting stuff to look forward to on the GPU side as well.

Nvidia has tensor cores on their die, so they are well prepared for the needs of the AI/ML crowd, and I expect AMD to follow. There is also much focus on data types in GPU architecture that are specific to the needs of AI/ML calculations (narrow types such as BFloat16); this was mentioned in a recent AMD talk, I seem to remember. So for customers looking for general-purpose compute, such as supercomputer users, I think GPUs will still be strong, perhaps combined with FPGAs for even more efficient implementation of algorithms as needed.
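To show why BFloat16 is attractive for ML hardware, here is a minimal Python sketch of the usual conversion from float32: keep the sign and the full 8-bit exponent, round the mantissa down to 7 bits. This is just an illustration of the format, not any vendor's API, and it skips special cases like NaN.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Keep the upper 16 bits of the IEEE-754 float32 pattern
    (1 sign, 8 exponent, 7 mantissa bits), rounding to nearest even.
    NaN/overflow handling is omitted for brevity."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw float32 bit pattern
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)          # round-to-nearest-even
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bfloat16_bits_to_float32(b: int) -> float:
    """Re-expand to float32 by zero-padding the dropped mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", (b & 0xFFFF) << 16))[0]

x = 3.14159
b = float32_to_bfloat16_bits(x)
print(f"{x} -> bfloat16 0x{b:04x} -> {bfloat16_bits_to_float32(b)}")
# Output: 3.14159 -> bfloat16 0x4049 -> 3.140625
```

The point is that BFloat16 keeps the same dynamic range as float32 (8 exponent bits) while halving the storage and bandwidth, at the cost of only ~2-3 decimal digits of precision, which is often good enough for ML training.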

As you say, ASICs will probably increasingly take share in servers for which the workload is fixed.

It will be interesting to see whether Nvidia or AMD will introduce chiplet designs for GPUs. If feasible (which I guess comes down to chiplet interconnect speed and power efficiency), it is easy to imagine a small chiplet with a certain number of compute units, another with rasterisation units, as well as various accelerator chiplets with tensor cores, raytracing cores, video codecs, etc., all combined as needed for each market segment.
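Just to make the "combined as needed" idea concrete, here is a purely hypothetical sketch in Python. The chiplet names and counts are invented for illustration and do not reflect any actual AMD or Nvidia product plan; the point is only that the same building blocks could be mixed in different ratios per segment.

```python
from dataclasses import dataclass

@dataclass
class GpuPackage:
    # Hypothetical chiplet "bill of materials" for one GPU package.
    compute_chiplets: int          # shader/compute-unit dies
    raster_chiplets: int           # rasterisation/geometry dies
    tensor_chiplets: int = 0       # ML matrix accelerators
    rt_chiplets: int = 0           # raytracing accelerators
    video_codec_chiplets: int = 0  # encode/decode blocks

    def describe(self) -> str:
        return (f"{self.compute_chiplets} compute + {self.raster_chiplets} raster + "
                f"{self.tensor_chiplets} tensor + {self.rt_chiplets} RT + "
                f"{self.video_codec_chiplets} codec chiplet(s)")

# Same chiplet types, different mixes per market segment.
gaming = GpuPackage(compute_chiplets=2, raster_chiplets=1,
                    rt_chiplets=1, video_codec_chiplets=1)
datacenter_ml = GpuPackage(compute_chiplets=8, raster_chiplets=0,
                           tensor_chiplets=4)

print("Gaming SKU:     ", gaming.describe())
print("Datacenter SKU: ", datacenter_ml.describe())
```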