Technology Stocks : AMD, ARMH, INTC, NVDA


To: fastpathguru who wrote (16107), 6/2/2016 11:02:35 AM
From: engineer
 
I think it is far more advanced on silicon interposers than you think.

An interposer is nothing more than a 4-6 layer silicon device with HUGE metal lines. It is not hard to make and can be fabricated quite cost-efficiently on an old, retired 65nm line.

Interposers will become the circuit boards of the future, mixing different processes and feature sizes in a single device. This way you can rev part of your design and leave the rest alone, such as revving the AP processor part while leaving the modem untouched, or upgrading the memory in the part without a complete mask spin, just a new interposer.

For power, it will reduce chip-to-chip power by a lot, though not as much as a monolithic die can. Going forward, however, I see die shrinks mattering less and less compared to lower power or faster speed.

Stacking die is already being done in volume, and it will become so commonplace that within three years virtually all cell phone chips will probably have gone this way. FLASH, DRAM, and SRAMs have already gone down this path.



To: fastpathguru who wrote (16107), 6/2/2016 3:59:49 PM
From: neolib
 
I think the multi-die/interposer approach is primarily about performance and power (think wide memory interconnects like HBM); it is not that effective as a cost-reduction tool compared to doing an SoC.

If AMD is really doing as you say, then I think that reflects the realities of yield at GF. It also explains WHY AMD is starting with the small-die/low-performance parts first, with the high-performance parts coming later. This is the same approach Intel used at 14nm: the first parts were the smallest dies, and it has taken them a very long time to work up to the large server CPU parts. AFAIK, Polaris is the first 14nm part shipping out of GF. They certainly haven't admitted to any other 14nm parts shipping in volume, although GF does claim to have several 14nm clients.

I'm dead certain that TSMC will be yielding WAY better than GF on FinFETs. So 300mm² vs 200mm² is likely in Nvidia's favor rather than AMD's. A year or more from now, perhaps not.
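The die-size-vs-yield trade-off above can be sketched with the classic Poisson yield model, Y = exp(-D·A). The defect densities below are purely hypothetical illustrations, not published figures for TSMC or GF; the point is only that a larger die on a mature process can out-yield a smaller die on an immature one:

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D * A).
    Illustrative only; real fabs use more elaborate models (e.g. Murphy's)."""
    area_cm2 = die_area_mm2 / 100.0  # convert mm^2 to cm^2
    return math.exp(-defect_density_per_cm2 * area_cm2)

# Hypothetical defect densities (defects/cm^2) -- assumptions, not real fab data.
d_mature = 0.2   # a mature FinFET process
d_early  = 0.5   # an early-ramp FinFET process

# A 300mm^2 die on the mature process vs a 200mm^2 die on the early process:
y_large_mature = poisson_yield(300, d_mature)  # exp(-0.6) ~ 0.55
y_small_early  = poisson_yield(200, d_early)   # exp(-1.0) ~ 0.37

print(f"300mm2 @ mature: {y_large_mature:.2f}")
print(f"200mm2 @ early:  {y_small_early:.2f}")
```

Under these assumed numbers the larger die still yields better, which is the sense in which "300mm² vs 200mm²" can favor the fab with the lower defect density rather than the smaller die.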