> [A small die strategy] would seem to imply yield problems at GF.
Without more info, I think that is an overly negative view. As far as I understand manufacturing (and that is not far), a small die is good: it improves yields by counteracting the effect of defect density, which tends to increase as you move to smaller nodes. Hence, if you use a shrink just to add more transistors at the same die size, you will get poorer yields and higher cost. Add to that, on a new process you start at the bottom of the yield learning curve. All in all, it makes sense to start off with a small die on a new process.
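To put rough numbers on that, here is a back-of-the-envelope sketch using the simple Poisson yield model, Y = exp(-A·D0). The defect densities and die sizes below are made-up illustrative figures, not anything GF has published:

```python
import math

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under the Poisson yield model."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * d0_per_cm2)

# Illustrative defect densities (defects/cm^2): a mature node vs. a new node
# early on its yield learning curve. Both values are made up.
for d0, label in [(0.1, "mature node"), (0.4, "new node, early ramp")]:
    for area in (600, 300, 150):  # die sizes in mm^2
        print(f"{label:22s} {area:4d} mm^2 -> yield {poisson_yield(area, d0):5.1%}")
```

With those assumed numbers, a 600 mm² die yields under 10% on the early-ramp node while a 150 mm² die stays above 50%, which is the whole argument for starting small on a new process.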
Interposer technology, an exciting piece of IP in AMD's arsenal, is another driver towards smaller dies, even when you want or need more transistors. The ideal transistor and routing characteristics differ across the various parts of today's SoCs; e.g. the implementation of AMD's current APUs is a compromise better suited to GPUs than to high-performance CPUs. And less demanding parts of the die, e.g. southbridge functionality, could presumably be produced more cheaply on older nodes. With multiple dies on an interposer, each die can be small and use the process and implementation best suited to it.
On the other hand, the links across an interposer incur penalties relative to a monolithic die, so it is a trade-off as well. And interposer technology is still in its infancy, which adds its own difficulty and cost. However, if those can be overcome, it seems the right way to go for SoCs and other large-transistor-count devices, such as GPUs.
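To make that trade-off concrete, the same toy yield model can compare one large monolithic die against the same silicon split into four smaller dies on an interposer. The cost per mm², defect density, and interposer/assembly overhead are purely assumed numbers for illustration, and the link penalties are ignored entirely:

```python
import math

def poisson_yield(area_mm2, d0_per_cm2):
    # Fraction of dies expected defect-free: Y = exp(-A * D0)
    return math.exp(-(area_mm2 / 100.0) * d0_per_cm2)

def cost_per_good_die(area_mm2, d0, cost_per_mm2=0.10):
    # Silicon cost divided by yield; ignores wafer-edge and reticle effects
    return area_mm2 * cost_per_mm2 / poisson_yield(area_mm2, d0)

D0 = 0.4  # defects/cm^2 -- assumed early-ramp figure for a new node

monolithic = cost_per_good_die(600, D0)

# Same 600 mm^2 of logic as four 150 mm^2 dies, plus an assumed fixed
# interposer + assembly cost per package (made-up number)
interposer_overhead = 20.0
multi_die = 4 * cost_per_good_die(150, D0) + interposer_overhead

print(f"monolithic 600 mm^2      : {monolithic:6.1f} (arbitrary cost units)")
print(f"4 x 150 mm^2 + interposer: {multi_die:6.1f}")
```

Under these assumptions the split configuration comes out far cheaper, because cost per good die grows much faster than linearly with area once yields drop; the real question is whether the interposer overhead and link penalties stay small enough not to eat that margin.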