<Also, I hope and pray AMD doesn't go down the SOI route that Moto is touting for Apple. x86 is not a niche market and time to market is critical.>
Why? Can you elaborate more?
In my opinion, there is a 10% to perhaps 15% performance gain to be realized from SOI for a given design. At that level, it is not clear that SOI is cost-effective. I would argue that design differences are likely to be larger and will determine the superior processor.

Moving to SOI requires a large, sustained effort on multiple fronts and is non-trivial. On the design side: the device design requires some real understanding to control body charging. The device model must accurately predict this body effect. The circuit model must accurately quantify the memory effect caused by the floating body. Realistically, chips must be designed from the ground up for SOI, not ported from bulk designs, and not many designers are comfortable designing in SOI. Some circuit families behave differently on SOI depending on their sensitivity to threshold voltages in the linear regime. For certain applications, body-tied devices are still necessary. I could go on.

From a processing point of view, there are complications as well: different reflectivity in lithography due to the buried oxide, more difficult PLY (photo-limited yield) characterization, new film measurement algorithms, different tool charging issues, the fact that the wafers look so damn different under a scope, etc. etc. etc.

One of the largest concerns is wafer quality. In my opinion, it is impossible to achieve the same defect density in the starting SOI wafer as in bulk silicon, whether by SIMOX or bonded material. I do think that, with CONSIDERABLE work, you can get close enough that the higher defect density does not preclude using SOI, especially for lower-volume, high-margin parts like the very high-end servers (see the yield sketch below for why defect density bites large dies hardest). But to do this, you need an entire organization to either produce the SOI wafers yourself (SIMOX only, most likely) or buy bonded wafers (likely through SOITEC) or SIMOX wafers (likely through IBIS). Either way, you need extensive characterization capabilities and expertise to guarantee the quality of the starting material before committing the wafers; you cannot depend on the vendors alone. And if you decide to produce your own wafers, you must buy and maintain very complicated high-dose, high-current oxygen implanters. You also need millions of wafers per quarter if you want to move SOI into the mainstream x86 market. Again, I could go on.

Now, you must do all this, and it must ship concurrently with the bulk process, to realize the 10% to 15% advantage. If the SOI infrastructure causes, say, a 6-9 month delay, what has happened to the advantage? It's gone: with bulk performance compounding at a few tens of percent per year from shrinks and design tweaks, a slip of roughly half a year cancels a 10-15% one-time gain all by itself (see the break-even sketch below). I think Moto screwed Apple for just this reason, as they apparently want to introduce their G4e at .18um on SOI before "rapidly" moving to .13um. Give me a break. The G4e could have been on bulk silicon probably 8 months ago. In the x86 market, you can't be late.

SOI may be appropriate now for low-volume, very-high-performance server chips where margins are very high; I hope AMD is considering this application only. It may also be appropriate when device scaling has reached its final limits (if there is nothing to replace conventional FETs), as a final move to SOI would still buy that additional 10% to 15% performance. Or it could be more appropriate now, if the performance advantage is really 20% to 25%, as some have claimed.
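To make the wafer-quality point concrete, here is a back-of-the-envelope yield sketch. It uses the textbook Poisson yield model, Y = exp(-D*A); the defect densities and die areas are invented for illustration, not measured numbers for any real SOI or bulk line.

```python
import math

def poisson_yield(defects_per_cm2, die_area_cm2):
    """Textbook Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

# Hypothetical numbers for illustration only: a 1 cm^2 desktop-class
# die vs a 4 cm^2 high-end server die, with SOI starting material
# assumed to add 0.2 defects/cm^2 on top of a 0.3/cm^2 bulk baseline.
for area in (1.0, 4.0):
    bulk = poisson_yield(0.3, area)
    soi = poisson_yield(0.3 + 0.2, area)
    print(f"{area:.0f} cm^2 die: bulk yield {bulk:.0%}, SOI yield {soi:.0%}")
```

Under these made-up numbers the small die drops from about 74% to 61% yield, while the big server die drops from about 30% to 14%, losing over half its good dies. Only a high-margin, low-volume part can absorb that kind of hit.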
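And here is the break-even arithmetic behind the delay argument. It solves (1 + r)^(t/12) = 1 + g for t, the months of schedule slip that exactly cancel a one-time gain g when the bulk baseline improves at a compound rate r per year; the 30-50%/year rates are my assumption about the era's bulk roadmap, not anything AMD or Moto has published.

```python
import math

def breakeven_slip_months(one_time_gain, yearly_rate):
    """Months of delay that cancel a one-time performance gain,
    assuming the baseline compounds at yearly_rate.
    Solves (1 + r)**(t / 12) == 1 + g for t."""
    return 12 * math.log(1 + one_time_gain) / math.log(1 + yearly_rate)

# Assumed numbers: a 10-15% SOI gain vs a bulk roadmap improving
# 30-50% per year from shrinks and design tweaks.
for g in (0.10, 0.15):
    for r in (0.30, 0.50):
        months = breakeven_slip_months(g, r)
        print(f"gain {g:.0%} at {r:.0%}/yr baseline: "
              f"break-even slip ~{months:.1f} months")
```

Under those assumptions the break-even slip is only about 3 to 6.5 months, so a 6-9 month infrastructure delay leaves you behind where plain bulk would have been.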
all IMHO
THE WATSONYOUTH