>>is it [depreciation] important enough to outweigh the advantages of smaller feature sizes?"
How can you say 'no'? The example of Motorola producing and selling 0.5um+ microcontrollers at under $1 each gives a contrary answer. Please explain if you still stand by your answer.<<
Yes, I still stand by my answer. The reduction of cost per die with decreasing feature sizes is the fundamental engine of growth and technological change in the semiconductor industry.
Let's ignore microprocessors for a moment and look at DRAMs, which are much easier to compare because the only real variable from one chip to the next is density. The computer I'm using now has 40MB of RAM, which cost about $5/MB. The 386 I bought 7 years ago had 16MB of RAM, which cost about $50/MB. Ignoring dumping charges against memory companies for the moment, the reason why memory costs so much less now than it did then is that the chips are cheaper to manufacture. Yet, the fabrication cost per wafer has, if anything, increased, as equipment costs have gone up, consumable costs have gone up, environmental compliance costs have gone up, and so on.
Now, in most industries, that's the end of the story. Costs go up, so prices go up. It takes a certain amount of steel to make an engine block, so an engine gets more expensive if the price of steel goes up. Fully depreciated factories are less expensive than new ones, so you only build a factory if you have to increase capacity.
The semiconductor industry doesn't work that way. In the semiconductor industry, fabrication costs are assessed per *wafer*, but recovered per *chip*. In the case of memory, costs are recovered per megabit of memory in the customer's hands. So, if you can squeeze more megabits into a single wafer, the cost of each megabit goes down. To squeeze more megabits into a wafer, you have three choices: (a) increase the size of the wafer, (b) decrease the space consumed by each megabit, or (c) increase the fraction of the megabits on each wafer that actually work, i.e., improve yield.
(a) is expensive because you have to completely retool your entire fab to handle bigger wafers. (c) is extremely difficult, because yields are already in the 80-90% range. So (b) is the only real alternative, and it's the one the memory makers picked when they decided to switch from 16 Mbit DRAMs to 64 Mbit DRAMs after the 1996 DRAM price crash. Since the market was glutted, they couldn't make money on 16 Mbit chips, so they switched to 64 Mbit chips, which would have a higher ASP at roughly equivalent manufacturing cost.
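To put numbers on it, here is a rough back-of-the-envelope sketch in Python. Every figure in it (wafer cost, density, yield) is made up for illustration; the only point is the shape of the calculation: the wafer can get more expensive and the cost per megabit can still fall, as long as the shrink packs in megabits faster than the wafer cost rises.

    # Cost per good megabit shipped from one wafer (all numbers invented).
    def cost_per_megabit(wafer_cost, megabits_per_wafer, yield_fraction):
        return wafer_cost / (megabits_per_wafer * yield_fraction)

    # Hypothetical older process: cheaper wafers, fewer megabits on each.
    old = cost_per_megabit(wafer_cost=2000.0,
                           megabits_per_wafer=4000.0,
                           yield_fraction=0.85)

    # Hypothetical shrunk process: the wafer costs 50% more, but roughly
    # (0.35/0.25)**2 ~= 2x the megabits fit on it at the same yield.
    new = cost_per_megabit(wafer_cost=3000.0,
                           megabits_per_wafer=8000.0,
                           yield_fraction=0.85)

    print("old: $%.2f/Mbit  new: $%.2f/Mbit" % (old, new))
    # old: $0.59/Mbit  new: $0.44/Mbit -- the cost per megabit drops even
    # though the wafer itself got half again as expensive.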
Microcontrollers and microprocessors are more complicated, because it's hard to compare designs. But, holding the design constant, the same logic applies: a 0.25 micron process is cheaper, *per chip*, than a 0.35 micron process. Motorola could probably sell those $1 microcontrollers for 50 cents if they used a 0.25 micron process, but they won't, because there's no real competitive advantage in a 50 cent microcontroller over a $1 one. There *is* a huge competitive advantage in a $200 chip over a $300 one, so Motorola uses its 0.25 micron process for PowerPCs and DSPs instead.
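The same back-of-the-envelope arithmetic, per die instead of per megabit, for one fixed design. The figures are again invented; the one working assumption is that die area scales roughly with the square of the feature size, so a layout shrunk from 0.35 to 0.25 micron gives about twice as many dies per wafer.

    # Per-die cost of one fixed design on two processes (made-up numbers).
    WAFER_AREA = 3.14159 * 100.0**2            # mm^2, a 200 mm wafer

    def cost_per_die(wafer_cost, die_area, yield_fraction):
        gross_dies = WAFER_AREA / die_area     # crude: ignores edge losses
        return wafer_cost / (gross_dies * yield_fraction)

    die_035 = 50.0                             # mm^2 at 0.35 micron (assumed)
    die_025 = die_035 * (0.25 / 0.35)**2       # ~25.5 mm^2 after the shrink

    print("0.35um: $%.2f/die" % cost_per_die(2000.0, die_035, 0.85))  # ~$3.74
    print("0.25um: $%.2f/die" % cost_per_die(3000.0, die_025, 0.85))  # ~$2.87

The percentage saving is roughly the same whether the part sells for $1 or $300; it's the absolute dollars per part that add up to a real competitive advantage, and those only get large on the expensive chips.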
Also complicating the equation for microprocessors is the performance issue: smaller devices are faster, even if you do nothing else to the design. So a 0.25 micron process makes more desirable chips for less money.
Again, yield is the wild card in all of this, as AMD discovered. If you can't get equivalent yield out of your 0.25 micron process, the economics of the shrink are lost. So yes, a 0.35 micron process might be more economical, but the big reason would be yield and process maturity, not depreciation of the fab.
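In the same made-up terms as the sketch above, you can solve for the yield at which the shrink stops paying for itself:

    # Break-even yield on the 0.25 micron process, using the invented
    # numbers from the earlier sketch.  Per-good-die cost is
    #     wafer_cost * die_area / (wafer_area * yield)
    # so set the two processes equal and solve for the new yield:
    breakeven = 0.85 * (3000.0 * 25.5) / (2000.0 * 50.0)
    print("break-even 0.25 micron yield: %.0f%%" % (breakeven * 100))  # ~65%
    # Below that, the mature 0.35 micron line is cheaper per *good* die,
    # before anyone even mentions depreciation.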
Katherine