Technology Stocks : Semi Equipment Analysis


To: Elroy who wrote (91604) | 2/3/2024 3:49:56 PM
From: Sun Tzu

 
I get that. My engineering background is in semiconductor/chip design.

What you are unwittingly describing is a fundamental shift in the industry, one that is not sustainable.

For eons the driver behind reducing feature size was the cost advantage it brought. Every wafer contains impurities. In the old days, if a chip happened to land where an impurity was, you had to throw it away. Today we're more flexible, because wafer purity has improved and modern chip design allows for redundancy or for downgrading partly defective chips. But the core economic driver remains the same: the number of good chips per wafer scales roughly with the inverse square of the feature size. So when you went from 1 micron to 100 nanometers, your feature size shrank 10x but your chips per wafer went up about 100x.
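
Here's a minimal back-of-envelope sketch of that scaling in Python. The wafer size, defect density, and die sizes are made-up illustration numbers, and the yield model is the textbook Poisson approximation; the inverse-square relation is the point, not the specific figures.

    import math

    # Illustrative numbers only -- not real process data.
    WAFER_DIAMETER_MM = 300.0
    DEFECTS_PER_CM2 = 0.1          # assumed defect density

    def dies_per_wafer(die_side_mm):
        """Rough die count: wafer area / die area (ignores edge loss)."""
        wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
        return wafer_area / (die_side_mm ** 2)

    def fraction_good(die_side_mm):
        """Poisson defect-limited yield: exp(-defect_density * die_area)."""
        die_area_cm2 = (die_side_mm / 10.0) ** 2
        return math.exp(-DEFECTS_PER_CM2 * die_area_cm2)

    # Shrinking the same design 10x linearly shrinks die area ~100x, so good
    # dies per wafer rise ~100x (plus a bonus because smaller dies dodge defects).
    for side_mm in (20.0, 2.0):    # hypothetical die edge lengths
        good = dies_per_wafer(side_mm) * fraction_good(side_mm)
        print(f"{side_mm:4.1f} mm die: ~{good:,.0f} good dies per wafer")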

You can see, then, why it was important to keep up with the latest fab equipment: if you didn't, your competition's cost advantage would put you out of business.

But this is no longer true. You only need the advanced generations to build advanced chips for things like supercomputing. Why? Because the current feature-size talk is pure marketing BS. Yes, some minute part of the chip may be 7nm, but the gates are 16nm and the interconnect lanes are 70nm, so no way, no how can a company get the old square-law jump in chips per wafer from the smaller headline number.
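
To put rough numbers on that: the 16nm gate and 70nm lane figures are the ones quoted above, while the older-generation pitches and the "14nm to 7nm" naming jump are assumptions added purely for illustration.

    # Node "names" vs. the dimensions that actually set die area.
    old_gate_nm, old_lane_nm = 28.0, 90.0    # hypothetical previous-generation pitches
    new_gate_nm, new_lane_nm = 16.0, 70.0    # pitches quoted in the post

    name_implies = (14.0 / 7.0) ** 2         # a "14nm -> 7nm" name implies 4x density
    pitches_imply = (old_gate_nm * old_lane_nm) / (new_gate_nm * new_lane_nm)

    print(f"Density gain implied by the node name: {name_implies:.1f}x")
    print(f"Density gain implied by real pitches : {pitches_imply:.1f}x")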

So who uses that? nVidia and a few other bleeding-edge players use it because that is what they need to build their chips, not because it is the most cost-efficient solution. NVDA's revenue is dominated by the H100, H200, and all the gear used for supercomputing and AI. Making bleeding-edge tech is a legitimate use of bleeding-edge fab processes.

But notice what is missing? Memory chips, auto chips, almost everything else. Why? Because unlike the old days, when a smaller feature size gave you roughly x^2 more chips per wafer and 2x more speed, the savings you get now don't scale enough to justify the increase in costs.
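
Here's a sketch of the cost math that keeps commodity parts on older nodes. Every wafer price, die count, and yield figure below is a made-up placeholder; only the shape of the calculation matters.

    # Hypothetical numbers only -- illustrating cost per good die, not real pricing.
    nodes = {
        "mature node":  {"wafer_cost": 3_000,  "dies_per_wafer": 500,   "yield": 0.95},
        "leading edge": {"wafer_cost": 17_000, "dies_per_wafer": 1_100, "yield": 0.70},
    }

    for name, n in nodes.items():
        good_dies = n["dies_per_wafer"] * n["yield"]
        print(f"{name:12s}: ${n['wafer_cost'] / good_dies:6.2f} per good die")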

And this is what almost nobody has come to appreciate yet.

The end is not going to be tomorrow or next year. But unless something changes drastically, the next generation is not going to deliver anything comparable to past generational upgrades.



To: Elroy who wrote (91604) | 2/4/2024 10:33:29 PM
From: Kirk ©

 
It might be as simple as this: the power savings let you pack so much more into a given space that the cooling costs alone pay for it. Yes, you can use 100 slower, old chips, but they would be further apart and need 100 times the cooling, give or take. And that separation might mean the system runs slower just from the extra time it takes to move the electrons between chips.
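
A back-of-envelope version of that argument, with invented power draws, electricity price, and cooling overhead just to show how the operating-cost term can dominate:

    # Invented figures, purely to illustrate the operating-cost side of the argument.
    KWH_PRICE = 0.10          # $/kWh, assumed
    COOLING_OVERHEAD = 0.4    # assume 0.4 W of cooling per 1 W of chip power
    HOURS_PER_YEAR = 24 * 365

    def yearly_power_cost(num_chips, watts_per_chip):
        total_kw = num_chips * watts_per_chip * (1 + COOLING_OVERHEAD) / 1000.0
        return total_kw * HOURS_PER_YEAR * KWH_PRICE

    # One hypothetical new accelerator vs. 100 older chips doing the same work.
    print(f"  1 new chip  @ 700 W: ${yearly_power_cost(1, 700):,.0f} per year")
    print(f"100 old chips @  75 W: ${yearly_power_cost(100, 75):,.0f} per year")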

I don't think it is like fighter jets and high-end audio equipment, where paying 10x more gives you maybe 10% more performance (if you're lucky).