

To: Bill Jackson who wrote (71411), 2/12/2002 12:06:30 AM
From: tcmay
 
Moore's Law and Ovonics: Polynomial scaling versus one-shot shifts

"Tim, Well, stave off/continue past the point where feature sizes get so small that leakage forms the density endpoint."

You are using "stave off" in the opposite sense I would have expected. Moore's Law is not seen as any kind of _limit_ to density. Rather, the buzz out of IEDM and ISSCC is that designers will have trouble _keeping up with_ certain density trends predicted in some graphs of Moore's Law (I keep saying "some" because there is of course no single exemplar of Moore's Law, it being just a rule-o-thumb observation).

I assume this is the sense in which you mean ovonic-type devices will "continue past" it, not "stave off" as in "staving off starvation." But this is a word-usage quibble, so I won't say more on it.

However, you raise an important point:

"... The multistate cells of the ovonics methods will allow an increase in density which is tantamout to bypassing Moore's law."

In the theory of computing and algorithms, we refer to things which grow "exponentially" and things which grow "polynomially" or "linearly." A cipher, for example, gets harder to break in a way that is exponential in key length, which makes for some very strong ciphers (so strong that there will probably never be enough computation in the entire universe, over all of time, to break them).
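
A minimal sketch of what "exponential in key length" means (the key sizes are just illustrative): each added bit doubles the brute-force work.

# Toy sketch: brute-forcing an n-bit key takes ~2**n trials,
# doubling with every added bit.
for key_bits in (56, 128, 256):
    print(f"{key_bits}-bit key: ~{2 ** key_bits:.2e} trials")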

Anyway, device scaling tends to produce devices which are polynomially faster and cheaper, other things being held the same. Area goes as the square, of course, and speeds tend to go linearly, resulting in (very, very roughly) an overall improvement which goes as the cube of the scale factor.
Going from 0.18 micron to 0.09 micron, all other things (design) held constant, should produce 8x in density times speed: 4x from area, 2x from speed. (Of course, voltages are usually lowered with scaling, which cuts the speed-up. There are other issues.)
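
A minimal Python sketch of that arithmetic, assuming a clean 2x linear shrink and ignoring voltage effects:

# Halving the feature size quadruples density (area goes as the
# square) and roughly doubles speed, for ~8x combined.
def scaling_gain(old_um, new_um):
    s = old_um / new_um      # linear scale factor (2x here)
    density = s ** 2         # bits per area: square law
    speed = s                # clock speed: ~linear
    return density, speed, density * speed

d, v, total = scaling_gain(0.18, 0.09)
print(f"density x{d:.0f}, speed x{v:.0f}, combined x{total:.0f}")
# -> density x4, speed x2, combined x8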

But going to multi-level states will not benefit from this polynomial scaling. Assuming the number of states is N, and stays relatively constant with scale factor (*), this is a "one-time bump."

(* And my hunch is that the number of states per cell will drop with scaling, as all technologies have noise margins of some sort, and these margins usually deteriorate with scaling.)

You see where the math on this is going. Multi-state is not polynomial scaling...it is not even linear scaling. It's just a one-shot boost, if it works without yield hits, of 2x or 4x or maybe as much as 10x. (But I'll bet there are yield and reliability issues and that the gains, if the technology is used, are in the 2-4x range. Regardless of my bet, the more important point is that the effect doesn't scale and thus is just a constant factor in front of the polynomial.)
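
A minimal sketch, with made-up numbers, of why a constant factor washes out against compounding scaling:

# A multi-level-cell (MLC) bump multiplies capacity by a constant,
# while process scaling compounds ~4x per full generation.
def capacity(generations, mlc_bump=1.0):
    return mlc_bump * 4 ** generations

for g in range(4):
    print(f"gen {g}: baseline x{capacity(g):.0f}, "
          f"with one-shot 4x MLC x{capacity(g, 4.0):.0f}")
# The MLC advantage stays a fixed 4x ratio at every generation: a
# constant in front of the polynomial, not a change in the scaling law.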

Another way to view this point is to imagine that multi-state memory cells had been the norm for many years. All devices would be 1 Gbit RAMs instead of 256 Mbit RAMs, and so on.

Would people still be fretting about the increased leakage and other limits? You bet. Seen this way, multi-state is simply a 1-3 year shift in the time-axis of Moore's Law calculations. Maybe important for the company which gets it out first, assuming there are no other problems, but not important in the larger picture.
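
A back-of-envelope sketch of that time-axis shift, assuming (as is conventional) that density doubles every 18-24 months:

import math

# Years of ordinary scaling that a one-shot capacity bump is worth.
def shift_years(bump, doubling_months):
    return math.log2(bump) * doubling_months / 12

for bump in (2, 4, 10):
    lo, hi = shift_years(bump, 18), shift_years(bump, 24)
    print(f"{bump}x bump ~ {lo:.1f}-{hi:.1f} years of scaling")
# A 2-4x bump buys roughly 1.5-4 years of schedule, consistent with
# the 1-3 year shift above.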

--Tim May