Technology Stocks : Energy Conversion Devices


To: Michael Latas who wrote (4211) 11/14/1999 2:42:00 PM
From: Ray
 
Retiarius and Michael: Indeed, memory cell miniaturization is one key advantage of our Ovonic Unified Memory (OUM). I want to bring this into clearer focus by examining the articles recently brought to our attention by Retiarius (thanks). Forgive me if I err -- and, even so, I hope the discussion will at least be constructively furthered.

The article

techweb.com

is full of jargon unfamiliar to me -- including the "F" for "feature size". I think, though, that Retiarius is correct in suggesting that this is the more familiar "line width" -- that is, it denotes the process level: 1 micron, 0.2 micron, etc. I think the memory cell size is described generically in terms of F because this allows a sharper discussion of the overall state of the art across manufacturers -- and of the problems they face in achieving 1-Gbit DRAM chips. Since these manufacturers typically do not make the process equipment (mask-making gear, photoresists, steppers, etc.), they presumably all can, and typically do, use equipment at the same level of F (to be competitive, they would all tend to use the best basic equipment). However, each company has its own tricks for the cell design and for the specific process steps of its products, and these vary enough that the cell area differs from manufacturer to manufacturer. A neat way of showing the results of these variations is to express the cell area as C*F^2 (C times F-squared), or "CF2" in the jargon of the article. F^2 will presumably be the same for all competitively equipped manufacturers, so the C of a given manufacturer reveals, relatively, how effective its cell design and specific processes are.
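To make the C*F^2 idea concrete, here is a tiny sketch (my own illustrative numbers for F; the C values of 8 and 6 are from the article) comparing a "typical" cell to IBM's claimed 6F^2 cell:

```python
# Cell-area jargon from the article: area = C * F^2, where F is the
# process feature size ("line width") and C is a dimensionless factor
# reflecting each manufacturer's cell design and process tricks.

def cell_area_um2(c: float, f_um: float) -> float:
    """Cell area in square microns for a given C factor and feature size F."""
    return c * f_um ** 2

F = 0.15  # microns -- roughly state-of-the-art in late 1999 (assumed)

typical = cell_area_um2(8, F)  # "typical" DRAM cell, C = 8
ibm = cell_area_um2(6, F)      # IBM's claimed 6F^2 cell

print(f"8F^2 cell at F={F} um: {typical:.4f} sq um")
print(f"6F^2 cell at F={F} um: {ibm:.4f} sq um")
print(f"area saving from C=8 to C=6: {100 * (1 - ibm / typical):.0f}%")
```

So going from C = 8 to C = 6 buys a 25% smaller cell at the same F -- modest-sounding, but valuable when F itself is so hard to shrink.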

The article states that C is now typically 8 (in the lab, I think) and that IBM, for example, claims "it can reduce the cell size to 6F2". While this appears a meager improvement, it is actually important for current DRAM technology, since manufacturers are having many difficulties in achieving 1-Gbit chips -- a "last yard" situation. If your C is not good enough, then you need a smaller F or must accept a larger cell size, either of which means higher costs. From the article:

"The development comes at a time when DRAM makers
are still uncertain about the development costs and
required process technology needed for gigabit-scale
DRAM chips. Samsung Electronics, for one, has said it
will extend KrF technology down to 0.13-micron
design rules, and has suggested it can apply a tantalum
pentoxide capacitor to become one of the first suppliers
with a production-worthy 1-gigabit device. Others have
said they may have to wait until the 0.11-micron
generation before they can make 1-Gbit DRAM chips
cost effectively."

Note that a reduction in F of only about 15% (0.13 micron down to 0.11) is referred to as a "new generation" event.

The crux of the manufacturing problems is that the memory element for DRAMs is a capacitor, which DOES NOT SCALE with the process size. The capacitor must have an essentially fixed capacitance -- enough to maintain the required signal from the memory cell -- and practical limits are being reached for doing this in the required tiny cells. Long ago (in memory-tech ages), deep pits were being etched into the substrate to allow "tall" capacitors to be formed -- this simply increases the area of the capacitor (capacitance is directly proportional to plate area) while using less chip "real estate" (horizontal area). Deeper and deeper pits, as the opening shrinks, eventually become impractical, so IBM is using a "trench" design, which I take to be just what it implies -- a "pit" that is long in one direction (curved into an arc or spiral?). Others are building vertically layered capacitors that rise above the substrate and that employ high-dielectric-constant materials between the metal layers to increase the capacitance -- also a difficult and costly process. Both of these approaches, and others, are quite difficult compared to what is needed for fabricating the simple resistive dots of the OUM -- dots that not only inherently scale, but that also work BETTER (faster) as their size is scaled down (I gave the reasons for this earlier).
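A rough back-of-envelope calculation shows why the capacitor cannot scale. All the numbers here are my own assumptions, not from the article: a storage capacitance around 25 fF is a commonly quoted DRAM figure, and I assume a 5-nm silicon-dioxide dielectric. The parallel-plate formula C = eps0 * eps_r * A / d then gives the plate area needed:

```python
# Why the DRAM capacitor does not scale: the stored capacitance must
# stay roughly fixed, so the capacitor plate area cannot shrink with
# the cell footprint. Numbers below are illustrative assumptions.

EPS0 = 8.854e-12     # vacuum permittivity, F/m
EPS_OXIDE = 3.9      # relative permittivity of SiO2 (assumed dielectric)
T_OXIDE = 5e-9       # dielectric thickness, 5 nm (assumed)
C_REQUIRED = 25e-15  # required storage capacitance, ~25 fF (assumed)

# Parallel plates: C = eps0 * eps_r * A / d  =>  A = C * d / (eps0 * eps_r)
plate_area_m2 = C_REQUIRED * T_OXIDE / (EPS0 * EPS_OXIDE)
plate_area_um2 = plate_area_m2 * 1e12  # convert m^2 to square microns

F = 0.15                     # microns
footprint_um2 = 8 * F ** 2   # the 8F^2 cell footprint from the article

print(f"capacitor plate area needed: {plate_area_um2:.2f} sq um")
print(f"cell footprint (8F^2):       {footprint_um2:.3f} sq um")
print(f"plate area / footprint:      {plate_area_um2 / footprint_um2:.0f}x")
```

Under these assumptions the plates need roughly twenty times the area of the whole cell footprint -- hence the pits, trenches, and stacked structures that grow the capacitor into the third dimension.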

Bear in mind that the capacitor is the largest item in present DRAM cells. IBM, for example, now fabricates the transistor inside the walls of the capacitor (meaning that the transistor is necessarily smaller). This is a neat, but also desperate, trick. By contrast, the OUM resistive dot can be roughly as small as the process "F" size -- which is currently about 0.15 micron, I believe. Tyler probably knows what "C" this will yield for OUM, but I see no way we can accurately estimate it. It is certain to be substantially less than 6, however -- making a 1 giga CELL memory chip relatively easy for us (barring possible, though unlikely, process problems). And this means multi-Gbit NV chips due to the multi-level capability of the OUM memory element! Giga-BYTE chips anyone? Just a SIMM or few and you have a large-capacity "electronic disk". Ssslllooowww, fragile, clunky hard drives -- begone you foul beasties!

Unless (1) there is some unforeseen, or unspoken, difficulty in fabricating the OUMs or (2) there is some even better technology soon forthcoming, we have a HUGE WINNER. Tyler certainly believes that neither of these threats is at all likely; and, as nearly as I can tell from public descriptions of the other NV memory approaches (IBM, Hitachi, MMTI), he is right.

BTW, we can easily get the area bit density of present state-of-the-art DRAMs. Since one centimeter is 10^4 microns, a square centimeter is 10^8 square microns, so the bit density is 10^8/(8*0.15^2) -- about 550 Mbits per square cm.
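Checking that arithmetic (same figures as above: C = 8, F = 0.15 micron):

```python
# DRAM area bit density: bits per cm^2 = (sq microns per cm^2) / cell area.

F = 0.15                # feature size in microns
cell_area = 8 * F ** 2  # sq microns, the "typical" C = 8 cell
um2_per_cm2 = 1e8       # (10^4 microns per cm) squared

bits_per_cm2 = um2_per_cm2 / cell_area
print(f"{bits_per_cm2 / 1e6:.0f} Mbit per sq cm")
```

It comes out to about 556 Mbit per square cm.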

Also BTW, I want to comment on this statement by Retiarius:

"e.e. times discusses now-routine 256Mbit FLASH tech at

eetimes.com

where multilevel (2-bit/4-state) FLASH now comes in at
64M cells per 40 sq. mm. 4-bit-per-cell flash exists
in the labs."

Actually, they were referring to the chip size, 40 mm square, not the cell size. That's 16 sq cm (1600 sq mm, or 16*10^8 square microns). So, at say 4 bits per cell, we have 256 Mbits, and (256*10^6)/16 gives the bit density per sq cm -- 16 Mbit. This is roughly 1000 times less dense than our OUMs can apparently achieve with present process art. And, of course, FLASH memory is slow.
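Redoing that flash arithmetic under my reading of the figures (a 40 mm x 40 mm die, 256 Mbit at 4 bits per cell -- both as stated above, not verified against the EE Times piece), alongside the DRAM density from earlier:

```python
# Flash vs. DRAM area bit density, using the figures from the post.

chip_area_cm2 = (40 / 10) ** 2  # 40 mm on a side = 4 cm x 4 cm = 16 sq cm
flash_bits = 256e6              # 256 Mbit, assuming 4 bits per cell

flash_density = flash_bits / chip_area_cm2  # bits per sq cm

dram_density = 1e8 / (8 * 0.15 ** 2)  # the ~550 Mbit/sq cm DRAM figure

print(f"flash: {flash_density / 1e6:.0f} Mbit per sq cm")
print(f"DRAM:  {dram_density / 1e6:.0f} Mbit per sq cm")
```

That confirms the 16 Mbit per square cm figure for the flash chip.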

Regards, Ray