Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Alan Hume who wrote (28955), 9/9/1999 9:17:00 AM
From: wily
 
>>IBM sold its discrete memory division recently<<

Alan,

I didn't know that. Earlier this year they were talking about a memory technology (magnetoresistive?) that they were going to "bet the house on". Am I all mixed up?

wily



To: Alan Hume who wrote (28955), 9/9/1999 4:16:00 PM
From: Bilow
 
Hi Alan Hume; I believe that on further research you will find embedded DRAM to be more useful than funny.

In fact, embedded DRAM is not suitable for those who need really huge memory systems. But not all memory systems are huge. Rambus has no advantage in the design and manufacture of huge memory systems either, which is why you see the server types running away from it and going to PC133 and DDR.

Memory systems can be either size bound or bandwidth bound (and sometimes latency bound...). If they are size bound, then you have to buy a given number of chips to get the memory size you need, and that number of chips will provide enough bandwidth, or more than enough. This is the memory-system restriction we are all familiar with, and it is not a happy place for Rambus to compete in. Rambus costs more per chip and uses more power, so it makes for more expensive size-bound systems.

Systems that are bandwidth bound require a given number of chips to provide sufficient bandwidth, and that number of chips will provide a memory size which is big enough, or more than big enough. In the graphics world, the historical problem has been bandwidth-bound memory systems rather than size-bound ones. That is why graphics controllers so frequently provide extra pages of display memory: they are there as a free consequence of the memory chips being too tall. If wider memory chips had been available (just as fast, and also cheaper), the engineers would have taken those features out and provided a cheaper product. Memory chips of previous generations are always shorter, but sometimes they cannot be used, because either they are more expensive than the newer ones (i.e. they are very obsolete), or they have lower bandwidth, or they may not be available for the full expected lifetime of the product.

So chips that are designed to solve bandwidth problems tend to have higher bandwidth per bit of capacity. That is, the chips tend to be "wide" rather than "tall". This distinction is why memory families are almost always available in a variety of widths, e.g. 4Mx16 (wide), 8Mx8 (medium), and 16Mx4 (tall). If you are bandwidth bound, you use the wide x16 part; it has four times the bandwidth of the x4 part. If you are size bound, you use the tall x4 part, as it has 1/4 the loading on the data bus, as well as slightly lower power consumption and fewer routed PCB pins.
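To make the wide-vs-tall arithmetic concrete, here is a minimal sketch (mine, not from the post) that counts chips under both constraints. The x4/x8/x16 organizations are the 64Mb-family parts above; the 100 MHz per-pin data rate and the two system targets are made-up illustrative numbers.

from math import ceil

MBIT = 1 << 20                      # one megabit

# All three are 64Mb parts from the family above: (name, capacity, data width)
parts = [
    ("16Mx4  (tall)",   64 * MBIT,  4),
    ("8Mx8   (medium)", 64 * MBIT,  8),
    ("4Mx16  (wide)",   64 * MBIT, 16),
]

DATA_RATE_MHZ = 100                 # assumed per-pin data rate, illustrative only

def chips_needed(capacity_bits, width_bits, size_bits, bw_mb_s):
    """Chips required to meet both the capacity and the bandwidth target."""
    for_size = ceil(size_bits / capacity_bits)
    per_chip_mb_s = width_bits * DATA_RATE_MHZ / 8   # MB/s from one chip
    for_bw = ceil(bw_mb_s / per_chip_mb_s)
    return for_size, for_bw, max(for_size, for_bw)

# Two made-up systems: a size-bound main memory and a bandwidth-bound frame buffer.
systems = [
    ("size-bound: 64 MB, 200 MB/s",      64 * 8 * MBIT, 200),
    ("bandwidth-bound: 8 MB, 800 MB/s",   8 * 8 * MBIT, 800),
]

for label, size_bits, bw in systems:
    print(label)
    for name, cap, width in parts:
        s, b, buy = chips_needed(cap, width, size_bits, bw)
        print(f"  {name}: {s} for size, {b} for bandwidth -> buy {buy}")

In the size-bound case the answer is 8 chips no matter which width you pick, so the chip width buys you nothing. In the bandwidth-bound case the wide part cuts the count from 16 to 4, and those 4 chips hold 32 MB even though only 8 MB was needed, which is exactly where the "free" extra pages of display memory come from.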

Regarding very small, bandwidth-bound memory systems: a lot of the time engineers will use a part that has a lot more memory on it than they need, because it's the cheapest way of getting the bandwidth they need. The phrase we use to describe this is that "the memory chips are too tall." Another way of putting it is that engineers like to have DRAM that is available in small chunks. This is a great advantage of RDRAM, and one that is trumpeted on their web site; I don't remember how they explain it, but basically with RDRAM the user can upgrade memory in smaller chunks. Embedded DRAM is taking the designs that need ultra-wide DRAM, as it provides memory efficiently in much smaller chunks, but with much higher bandwidth than either Rambus or SGRAM.

Rambus is an extremely wide memory technology. That is, it makes for chips that provide very high bandwidth compared to their memory size. For this reason, Rambus is a natural for graphics memory; in fact, you will find press releases where display controller makers have announced support for Rambus. Basically, you can use fewer chips when you are bandwidth limited if you use Rambus chips.
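As a rough back-of-the-envelope illustration of "fewer chips when bandwidth limited" (my figures, not from the post, and only approximate): a PC800 RDRAM device moves 16 bits at 800 MT/s, while an x16 PC133 SDRAM chip moves 16 bits at 133 MT/s.

from math import ceil

# Assumed 1999-era per-chip figures (approximate, for illustration only).
rdram_gb_s = 16 / 8 * 800e6 / 1e9   # ~1.6 GB/s per PC800 RDRAM device
sdram_gb_s = 16 / 8 * 133e6 / 1e9   # ~0.27 GB/s per x16 PC133 SDRAM chip

target_gb_s = 1.6                   # hypothetical display-controller bandwidth need

print("RDRAM chips:", ceil(target_gb_s / rdram_gb_s))   # 1
print("SDRAM chips:", ceil(target_gb_s / sdram_gb_s))   # 7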

Embedded DRAM is already making inroads into the graphics chip business, and it is going to reduce the number of discrete DRAMs consumed in that industry, so you might consider reading up on it. If you're interested, I'll dredge up some EE Times articles on the subject.

-- Carl



To: Alan Hume who wrote (28955), 9/15/1999 6:29:00 PM
From: Bilow
 
Hi Alan Hume; More stuff on embedded DRAM.

Since I was already looking at that embedded DRAM data sheet, I can answer the question of how many 256Mb DRAMs' worth of memory we can fit onto a die. Basically, this is to show the limits of the technology, not necessarily the cheapest or best way of doing things.

The 16Mb DRAM memory macro is 20.12 square mm (page 364/382). The maximum die size (page 11/29) is 42.5mm on a side, for a total die area of about 1806 square mm. There is therefore room for 1806/20.12 = 89 DRAM macros, which would put a total of 89 x 16Mb = 1424Mb on chip. Of course, this figure is unrealistic, as space has to be reserved for I/O and other logic, so we are probably limited to about 1Gb, roughly 4 times larger than the 256Mb RDRAM due out next year. In any case, it is possible to put a lot of embedded DRAM on a chip; this makes things smaller and generally better, and eliminates the need for external memory in many high-volume applications.
(big pdf) chips.ibm.com
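For what it's worth, the arithmetic above can be checked in a few lines; the only inputs are the two data-sheet figures quoted in the previous paragraph.

macro_mm2   = 20.12                  # area of one 16Mb DRAM macro (from the data sheet)
die_side_mm = 42.5                   # maximum die edge length (from the data sheet)

die_mm2  = die_side_mm ** 2          # about 1806 square mm
macros   = int(die_mm2 // macro_mm2) # 89 macros fit
total_mb = macros * 16               # 1424 Mb, before reserving area for I/O and logic

print(die_mm2, macros, total_mb)     # 1806.25 89 1424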

-- Carl