Technology Stocks : Rambus (RMBS) - Eagle or Penguin

To: Dave B who wrote (39438), 4/9/2000 2:37:00 AM
From: Bilow
 
Hi Dave B; I never, ever said that RDRAM would never, ever work. I never said that you would never see them in systems under $2000. So I can't answer for the people who said those things.

What I did say was that RDRAM would be plagued by high cost (not necessarily high price, mind you), difficult manufacturing, reliability issues, and poor price/performance. All of these things have come to pass.

My guess is that eventually the cost of RIMMs will stabilize at somewhere between 1.5 and 2 times the cost of DIMMs. This is in keeping with other industry estimates for the ratio. Right now RIMMs are way out of line, but I don't think that the ratio will stay that way forever.

A year ago I thought that the above pricing would have come about long before now, at least by the end of last year. It didn't, which I found surprising.

The high RIMM prices are pushing the development of DDR machines forward a lot faster than it would otherwise have gone. But even if RIMM pricing had come down to the above ratios, DDR would still take RDRAM out on cost/performance.

I am not at all unreasonable, nor have I seen the posts you are talking about, the ones that stated that RDRAM would never work. Please provide some links to comments on this thread to that effect; I don't think they exist.

Change of subject.

Nvidia is picking up all the awards for hot graphics cards these days, and we really should talk a little about that, and about the evolutionary changes that periodically come over the memory industry.

When RDRAM first came out, it was touted as a solution for graphics problems. This was because of the very high bandwidth a single RDRAM chip could provide. If a designer needs high bandwidth, he can always get it by widening his data bus; the traditional width on graphics boards is 128 bits. But graphics typically doesn't require a lot of memory, so widening the data bus means buying more memory chips than are needed to store the image data.

For this reason, graphics memories have always been the DRAM niche with the highest bandwidth per chip. Another way of describing this situation is to say that graphics requires high system bandwidths, but low memory granularity or size. RDRAM's high bandwidth per pin naturally tends to raise the bandwidth per chip, so RDRAM was a natural contender in the graphics market, and, in fact, did well in that market.
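
To put the granularity point in concrete terms, here is a little Python arithmetic. Apart from the 128-bit bus width mentioned above, every figure in it is an assumed round number chosen for illustration, not a datasheet value: a hypothetical x16, 16 MB chip at roughly 12.5 MB/s per data pin, versus a hypothetical single chip that delivers about 1600 MB/s on its own.

    # Rough arithmetic for the bandwidth-vs-granularity trade-off.
    # All figures are illustrative round numbers, not datasheet values.

    FRAME_BUFFER_MB = 32       # memory the graphics application actually needs
    CHIP_CAPACITY_MB = 16      # assumed capacity of a single memory chip

    def wide_bus_design(bus_bits, chip_width_bits, mb_per_s_per_pin):
        """Get bandwidth the traditional way: widen the data bus."""
        chips = bus_bits // chip_width_bits        # chip count set by bus width
        bandwidth = bus_bits * mb_per_s_per_pin    # total data-pin bandwidth
        capacity = chips * CHIP_CAPACITY_MB        # memory you are forced to buy
        return chips, bandwidth, capacity

    def fast_chip_design(chips, chip_bw_mb_per_s):
        """Get bandwidth from a few very fast chips instead."""
        return chips, chips * chip_bw_mb_per_s, chips * CHIP_CAPACITY_MB

    chips, bw, cap = wide_bus_design(bus_bits=128, chip_width_bits=16,
                                     mb_per_s_per_pin=12.5)
    print(f"wide bus:  {chips} chips, {bw:.0f} MB/s, {cap} MB bought "
          f"(need {FRAME_BUFFER_MB} MB)")

    chips, bw, cap = fast_chip_design(chips=1, chip_bw_mb_per_s=1600)
    print(f"fast chip: {chips} chip,  {bw:.0f} MB/s, {cap} MB bought "
          f"(need {FRAME_BUFFER_MB} MB)")

With these made-up numbers, both designs land at roughly the same bandwidth, but the wide-bus version buys 128 MB to hold a 32 MB frame buffer while the fast chip covers it with a single 16 MB device. That is what high system bandwidth at low granularity means in practice.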

For those of us who have been in the industry a long time, the above story seems very familiar. SDRAM provided much higher bandwidth per pin than the older DRAM families, and it, too, was first sold into the graphics market. The reason SDRAM didn't show up immediately in the main memory market was similar to the reason RDRAM had problems: latency. SDRAM, you see, has longer latency than the asynchronous DRAM that preceded it. The reason for the longer latency is simply those registers on the inputs and outputs. Without them, the part provides data a bit earlier. (I should note that the above statement applies to systems in general, and not to a particular system. This is for arcane digital design reasons, and is kind of hard to explain.)
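
To put rough numbers on that, here is a worst-case sketch; every value in it is an assumed round figure for illustration, not any real part's timing. Wrap the same DRAM core in input and output registers and the first word comes back a couple of clock cycles later.

    # Back-of-the-envelope latency comparison; every number here is an
    # assumed round figure for illustration, not a particular part's timing.
    CLOCK_MHZ = 100
    CYCLE_NS = 1000.0 / CLOCK_MHZ    # 10 ns per clock
    CORE_ACCESS_NS = 50              # assumed time for the DRAM core itself

    # Asynchronous DRAM: data appears a fixed access time after the address.
    async_dram_ns = CORE_ACCESS_NS

    # SDRAM: the same core, wrapped in registers. The command can wait up to a
    # full cycle for a clock edge to be latched, and the data then sits in the
    # output register until the next edge before it can be driven off chip.
    sdram_ns = CYCLE_NS + CORE_ACCESS_NS + CYCLE_NS

    print(f"asynchronous DRAM: ~{async_dram_ns} ns to first data")
    print(f"SDRAM:             ~{sdram_ns:.0f} ns to first data")

The specific values are invented, but the direction is the point: the registers that make the clocked, high-bandwidth interface possible are also exactly what delays the first word.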

SDRAM moved into main memory quickly because its interface is so similar to DRAM's. That is a transition that VRAM, another graphics memory type, was never able to make. The reason VRAM couldn't be used in main memory was that its high-bandwidth serial interface wasn't really similar to what system designers wanted to use. It isn't easy to change a chip's memory interface; Intel's current difficulties are an example.

Rambus naturally wanted its memories to be used for main memory as well, but the part had to be redesigned to meet the requirements of those systems. So Direct RDRAM came into being. Direct RDRAM was designed into systems, but it had a high system cost and provided insufficiently improved performance. People in the memory industry knew what was going on, but Intel pushed the technology, and people had to go along with it; Intel's processors were supposed to dump all other memory types after '99.

Now DDR has taken over the graphics niche and is knocking on the door of main memory. This is exactly what SDRAM did, and what RDRAM tried to do.

So who's to win? The advantage that DDR has is that it is a natural extension of SDRAM. In fact, the MVP3 chip set from quite some time ago (viatech.com) originally came out with support for asynchronous DRAM, SDRAM, and DDR DRAM. A little premature, but that's how easy the memory is to control, and that is why the industry is making a bloodless transition from SDRAM to DDR. A lot of people were convinced that RDRAM was going to be the next big memory type, and DDR technology got delayed. Now it is finally making the transition to PCs, just as SDRAM already did.

-- Carl