Technology Stocks : DELL Bear Thread

To: Gary Wisdom who wrote (2464), 3/11/1999 2:25:00 AM
From: Bilow
 
Hi Gary Wisdom, thanks for responding to my note.

We disagree about what Rambus is designed for. This is an engineering question, and this is the sort of thing I enjoy discussing. You think that Rambus is designed for higher end machines.

Another link illustrating the general industry belief that Rambus belongs in the low end:

Next-gen memory modules ready to roll
Smart Modular Technologies has a foot in both the DDR and Rambus camps, and like many, Smart's Johnston sees Rambus starting life as primarily a PC phenomenon, while DDR SDRAMs address applications in high-end workstations and servers.
techweb.com

You might suppose that engineering professionals who get published in EE-Times know what they're talking about, but it is all too easy to dismiss a realistic article as biased. So I will try to explain further.

It is true that Rambus would like you to believe that their technology is the solution for all problems, but this is just the usual marketing bullshit. The fact is that rambus DRAMs have dies about 10% larger than the equivalent SDRAMs, and this means an automatic price penalty (before licensing fees or yield losses) of about 10%. High-end servers (with 4GB of system RAM) cannot afford the extra memory cost. So rambus parts are only going to be used in systems that can absorb the extra cost. On the other hand, rambus supposedly offers higher bandwidth, and so should be worth the premium...
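
To put a rough number on that, here is the arithmetic in a few lines of Python; the price-per-megabyte figure is purely my own assumption for illustration, not a quote:

  # Rough sketch of the cost penalty on a big server.  The $/MB figure
  # below is an assumption for illustration only, not a real quote.
  system_mb    = 4 * 1024       # 4GB of system RAM, in MB
  price_per_mb = 1.00           # assumed SDRAM street price, $/MB
  die_penalty  = 0.10           # RDRAM die ~10% larger than the equivalent SDRAM

  sdram_cost = system_mb * price_per_mb
  rdram_cost = sdram_cost * (1 + die_penalty)   # before licensing or yield losses
  print(sdram_cost, rdram_cost - sdram_cost)    # roughly $4100 total, ~$400 extra

Call it an extra $400 a box from die area alone, and that's the optimistic case.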

Those of you who have not spent the last 15 years of your life designing high-end memory subsystems (as I have) probably hold the common belief that since a single rambus chip is so much faster than a single SDRAM chip, a memory system made from rambus chips must be that much faster than a memory system made from SDRAM chips. This is true for small memories (as in desktops), but it is completely incorrect for large memory systems. Since this is likely to be a common misconception, I will now spend some time explaining why.

There are two basic principles of bandwidth calculations:

The first basic principle of memory design is that bandwidths can be made to add. In other words, a memory subsystem with two memory chips can be made to have a bandwidth twice that of a design with only one memory chip.

The second basic principle of memory design is that the actual bandwidth limit of a system is going to be its slowest component. In other words, there is no reason to hook a garden hose up to a fire hydrant, or a fire hose up to a kitchen tap. This is sort of similar to the old analog principle of matching impedances...
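
If you want both principles boiled down to one line, here is a toy sketch in Python (the numbers are placeholders, not any real system):

  # principle 1: the chip bandwidths add
  # principle 2: the slowest component in the path sets the real limit
  def effective_bw(n_chips, bw_per_chip, slowest_link):
      return min(n_chips * bw_per_chip, slowest_link)

  print(effective_bw(1, 100, 1000))    # 100  -- one chip
  print(effective_bw(2, 100, 1000))    # 200  -- two chips, bandwidth doubles
  print(effective_bw(20, 100, 1000))   # 1000 -- capped by the slowest link, not 2000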

First, let's analyze some numbers for a small memory system, say 128MB. I'm going to compare RDRAM parts of size 128Mb and 133-MHz SDRAM parts of size 64Mb, but this is only because these are the high-end parts on the Micron web site:
micron.com

To get 128MB, you need 8 rambus parts. Each provides a peak bandwidth of 1600MB/sec, giving a total peak bandwidth of 12800MB/sec.

The SDRAM system needs 16 of the 64Mb parts. Since bandwidth is an issue, we choose the x16 variety; at 133MHz, these 16 parts give a total peak bandwidth of 133 x 2 x 16 = 4256MB/sec.

Note that the rambus bandwidth is substantially higher, as expected.
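
For anyone who wants to check the arithmetic, here it is written out (peak numbers only, ignoring refresh and latency):

  # 128MB system, peak bandwidths only
  rdram_parts = (128 * 8) // 128       # 128MB = 1024Mb, so 8 of the 128Mb RDRAMs
  rdram_bw    = rdram_parts * 1600     # 1600MB/sec per RDRAM

  sdram_parts = (128 * 8) // 64        # 16 of the 64Mb SDRAMs
  sdram_bw    = sdram_parts * 133 * 2  # an x16 part moves 2 bytes per 133MHz clock

  print(rdram_bw, sdram_bw)            # 12800 vs 4256 MB/sec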

Now repeat the calculation, but for a 1GB memory size. The respective peak bandwidths become 102GB/sec and 34GB/sec. The rambus design still has the edge on paper, but in practice neither of these bandwidths is achievable. Instead, the actual performance will be limited by the bandwidth the CPU(s) can consume.
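
Same arithmetic at 1GB, if you want to check the 102 and the 34:

  # 1GB (1024MB) system, peak bandwidths only
  rdram_parts = (1024 * 8) // 128      # 64 RDRAM parts
  sdram_parts = (1024 * 8) // 64       # 128 SDRAM parts
  print(rdram_parts * 1600)            # 102400 MB/sec, i.e. ~102GB/sec
  print(sdram_parts * 133 * 2)         # 34048 MB/sec, i.e. ~34GB/sec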

Assuming a CPU bandwidth of 4GB/sec, and four CPUs, the maximum rate the CPU(s) can access memory is 16GB/sec. Therefore, the rambus solution and the SDRAM solution provide the same performance to the customer in terms of peak bandwidth. But since the rambus solution costs at least 10% more (probably more like 50% more, given the supply crunch), the SDRAM solution will be chosen.
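
And with that CPU ceiling folded in (the 4GB/sec per CPU is my assumption for a box of this class, not a measured number):

  # Principle 2 kicks in: the CPUs, not the memory, set the ceiling
  cpu_bw    = 4000                     # assumed 4GB/sec per CPU, in MB/sec
  n_cpus    = 4
  cpu_limit = cpu_bw * n_cpus          # 16000 MB/sec total

  print(min(102400, cpu_limit))        # 16000 -- rambus system, CPU-limited
  print(min(34048, cpu_limit))         # 16000 -- SDRAM system, CPU-limited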

I'm not going to get into the other issues of memory performance (refresh, latency, precharge, and banking), but RDRAM has no special advantage in those areas, and EE-Times reports some disadvantages.

I hope this short note explains why the industry expects RDRAM to end up in the low end of the market, while SDRAM (DDR or 133MHz) keeps the high end, supporting my contention that Intel is in trouble.

-- Carl

P.S. I have no short positions right now in anything. I don't think this bubble pops until the unemployment rate goes up substantially, and I don't see that happening this year. So don't worry about me setting up a long term short of RMBS or INTC anytime soon.