Politics : Formerly About Advanced Micro Devices

To: jim kelley who wrote (110938), 5/15/2000 1:09:00 PM
From: pgerassi
 
Dear Jim:

I read over the editorial. There are glaring holes in his "debunks."

First: you can get more bandwidth out of SDRAM without a large increase in pin count. This is called interleaving. You connect the data pins together and skew the clocks fed to each SDRAM DIMM. Doing this with pairs of DIMMs, you get PC266 from paired PC133: on the rising edge of the base clock, DIMM one's data is valid; on the falling edge (the rising edge of the skewed clock), DIMM two's data is. This is effectively DDR. Interleaving 4 DIMMs gets you PC533. This usually takes buffering on the DIMMs and tight control of trace characteristics to operate reliably, which is why workstations and servers usually require buffered DIMMs.
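As a rough check of that claim, here is a minimal Python sketch (not a hardware model; the clock and bus width are just the PC133 values above) showing how the interleave factor multiplies effective bandwidth on the shared data bus:

    BASE_CLOCK_MHZ = 133   # PC133 base clock
    BUS_BYTES = 8          # shared 64-bit data bus

    def effective_bandwidth_gbps(interleave_ways):
        # Each phase-skewed DIMM adds one transfer per base-clock period:
        # 1 way = plain PC133, 2 ways = PC266-like DDR, 4 ways = PC533-like.
        transfers_per_sec = BASE_CLOCK_MHZ * 1e6 * interleave_ways
        return transfers_per_sec * BUS_BYTES / 1e9

    for ways in (1, 2, 4):
        print(f"{ways}-way interleave: {effective_bandwidth_gbps(ways):.2f} GB/s")

    # 1-way: 1.06 GB/s, 2-way: 2.13 GB/s, 4-way: 4.26 GB/s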

Now, when you increase width, you do not double all the pins, just the data pins plus at most one or two additional control pins. Thus a 2-wide arrangement adds only 64 pins to the memory controller, and 4-wide adds 192 pins. Going 4 wide and 4-way interleaved gives you 16 times the bandwidth. If you start with PC133, that is 16.96 GB/s, far higher than any RAMBUS design currently out there, and over 10 times PC800's 1.6 GB/s. Ten PC800 channels would use 290 pins, where the above would use 278 pins. Now who has the lower pin count? Most SDRAM memory controllers already drive 4 DIMMs per trace, so the above is quite usable and not out of the ordinary.
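The pin and bandwidth arithmetic is easy to verify. A quick Python check (numbers taken from the figures above; the 86-pin single-channel interface is inferred from the quoted 278-pin total minus the 192 added data pins):

    PC133_GBPS = 1.06            # one 64-bit PC133 channel
    BASE_PINS = 278 - 192        # single-width controller interface, per the totals above
    DATA_PINS_PER_STEP = 64      # each extra width step adds one 64-bit bus

    width, interleave = 4, 4
    sdram_gbps = PC133_GBPS * width * interleave              # ~16.96 GB/s
    sdram_pins = BASE_PINS + (width - 1) * DATA_PINS_PER_STEP # 278 pins

    PC800_GBPS = 1.6             # one PC800 Rambus channel
    PC800_PINS = 29              # per channel, implied by the 290-pin figure
    rambus_pins = 10 * PC800_PINS                             # ten channels = 290 pins

    print(f"SDRAM 4-wide, 4-way: {sdram_gbps:.2f} GB/s on {sdram_pins} pins")
    print(f"Ten PC800 channels: {10 * PC800_GBPS:.1f} GB/s on {rambus_pins} pins")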

Second: power usage. If one RAMBUS chip is referenced constantly (quite probable, since current working sets are smaller than the 16 MB capacity of a RAMBUS chip), it draws far more power than an SDRAM chip of the same size, and engineers must design for worst-case loads. That one chip uses up to 4 times the power of an SDRAM chip (a worst-case figure, assuming the SDRAM behaves like typical parts). That higher thermal density, getting four times the heat off the same die area, requires a heat sink not because the heat needs to be spread, but because it needs to be removed.
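To put rough numbers on the thermal-density point, here is a toy calculation; every figure below is an assumed round value (the only thing taken from the argument above is the 4x ratio), not a datasheet number:

    SDRAM_WATTS = 0.3                 # assumed worst-case SDRAM chip power
    RDRAM_WATTS = 4 * SDRAM_WATTS     # the 4x worst-case figure cited above
    DIE_AREA_CM2 = 1.0                # assumed comparable die area for both

    for name, watts in (("SDRAM", SDRAM_WATTS), ("RDRAM", RDRAM_WATTS)):
        print(f"{name}: {watts:.1f} W / {DIE_AREA_CM2:.1f} cm^2 = "
              f"{watts / DIE_AREA_CM2:.1f} W/cm^2")

    # Same area, four times the flux: the heat sink is there to carry
    # heat away, not merely to spread it.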

Third: it costs more because of the way RDRAM is tested. Due to the high heat of the chip, full die testing requires the heat sink, and the heat sink is not in place until the RDRAM dies are assembled into a RIMM. If one chip does not meet spec, the whole RIMM is wasted. Thus, if RAMBUS dies yield 95% good after some minor die-level testing, the overall module yield is 95% ^ 8 * RIMM board yield * RIMM assembly yield, or about 60%. If die yields are 90%, you get roughly 30-40% RIMM yield. I believe individual RDRAM chips are within 2-5% of SDRAM yields, but RIMM yields are much worse than SDRAM DIMM yields (around 99%) because DIMMs are built from fully tested, "Known Good Die" parts. A yield of 60% would roughly double the price, and a yield of 30% would roughly quadruple it. Even if 95% were correct, a 256 MB RIMM uses 16 chips (dies), and 95% ^ 16 is about 44%, which puts module yield back in the 30-40% range once board and assembly losses are included. RAMBUS costs come down only if single dies can be tested more fully before assembly. The cost differential is inherent in the RAMBUS RIMM design: the heat appears to be concentrated in the I/O portion of the die, most likely the high-speed drivers and control logic. On buffered DIMMs, that job is relegated to separate driver ICs on the DIMM.
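The compounding is easy to reproduce. In the sketch below, the board and assembly yields are assumptions (95% each), chosen so the 8-chip case lands near the ~60% figure above:

    BOARD_YIELD = 0.95      # assumed RIMM board yield
    ASSEMBLY_YIELD = 0.95   # assumed RIMM assembly yield

    def rimm_yield(die_yield, chips):
        # Every die on the module must pass; one bad chip scraps the RIMM.
        return die_yield ** chips * BOARD_YIELD * ASSEMBLY_YIELD

    for die_yield, chips in ((0.95, 8), (0.90, 8), (0.95, 16)):
        print(f"{die_yield:.0%} die yield, {chips} chips: "
              f"{rimm_yield(die_yield, chips):.0%} module yield")

    # 95% / 8 chips  -> ~60%
    # 90% / 8 chips  -> ~39%
    # 95% / 16 chips -> ~40% (0.95^16 alone is ~44%)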

Fourth: for embedded applications, high-width (x32) SDRAMs are used where performance is not critical, and DDR where it is (see most high-end 3D cards). Here the cost/benefit analysis is more of a wash and depends on which matters more.

Pete