Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: John Walliker who wrote (75142), 6/29/2001 6:15:12 AM
From: Bilow
 
Hi John Walliker; Re: "The benefits and penalties for the use of parity are no different with Rambus memory than any other."

Actually, it's a bit different: a stuck output ruins more than one bit of received data per 64/72-bit word, which makes parity correction "difficult". In other words, if the output pin is stuck, or if the mux tree inside the DRAM is stuck, then that DRAM generates 4 bad bits per 64-bit word (one on each of the transfers that make up the word). Since SEC-DED ECC corrects only one bad bit per word and detects two, and simple parity misses an even number of flipped bits, that kind of failure is not correctable, or even necessarily detectable.

Conventional DRAM has no problem with a stuck bit, since only one DRAM provides each bit of a 64/72-bit word; a single stuck output therefore corrupts at most one bit per word, which ordinary ECC can correct.
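
To make the arithmetic concrete, here's a rough Python sketch of the topology difference. The pin-to-bit mappings below are simplified assumptions of mine (a 16-bit RDRAM-style channel assembling a 64-bit word from 4 back-to-back transfers, versus a 64-bit-wide conventional bus where each pin always drives the same bit position), not an exact model of either interface.

# Toy model: how many bits of a 64-bit word does one stuck data pin corrupt?

def wide_bus_positions(pin):
    # Wide DIMM-style bus: data pin N drives bit position N of every word.
    return [pin]

def narrow_channel_positions(pin, channel_width=16, word_bits=64):
    # Narrow RDRAM-style channel: the word is built from
    # word_bits // channel_width transfers, so the same pin
    # supplies one bit position in each transfer.
    return [t * channel_width + pin for t in range(word_bits // channel_width)]

def bad_bits(word, positions, stuck_level=1):
    # A pin stuck at stuck_level only corrupts positions whose
    # correct value differs from that level.
    return sum(1 for p in positions if ((word >> p) & 1) != stuck_level)

word = 0x0123456789ABCDEF  # arbitrary data pattern

print("wide bus, pin 5 stuck at 1:     ", bad_bits(word, wide_bus_positions(5)))
print("narrow channel, pin 5 stuck at 1:", bad_bits(word, narrow_channel_positions(5)))

# The wide-bus case never corrupts more than 1 bit per word, which SEC-DED ECC
# corrects.  The narrow-channel case can corrupt up to 4 bits per word; SEC-DED
# only corrects 1 and detects 2, and parity misses an even number of flips, so
# the failure may be miscorrected or missed entirely.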

That's why Rambus had to invent the "chip kill" topology for their futile attempt to break into the server market.

-- Carl

P.S. By the way, do you have an answer for this: "Why didn't anyone besides Intel decide to use RDRAM? In other words, why didn't VIA, AMD, SiS, Serverworks, ALi, IBM, Nvidia, ATI, PMCS, Apple, etc., use RDRAM? Why did Intel decide to use DDR, and not stick with RDRAM only?"