Technology Stocks : AMD/INTC/RMBS et ALL


To: Dan3 who wrote (59) | 9/18/1999 12:20:00 PM
From: Charles R
 
Dan,

<I'm not sure that this is the case. Rambus adds a layer between the memory controller and the DRAM cells due to its need to read from 8 cells, then encode that data for serial transmission and send it across the memory bus. This is not the case for SDRAM in any of its flavors.

Always remember, Rambus is 128 bits wide on chip - the die bloat, power consumption, and latency problems (as well as the high bandwidth) are a result of the need to transmit from this architecture across a 16 bit bus.

Rambus memory is like storing your data on a network server instead of locally. There are performance penalties associated with remote storage. Even if the server has a much faster (and more expensive) disk subsystem than your local machine, programs that are disk i/o intensive are usually faster if done with local storage due to the overhead of transmitting across the network.>
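Just to put rough numbers on the serialization described in the quote, here is a back-of-envelope sketch. The 128-bit internal fetch width comes from the quote; the bus widths and transfer rates (800 MT/s for Direct RDRAM's 16-bit channel, 133 MT/s for PC133's 64-bit bus) are my assumptions for illustration, not measured figures:

```python
# Back-of-envelope sketch: bus transfers and wire time needed to move one
# 128-bit internal fetch across buses of different widths. Transfer rates
# below are illustrative assumptions, not benchmark data.

def transfers_needed(fetch_bits, bus_bits):
    """Number of bus transfers to move fetch_bits over a bus_bits-wide bus."""
    return -(-fetch_bits // bus_bits)  # ceiling division

FETCH_BITS = 128  # internal fetch width cited in the quoted post

# Direct RDRAM: 16-bit channel, assumed 800 MT/s (400 MHz double data rate)
rdram_transfers = transfers_needed(FETCH_BITS, 16)
# PC133 SDRAM: 64-bit bus, assumed 133 MT/s
sdram_transfers = transfers_needed(FETCH_BITS, 64)

rdram_ns = rdram_transfers / 800e6 * 1e9
sdram_ns = sdram_transfers / 133e6 * 1e9

print(f"RDRAM: {rdram_transfers} transfers, {rdram_ns:.1f} ns on the wire")
print(f"PC133: {sdram_transfers} transfers, {sdram_ns:.1f} ns on the wire")
```

Under those assumed rates, Rambus needs four times as many transfers, but its much faster clock makes the raw wire time comparable; the real latency question is how much the extra encode/serialize stage in front of the bus adds, which is exactly the breakdown being asked for below.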

I agree that there is some difference in latency between these technologies. My point is that the difference is not significant (second-order would be my phrase of choice here).

If that is not the case, I could sure use some education. Can you post some info, preferably something that breaks the latency of RDRAM/PC133/DDR chips down into its components, so we can better see how much is lost or gained by each mechanism Rambus or the industry has put in place to improve memory technology?

Chuck