Technology Stocks : RAMBUS (Nasdaq: RMBS) - THE EAGLE
RMBS 93.38 +2.2% Jan 9 9:30 AM EST

To: REH who wrote () 1/10/2000 10:51:00 PM
From: richard surckla  Read Replies (1) of 2039
 
Looking for comments from those with technical knowledge:

Found this on Yahoo:

Latency -vs- Bandwidth
by: s110572
1/10/00 9:20 pm
Msg: 35506 of 35513
DRAMs need to be refreshed to hold data. The faster the part, the more expensive it is - that has always been the case. 50 ns DRAMs cost far more than the more typical 70 ns (nanosecond) DRAMs.
If a CPU is clocking hundreds of millions of cycles per second, then, if you do the math, you see it is sitting there 'spinning its wheels' waiting on the DRAM. They have been getting around this by putting fast, and expensive, cache memories on the chip. Before that, faster-than-DRAM SRAM (static RAM, also expensive) served as the cache, but off chip. Also, logic built into the new CPUs uses branch prediction, pipelining, and other tricks to load what is anticipated to be needed next into these cache memories, in an attempt to utilize to the max these obscenely fast CPUs, which would otherwise be left waiting.
These methods help, but are less than perfect.
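The "do the math" step above can be sketched numerically. The clock speed and latency figures below are illustrative assumptions (a roughly 500 MHz CPU of that era, a typical 70 ns DRAM access), not measurements of any particular part:

```python
# Toy estimate of CPU cycles idled per uncached memory access.
# Both figures are assumptions for illustration, not datasheet values.
cpu_clock_hz = 500e6       # assume a ~500 MHz core clock
dram_latency_s = 70e-9     # assume a typical 70 ns DRAM access time
stall_cycles = cpu_clock_hz * dram_latency_s
print(f"~{stall_cycles:.0f} cycles spent waiting per access")
```

At those assumed figures, every access that misses the cache idles the CPU for roughly 35 cycles, which is why the caches and prediction tricks described above are worth their cost.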
Lower latency would be great - it would allow the CPU to fetch data quicker - not a bad thing.
Bandwidth refers to the amount of data that is transferred once you get over the latency hump, whether that hump is 20 ns, 30 ns or 100 ns. RMBS, generation 1, has a latency that is about average for mainstream DRAM. However, once the CPU 'opens the door' to the RDRAM bank, the data floods out compared to SDRAM's relative trickle.
So, instantaneous response time using RDRAM is little improved over SDRAM, or even DDR. But sustained, usable delivery of data goes up over eightfold.
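The latency-versus-bandwidth split can be put in a toy formula: total time = fixed latency + bytes / bandwidth. The numbers below are illustrative assumptions (equal latency for both parts, an 8x sustained-bandwidth gap echoing the claim above), not real part specs:

```python
# Toy transfer-time model: total = fixed latency + size / bandwidth.
# All figures are illustrative assumptions, not datasheet values.
def access_time_ns(nbytes, latency_ns, bw_bytes_per_ns):
    return latency_ns + nbytes / bw_bytes_per_ns

LAT = 50.0          # assume both parts pay the same ~50 ns latency
BW_SDRAM = 1.0      # hypothetical 1 byte/ns (~1 GB/s) sustained
BW_RDRAM = 8.0      # hypothetical 8x sustained bandwidth

for nbytes in (64, 1_000_000):   # one cache line vs a streaming chunk
    t_s = access_time_ns(nbytes, LAT, BW_SDRAM)
    t_r = access_time_ns(nbytes, LAT, BW_RDRAM)
    print(f"{nbytes:>9} bytes: SDRAM {t_s:,.0f} ns, RDRAM {t_r:,.0f} ns")
```

For a single 64-byte cache line the fixed latency term is a large share of the total, so the gap between the two narrows; for a 1 MB streaming read the transfer term dominates and the full 8x bandwidth gap shows through - which is the post's point about big, sustained reads.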
So, once you click the play button to see the newest Time Warner HDTV webcast of...say, 'The Matrix'...the SDRAM or DDR PC and the RDRAM PC will both hang for a fraction of a second while latency soaks up cycles. But once delivery of data to the CPU has begun, the RDRAM system just opens the floodgates and lets the CPU crunch its little heart out between refreshes, while the SDRAM/DDR system requires more tricks - cache stores, fetch predictions, branch predictions, pipelines, and other general kludges - to keep up.
RDRAM makes for much more efficient machine design and operation, with headroom to spare for even more data intensive
tasks.
So, for more money, you can design low-latency machines with SDRAM - but for the same money, you can get typical latency times and damn better bandwidth with RDRAM. The data of the future is rich media... BANDWIDTH intensive.