Technology Stocks : Rambus (RMBS) - Eagle or Penguin

To: Dave B who wrote (18953), 4/22/1999 2:14:00 PM
From: Tenchusatsu
 
To answer your questions one by one:

1) Yes, you are correct on every point. The core DRAM technology in RDRAM is similar to PC100 SDRAM; Rambus just adds extra logic to extract 16 bytes at a time, packetize the data, and decode incoming packets. And RDRAM isn't stuck at 1.6 GB/sec. I'm pretty sure future RDRAM generations will match the peak bandwidth of other memory technologies like DDR SDRAM, which has a peak (theoretical) bandwidth of 2.1 GB/sec.
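
If it helps, here's a quick back-of-the-envelope in Python showing where those peak numbers come from. I'm assuming an 800 MHz data rate on a 16-bit Rambus channel and a 133 MHz double-pumped 64-bit DDR bus; these are theoretical maximums only.

# Rough peak-bandwidth arithmetic (theoretical maximums).
rdram_data_rate_mhz = 800        # 400 MHz clock, data on both edges
rdram_bus_bytes = 2              # 16-bit Rambus channel
rdram_peak = rdram_data_rate_mhz * 1e6 * rdram_bus_bytes / 1e9
print(f"RDRAM peak:     {rdram_peak:.1f} GB/sec")    # ~1.6 GB/sec

ddr_data_rate_mhz = 266          # 133 MHz clock, data on both edges
ddr_bus_bytes = 8                # 64-bit DIMM interface
ddr_peak = ddr_data_rate_mhz * 1e6 * ddr_bus_bytes / 1e9
print(f"DDR SDRAM peak: {ddr_peak:.1f} GB/sec")      # ~2.1 GB/sec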

2) Actually, RDRAM does have one reflection point at the memory controller, but Rambus takes advantage of that reflection. The normal amplitude of an electrical wave on a Rambus transmission line is 0.8 volts. When the memory controller sends packets to an RDRAM device, the signals swing between 1.8 volts and 1.0 volts. But when an RDRAM device sends packets (like read data) back to the memory controller, it only swings the signal between 1.8 volts and 1.4 volts, which is half the normal amplitude. At the memory controller, the reflection causes that 0.4-volt wave to double to 0.8 volts right at the end-point, restoring the full swing. It's pretty hard to visualize without graphs, but I can't draw one on these message boards. :-(
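
Here's the doubling in numbers, if it helps. This is only a sketch; I'm treating the controller end of the channel as effectively unterminated, so the reflection coefficient comes out close to +1.

# Reflection at the controller end of the channel (idealized sketch).
# gamma = (Z_load - Z_line) / (Z_load + Z_line); a near-open end gives
# gamma ~ +1, so the reflected wave adds on top of the incident wave.
v_incident = 0.4      # half-amplitude swing driven by the RDRAM device (volts)
gamma = 1.0           # assumed near-open termination at the controller
v_at_controller = v_incident * (1 + gamma)
print(v_at_controller)   # 0.8 volts -- back to the full signal swing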

3) I don't know how SLDRAM works. This link has a little more detail on it: www4.tomshardware.com . However, I have not heard any news in the past six months regarding SLDRAM. Perhaps SLDRAM is to RDRAM what BEDO DRAM was to SDRAM. BEDO DRAM was regarded as similar to SDRAM, yet SDRAM won out in the marketplace.

4) Actually, the timing of a read is more like:


0-10 nsec: Row packet
20-30 nsec: Column packet
50-60 nsec: Read data


But usually on an Intel system where data is divided up into 32-byte cachelines, the timing of a read looks more like this:


0-10 nsec: Row packet
20-30 nsec: Column packet 1
30-40 nsec: Column packet 2
50-60 nsec: Read data 1
60-70 nsec: Read data 2


There's always 20 nsec between the end of a column packet and the beginning of its corresponding data packet. But you can pipeline requests, so while you're waiting for the first read data to come back, you can immediately send out the next column packet.
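
Here's a toy Python sketch of that pipelining, using the same 10-nsec packet slots and the fixed 20-nsec column-to-data gap from the tables above. The function name and layout are mine, just for illustration.

# Toy model of pipelined RDRAM reads (numbers from the timing above).
PACKET_NS = 10        # each row/column/data packet occupies a 10-nsec slot
COL_TO_DATA_NS = 20   # gap between the end of a column packet and its data

def read_timeline(row_start_ns, num_column_packets):
    """Return (column packet windows, data packet windows) for one read."""
    columns, data = [], []
    col_start = row_start_ns + 2 * PACKET_NS   # first column follows the row packet
    for i in range(num_column_packets):
        c0 = col_start + i * PACKET_NS         # column packets go out back to back
        columns.append((c0, c0 + PACKET_NS))
        d0 = c0 + PACKET_NS + COL_TO_DATA_NS   # data arrives 20 nsec after its column
        data.append((d0, d0 + PACKET_NS))
    return columns, data

cols, data = read_timeline(0, 2)
print(cols)   # [(20, 30), (30, 40)]
print(data)   # [(50, 60), (60, 70)]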

5) SDRAM is currently geared toward "bursting" 32 bytes of data, eight bytes at a time. That means for every 32 bytes, you'll need one row signal. I don't know if SDRAM is capable of bursting up to 256 bytes of data; I think you still need one row signal per burst. RDRAM has 64 "dualocts" (Rambus's term for a 16-byte packet) per row. The beauty of it is that if a row is open, you can burst data from that row as many times as you want without needing another time-consuming row packet. Only when you need to access a different row in the same bank do you have to "precharge" the bank and then send another row packet.
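
Here's a little sketch of that row/precharge bookkeeping for a single bank. The names are hypothetical; it just shows when a precharge and a new row packet become necessary.

# Toy open-row tracker for one RDRAM bank (illustration only).
DUALOCTS_PER_ROW = 64    # 64 x 16 bytes per row, per the figures above

class Bank:
    def __init__(self):
        self.open_row = None

    def read(self, row, dualoct):
        assert 0 <= dualoct < DUALOCTS_PER_ROW
        if self.open_row == row:
            return "column packet only"       # row already open: cheap access
        steps = []
        if self.open_row is not None:
            steps.append("precharge")         # close the old row first
        steps.append("row packet")            # open the new row
        steps.append("column packet")
        self.open_row = row
        return " + ".join(steps)

bank = Bank()
print(bank.read(5, 0))    # row packet + column packet
print(bank.read(5, 17))   # column packet only
print(bank.read(9, 3))    # precharge + row packet + column packet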

6) The advantage of having more banks is, like you said, that you're less likely to need a different row in a bank that's already open, so you precharge less often. The worst-case scenario is when all your accesses target a single bank: you have to keep precharging and sending row signals almost every time you access that bank, which increases overhead. Having more banks makes that worst case less likely, thereby reducing overhead. Also, Rambus mentioned that in SDRAM, if you go from one to two memory modules, you don't double the number of banks; rather, you double the size of each bank. With RDRAM, each memory module adds to the number of banks, which is even better. (Oh, by the way, every RDRAM module has 8 devices, and each device has 16 banks, so you have 128 banks per module. But to throw another monkey wrench into the design, a single device can only have 8 banks open at a time, so in the best case you can only have up to 64 banks open at once per RDRAM module. Confused?)
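
Spelling out the arithmetic in that parenthetical, using the figures above:

# Bank-count arithmetic for one RDRAM module, per the figures above.
devices_per_module = 8
banks_per_device = 16
open_limit_per_device = 8     # only 8 of the 16 banks can be open at once

total_banks = devices_per_module * banks_per_device
max_open_banks = devices_per_module * open_limit_per_device
print(total_banks)       # 128 banks per module
print(max_open_banks)    # 64 banks open at once, best case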

7) DDR SDRAM, I believe, uses the "master clock" for address and control signals. I don't know whether it uses the master clock for the double-pumped data as well, but that wouldn't make sense, because the data rate is too fast for a common-clock scheme. I think DDR SDRAM would have to use separate clock "strobes" sent along with the data, which would work similarly to RDRAM's clock. In short, I think DDR SDRAM needs two clocks: the "master clock" for address and control signals, and the set of "strobes" sent alongside the data, traveling in the same direction as the data.

8) You're correct. The RDRAM transmission lines swing between 1.8 and 1.0 volts.

9) You're correct again. If we wanted to, we could design a chipset where the RDRAM clock is totally disconnected from any other clock in the system, whether it's the processor bus clock, the AGP clock, or the PCI clock. It's just easier to make the RDRAM clock some integral multiple of the processor bus clock. For example, the Camino chipset will have a 133 MHz processor bus. Multiply that clock by three, and you'll have your 400 MHz RDRAM clock. (Then transfer on both edges of that 400 MHz RDRAM clock and you'll have your 800 MHz data rate.) Or, if we wanted to, we could also use some odd ratio of x:2, like 3:2, 5:2, 7:2, etc. You won't see ratios of x:3, x:4, or whatever, since that's a little too weird.
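
Here's the Camino example in numbers. The 3:1 ratio is the one described above; the x:2 ratios are only listed as alternatives.

# RDRAM clock derived from the processor bus clock (illustration).
fsb_mhz = 133.33                        # 133 MHz processor bus

rdram_clock_mhz = fsb_mhz * 3           # 3:1 ratio -> ~400 MHz RDRAM clock
data_rate_mhz = rdram_clock_mhz * 2     # transfer on both clock edges
print(round(rdram_clock_mhz))           # ~400 MHz clock
print(round(data_rate_mhz))             # ~800 MHz effective data rate

# Other workable x:2 ratios
for num, den in [(3, 2), (5, 2), (7, 2)]:
    print(f"{num}:{den} -> {round(fsb_mhz * num / den)} MHz RDRAM clock")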

Whew. Any other questions?

Tenchusatsu