Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Dan3 who wrote (47138)7/13/2000 12:48:50 PM
From: Tenchusatsu
 
Dan, <the delays inherent in the serialization process used by Rambus will make it less and less competitive as system speeds increase. Rambus chips have to read 8 cells for each data line, packetize the data, then send it. There is no way for them to compete with chips that just read the cells and send.>

There isn't much of a latency difference between sending all of the bits at once and packetizing the data. I'm sorry, but you're just repeating anti-Rambus FUD that has been exaggerated from day one. Packetizing the data adds far less latency than you imagine.
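To put rough numbers on it, here is a back-of-the-envelope sketch (mine, not from the original post) comparing the time to move one 16-byte chunk over a narrow, fast RDRAM channel versus a wide, slow SDRAM bus, using the nominal interface figures (PC800 RDRAM: 16-bit channel at 800 MT/s; PC100 SDRAM: 64-bit bus at 100 MT/s):

```python
# Serialization cost sketch: nominal interface widths and rates only;
# ignores protocol/turnaround overhead on both sides.

def transfer_ns(bytes_moved, width_bits, megatransfers):
    """Time in ns to clock `bytes_moved` across a channel that is
    `width_bits` wide running at `megatransfers` MT/s."""
    transfers = bytes_moved / (width_bits / 8)
    return transfers * 1000.0 / megatransfers

rdram_ns = transfer_ns(16, 16, 800)   # 8 transfers at 1.25 ns each
sdram_ns = transfer_ns(16, 64, 100)   # 2 transfers at 10 ns each

print(f"PC800 RDRAM: {rdram_ns:.1f} ns")  # 10.0 ns
print(f"PC100 SDRAM: {sdram_ns:.1f} ns")  # 20.0 ns
```

The narrow channel needs more transfers, but each transfer is so much faster that the raw serialization time actually comes out lower, which is why "packetizing kills latency" doesn't hold up arithmetically.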

By the way, if packetizing the data is such a latency hog, why is AMD going with a network of narrow P2P connections in their future LDT technology? You know that each narrow LDT port will require packetizing the data, right? But oh my God, packetizing data is a no-no because of the incredible latencies! What is AMD thinking?

<They can double the clock speed by serializing the data but the additional control and ground lines needed end up doubling the width of the traces on the board so there is no net gain in data rate per cm of board used by the traces>

What data do you have to back this up? Yes, RDRAM takes up slightly more width than a PC100 SDRAM channel because of the wider traces and the additional ground wires. (Thanks Carl.) But then again, PC100 SDRAM has been around for a long time, and it is relatively easy to implement compared to DDR. And DDR is the real competitor to RDRAM. Let's see just how much real estate DDR takes up on the motherboard. I'll bet it's almost twice as much as RDRAM.

<Dual DDR channels will fill the 256 bit cache line of coppermine in a two memory bus clocks, with the first 32 bytes ready in a half memory bus clock. ... Dual PC800 rambus takes a flat 40 clocks to read the cells and put the packet together then another 1.5 clocks to put the first 4 bytes into the cpu, with 10.5 more needed to fill the line for a total of 52 clocks to fill a cache line.>

Geez, where do you come up with this nonsense? Dan, why should it take PC800 RDRAM 40 clocks to read the cells? The core DRAM arrays are the same no matter whether the interface is PC100, DDR, or RDRAM. RDRAM, like DDR, reads an entire row from the DRAM array, then uses column addresses to pick out the data chunks it needs. It doesn't take 40 clocks for RDRAM, nor is the internal access magically instant for DDR.

Second, once the data is obtained from the memory channel, it still must travel over the front side bus. That's why your original argument of "256-bit wide Coppermine L2 bus" really makes no sense, because no matter whether it's DDR or RDRAM, data still has to funnel through the FSB before it gets to L2.

Third, where do you get the idea that it takes "1.5 clocks to put the first 4 bytes into the cpu, with 10.5 more needed to fill the line"? You're talking more nonsense here. Once again, data has to funnel through the FSB. The FSB is 64 bits wide, so eight bytes are transferred at a time, not four as you assume.
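The cache-line-fill arithmetic is simple enough to write down. A P6-class (Coppermine) cache line is 32 bytes and the FSB is 64 bits wide; the 133 MHz bus clock below is one common Coppermine configuration, used here just to illustrate:

```python
# Cache-line fill over the front-side bus: the line moves in 8-byte
# beats, not 4-byte ones, regardless of which DRAM feeds the chipset.
CACHE_LINE_BYTES = 32
FSB_WIDTH_BYTES = 64 // 8   # 8 bytes per transfer

beats = CACHE_LINE_BYTES // FSB_WIDTH_BYTES
print(beats)  # 4 bus transfers per cache line

# At a 133 MHz FSB, raw transfer time for the whole line:
fsb_mhz = 133
fill_ns = beats * 1000 / fsb_mhz
print(f"{fill_ns:.1f} ns")  # ~30.1 ns, identical for DDR or RDRAM sources
```

Whatever the memory technology behind the chipset, those four 8-byte beats on the FSB are the same, so "1.5 clocks for the first 4 bytes" doesn't describe any real transfer on this bus.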

In conclusion, Dan, I think you'd be better off arguing other points against RDRAM than its technical merits. (Or at least leave the technical arguments to Bilow Carl.) You simply do not know what the hell you are talking about. Period.

Tenchusatsu