Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Tenchusatsu who wrote (47181), 7/13/2000 8:15:08 PM
From: Dan3
 
Re: There isn't much of a difference in latency between sending all of the bits at once, and packetizing the data.

Well, we could trade "yes there is, no there isn't" for a while, but I'll explain it to you instead. I used 1 GHz as an example frequency because it makes it easy to exchange ns and CPU clocks - maybe I should have been more explicit about that. Take a look at a data sheet for Rambus micron.com , for example - notice that it comes in several latency grades even for a best-case chip in ready mode: 40, 45, and 50 ns. That's where the figure of 40 clocks comes from.

Now look at a data sheet for DDR micron.com - go to the bookmark for read latency. There is one 7.5 ns clock for row select, then (CAS 2 @ 133 MHz) 15 ns more, for a total of 22.5 ns - or 22.5 CPU clocks at 1 GHz - before data is available.
As you say, they all use the same basic cells, so if the delay isn't due to serializing, then what is causing it? Because the delay is there.
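The arithmetic in the two datasheet comparisons above can be sketched out directly. This is my own restatement of the numbers quoted in the post, assuming a 1 GHz CPU so that 1 ns equals exactly one CPU clock:

```python
# Latency comparison using the datasheet figures quoted above.
# Assumption: a 1 GHz CPU, so 1 ns equals exactly one CPU clock.

CPU_GHZ = 1.0

# DDR at 133 MHz: one 7.5 ns bus clock for row select, then CAS 2
# (two more 7.5 ns clocks) before data is available.
ddr_bus_clock_ns = 7.5
ddr_latency_ns = ddr_bus_clock_ns + 2 * ddr_bus_clock_ns   # 22.5 ns

# PC800 RDRAM: the best latency grade quoted above is 40 ns.
rdram_latency_ns = 40.0

print(f"DDR:   {ddr_latency_ns} ns = {ddr_latency_ns * CPU_GHZ} CPU clocks")
print(f"RDRAM: {rdram_latency_ns} ns = {rdram_latency_ns * CPU_GHZ} CPU clocks")
```

On these numbers the RDRAM part spends roughly 17-18 ns more per read than the DDR part, which is the gap the post attributes to serialization.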

Re: What data do you have to back this up? Yes, RDRAM takes up slightly more width than a PC100 SDRAM channel because of the wider traces and the additional ground wires. (Thanks Carl.) But then again, PC100 SDRAM has been around for a long time, and it is relatively easy to implement compared to DDR. And DDR is the real competitor to RDRAM. Let's see just how much real estate DDR takes up on the motherboard. I'll bet it's almost twice as much as RDRAM.

Well, you already said it. Carl's analysis is at Message 13853059 . DDR 266 runs at the same 133 MHz trace frequency as PC133 - not the 400 MHz of Rambus. There is no obvious reason why DDR should be more difficult to implement or require more board space than other 133 MHz traces. Your claim that it will require twice the space of other 133 MHz runs makes no sense - why should it?

Re: Geez, where do you come up with this nonsense? Dan, why should it take PC800 RDRAM 40 clocks to read the cells? It's all the same DRAM inside the memory cells, no matter whether it's PC100 or DDR or RDRAM. RDRAM, like DDR, reads an entire row in the DRAM array, then uses column addresses to pick and choose the data chunks. It doesn't take 40 clocks for RDRAM, nor is the internal access magically instant for DDR.

See above - I'm talking CPU clocks, and the 40 is taken straight from the Rambus specification. If you have a problem with those numbers, then take it up with Rambus!

Re: Second, once the data is obtained from the memory channel, it still must travel over the front side bus. That's why your original argument of "256-bit wide Coppermine L2 bus" really makes no sense, because no matter whether it's DDR or RDRAM, data still has to funnel through the FSB before it gets to L2.

Wide memory channels fill wide cache lines in fewer bus cycles.
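To put that one-liner in numbers - a hypothetical sketch with illustrative bus widths, not tied to any particular chipset - here is how the cycle count to move a 32-byte (256-bit) cache line shrinks as the path widens:

```python
# Bus cycles needed to move one 256-bit (32-byte) cache line
# across buses of various widths.  Widths here are illustrative.

line_bits = 256

for bus_bits in (64, 128, 256):
    cycles = -(-line_bits // bus_bits)   # ceiling division
    print(f"{bus_bits:3d}-bit bus: {cycles} cycle(s) per line")
```

A 64-bit FSB needs four transfers per line where a 256-bit path needs one, which is the point of the original "wide L2 bus" argument.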

Re: Third, where do you get the idea that it takes "1.5 clocks to put the first 4 bytes into the cpu, with 10.5 more needed to fill the line"? You're talking more nonsense here. Once again, data has to funnel through a FSB. The FSB is 64 bits wide, so eight bytes are transferred at a time, not four like you assume.

At 800 MHz, dual-channel Rambus moves 4 bytes in each memory bus cycle - at a 1 GHz CPU that's actually 1.25 clocks per cycle, but I've never read of better than half-cycle latching, so I called it 1.5. I suppose that some chipset/CPU combinations wait for an entire cache line to be filled before accepting data (and ending the stall that led to the request in the first place), but my understanding was that current Intel and AMD systems are brighter than that. Am I right or wrong?
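Here is how the "1.5 clocks for the first 4 bytes, 10.5 more for the rest" figures fall out. The 32-byte cache line and the 1 GHz CPU are assumptions carried over from the discussion, not datasheet facts:

```python
# Reproducing the "1.5 clocks for the first 4 bytes, 10.5 more to
# fill the line" figures.  Assumptions: 1 GHz CPU, 32-byte cache
# line, dual-channel PC800 RDRAM moving 4 bytes per bus cycle.

CACHE_LINE_BYTES = 32
BYTES_PER_BUS_CYCLE = 4

raw_bus_cycle_clocks = 1.0e9 / 800e6   # 1.25 CPU clocks per bus cycle
latched_clocks = 1.5                   # rounded up to half-cycle latching

transfers = CACHE_LINE_BYTES // BYTES_PER_BUS_CYCLE   # 8 transfers
first_chunk = latched_clocks                          # 1.5 clocks
rest_of_line = (transfers - 1) * latched_clocks       # 10.5 clocks

print(f"bus cycle = {raw_bus_cycle_clocks} CPU clocks, latched as {latched_clocks}")
print(f"first 4 bytes: {first_chunk} clocks; rest of line: {rest_of_line} clocks")
```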

Re: In conclusion, Dan, I think you'd be better off arguing other points against RDRAM than its technical merits. (Or at least leave the technical arguments to Bilow Carl.) You simply do not know what the hell you are talking about. Period.

Flattery will get you nowhere!

Dan

Re: By the way, if packetizing the data is such a latency hog, why is AMD going with a network of narrow P2P connections in their future LDT technology? You know that each narrow LDT port will require packetizing the data, right? But oh my God, packetizing data is a no-no because of the incredible latencies! What is AMD thinking?

And they use steel or plastic pipes to move drinking water from treatment plants into houses - what's that got to do with moving data between main memory and the CPU?



To: Tenchusatsu who wrote (47181), 7/15/2000 5:04:19 PM
From: Bilow
 
Hi Tenchusatsu; I see that Dan3 has already admirably replied to this post, but I thought I would add a few comments...

You wrote: "There isn't much of a difference in latency between sending all of the bits at once, and packetizing the data."

At 800 MHz, the difference between 64 bits at once and 4x 16 bits is three 800 MHz clocks, or 3.75 ns. I believe we can agree on these numbers; the question is whether 3.75 ns is much of a difference... With a processor running at 1.6 GHz, that is six clocks. How many instructions do the new machines execute per clock? It is obvious that 3.75 ns added to every memory read is a substantial latency difference, surely enough to be visible in modern computers.
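The serialization penalty can be checked directly. The 1.6 GHz processor is the example figure from the post:

```python
# Extra latency from sending 64 bits as four back-to-back 16-bit
# packets at 800 MHz, versus all 64 bits in a single transfer.

bus_hz = 800e6
packets = 4                          # 4 x 16 bits instead of 1 x 64 bits
extra_bus_clocks = packets - 1       # 3 additional 800 MHz clocks
ns_per_bus_clock = 1e9 / bus_hz      # 1.25 ns per bus clock
extra_ns = extra_bus_clocks * ns_per_bus_clock   # 3.75 ns

cpu_ghz = 1.6                        # example processor from the post
extra_cpu_clocks = extra_ns * cpu_ghz   # about 6 CPU clocks per read

print(f"{extra_bus_clocks} bus clocks = {extra_ns:.2f} ns "
      f"= {extra_cpu_clocks:.1f} CPU clocks at {cpu_ghz} GHz")
```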

Most important, the latency penalty, measured in missed instruction cycles, gets worse and worse as we go to higher-speed processors - exactly the territory that Rambus tells us will be the promised land of RDRAM performance heaven. (And by the way, I thought the promised land started at 500 MHz, not 1.6 GHz, or at least that is what Rambus was saying 5 years ago.)

Funny thing about those religious leaders, their predictions always seem to be postponing themselves out into the future.

-- Carl