Technology Stocks : Rambus (RMBS) - Eagle or Penguin

To: MileHigh who wrote (15391)2/10/1999 5:36:00 PM
From: Tony Viola  Read Replies (5) of 93625
 
Milehigh, >>> I have noticed though when reading posts from all over SI from
tech inclined people, that even these posters are all not sure of how RDRAM will
actually perform in PC's.<<<

*****Warning, this is technical.*****

Reading here and around the same places you do, it sounds like RDRAM's claim to fame is bandwidth, specifically the bandwidth/pin-count ratio. Latency is not its strength. Right off the top, that's exactly the balance you want in DRAM, which makes up the main memory in a computer. You want the best possible latency in your cache(s), which is the first place the CPU always looks for data that it needs. And the data is there, in the cache, most of the time, say 80% of the time. When there is a cache "miss" (the data the CPU needs is not in the cache), then it goes off to main memory, the DRAM (SDRAM or RDRAM, say).
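To make the hit/miss idea concrete, here's a toy sketch (mine, not anything from an Intel or Rambus paper). The cycle counts are made-up illustrative numbers, not real SDRAM or RDRAM figures:

```python
# Toy sketch of a cache lookup: a hit comes back fast, a miss pays the
# much larger DRAM latency and then fills the cache for next time.
CACHE_HIT_CYCLES = 2     # assumed L1 latency (illustrative)
DRAM_MISS_CYCLES = 60    # assumed main-memory latency (illustrative)

def access(cache, address, memory):
    """Return (data, cycles_spent) for one load."""
    if address in cache:                 # cache hit (~80% of the time)
        return cache[address], CACHE_HIT_CYCLES
    data = memory[address]               # cache miss: go to DRAM
    cache[address] = data                # fill the cache for next time
    return data, DRAM_MISS_CYCLES

memory = {addr: addr * 10 for addr in range(0, 4096, 4)}
cache = {}

_, first = access(cache, 1000, memory)   # first touch: miss, slow
_, second = access(cache, 1000, memory)  # second touch: hit, fast
print(first, second)                     # 60 2
```

Same address, wildly different cost depending on whether the cache already has it. That's the whole game.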

Now, when the CPU knows that the data was not in the cache(s), it knows it has to wait a while for that slow old DRAM main memory, so it does some other things while it's waiting. What does it do? Well, for example, microprocessors nowadays all have super whizbang features that came from mainframes, like out-of-order execution. What that means is that if the CPU sees a delay in the instruction it wants to execute, maybe because it's waiting for data from the dumb old main memory DRAM, it will switch to doing another instruction or two that are in its pipeline. Maybe that instruction or two need data that is right there in a register or in the cache. No delay, just do it. Then, when the main memory finally produces the data the CPU needs for that instruction that was put on hold, it goes and executes it. So, the latency "penalty" was not really that big of a deal, since the CPU was able to find some other things to do and very little time was wasted, if any. Now, for the bandwidth thing. How caches and main memories work.
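Here's some back-of-envelope arithmetic on the latency hiding part (invented cycle counts, just to show the effect):

```python
# Latency hiding via out-of-order execution, as rough arithmetic.
miss_latency = 60        # cycles the CPU would wait for DRAM (assumed)
independent_work = 50    # cycles of other instructions whose data is
                         # already in registers or cache (assumed)

# In-order CPU: stall for the whole miss, then do the other work.
in_order_total = miss_latency + independent_work

# Out-of-order CPU: run the independent work *during* the miss; the
# exposed stall is only whatever the independent work couldn't cover.
exposed_stall = max(0, miss_latency - independent_work)
out_of_order_total = independent_work + exposed_stall

print(in_order_total, out_of_order_total)   # 110 60
```

With enough independent work in the pipeline, most of the DRAM latency simply disappears behind useful execution, which is why the latency penalty isn't as scary as it sounds.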

When the CPU goes to the cache (L1 or L2) it gets the word or two it needs to execute the instruction at hand. That's enough, because caches are so fast that they can produce data as fast as the CPU needs it. Main memory, made of DRAM, is another story. It's much slower (latency). So, when the CPU has to go there, because it can't find the data it needs in the cache, it grabs a lot more data than it really needs at that instant while it's at it. It will take the data it needs right now, plus maybe the next 31 or 63 words higher up the memory address chain. The theory is that if the CPU needs data at, say, address 1000, it more than likely will need the data at addresses 1004, 1008, and up. That's because programs generally work on sequential addresses. So, the CPU gets way ahead by grabbing data that it more than likely will need anyway.
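To show what that looks like, here's a little sketch of a cache-line fill. The line size (32 four-byte words, 128 bytes) is an assumption for illustration, not a spec for any particular chip:

```python
# A miss at one address pulls in the whole aligned line it lives in,
# so the nearby sequential addresses become hits for free.
WORDS_PER_LINE = 32
WORD_BYTES = 4
LINE_BYTES = WORDS_PER_LINE * WORD_BYTES   # 128 bytes per line (assumed)

def line_addresses(address):
    """All word addresses in the aligned line containing `address`."""
    base = (address // LINE_BYTES) * LINE_BYTES
    return [base + i * WORD_BYTES for i in range(WORDS_PER_LINE)]

line = line_addresses(1000)
print(line[:4])                      # [896, 900, 904, 908]
print(1004 in line, 1008 in line)    # True True -- came along for free
```

So one slow trip to DRAM prepays for the next bunch of sequential accesses.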

So, it took the CPU a relatively long period of time to get to the first data it needed, but it "burst" in (to the cache) a bunch more data that it would have needed to get anyway. That burst speed is the bandwidth. Now, obviously Intel and Rambus think that it is key to get that multiple word transfer (and subsequent ones) over with ASAP, and get all that data into the caches and available to the CPU RIGHT NOW. With 500 MHz CPUs here (PIII) and 600, 700 and 800's on the horizon when Intel gets 0.18 micron online in June, it's easy to see why.
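One more bit of made-up arithmetic to show why the burst rate matters once the fetch starts. The model is just "fill time = initial latency + words divided by transfer rate"; none of these numbers are actual SDRAM or RDRAM specs:

```python
# Rough model of a cache-line fill: the latency is paid once, then the
# burst bandwidth determines how fast the rest of the line streams in.
def line_fill_ns(latency_ns, words, words_per_ns):
    """Total time to deliver a full line, in nanoseconds (toy model)."""
    return latency_ns + words / words_per_ns

words = 32
slow_burst = line_fill_ns(latency_ns=60, words=words, words_per_ns=0.1)
fast_burst = line_fill_ns(latency_ns=60, words=words, words_per_ns=0.8)

print(round(slow_burst), round(fast_burst))   # 380 100
```

Same latency, same line, but the high-bandwidth burst finishes the fill almost four times sooner, so the cache (and the CPU) gets the rest of the data RIGHT NOW instead of dribbled in.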

So that's the deal as I see it. Obviously, a technical paper from an Intel or Rambus designer would be far more technical, but I don't think this is too far off.

As far as this: >>>that even these posters are all not sure of how RDRAM will
actually perform in PC's.<<<

You're probably reading the AMD thread. There are some people there who would like nothing better than to have Intel crash and burn. They wouldn't mind seeing Intel's alliance partners, like Rambus, crash and burn too (although AMD has said something about using Rambus DRAM).

Telecommuting today, got wordy!

Regards,

Tony