Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: MileHigh who wrote (15391), 2/10/1999 4:18:00 PM
From: REH
 
Rambus will exhibit at the Eighth Annual WinHEC, April 7 - 9 in Los Angeles

reh



To: MileHigh who wrote (15391), 2/10/1999 5:36:00 PM
From: Tony Viola
 
Milehigh, >>> I have noticed though when reading posts from all over SI from
tech inclined people, that even these posters are all not sure of how RDRAM will
actually perform in PC's.<<<

*****Warning, this is technical.*****

Reading here and around the same places you do, it sounds like RDRAM's claim to fame is bandwidth, specifically the bandwidth/pincount ratio. Latency is not its strength. Right off the top, that's the balance of strengths you want in DRAM, which makes up the main memory in a computer. You want the best possible latency in your cache(s), which is the first place the CPU looks for data it needs. And the data is there, in the cache, most of the time, say 80% of it. When there is a cache "miss" (the data the CPU needs is not in the cache), then it goes off to the main memory, or DRAM (SDRAM or RDRAM, say).
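To put rough numbers on that hit/miss tradeoff, here's a back-of-envelope sketch in Python. The 2 ns, 80%, and 60 ns figures are made up for illustration, not measurements of any real part:

```python
# Back-of-envelope average memory access time (AMAT).
# All numbers below are illustrative, not real datasheet figures.
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average time per memory access: cache hit time plus the
    fraction of accesses that miss times the DRAM penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Say the cache answers in 2 ns, 80% of accesses hit, and a DRAM
# access costs another 60 ns on a miss:
print(amat(2, 0.20, 60))  # 2 + 0.2*60 = 14.0 ns on average
```

Notice how the 20% of misses dominate the average; that's why what DRAM does on a miss matters so much.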

Now, when the CPU finds that the data was not in the cache(s), it knows it has to wait a while for that slow old DRAM main memory, so it does some other things while it's waiting. What does it do? Well, microprocessors nowadays all have super whizbang features that came from mainframes, like out-of-order execution. What that means is that if the CPU sees a delay in the instruction it wants to execute, maybe because it's waiting for data from the dumb old main memory DRAM, it will switch to another instruction or two that are in its pipeline. Maybe that instruction or two need data that is right there in a register or in the cache. No delay, just do it. Then, when the main memory finally produces the data the CPU needs for the instruction that was put on hold, it goes and executes it. So the latency "penalty" was not really that big of a deal, since the CPU was able to find some other things to do and very little time was wasted, if any. Now, for the bandwidth thing: how caches and main memories work.
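A toy way to see why that latency hiding works (numbers invented for illustration, not from any real design):

```python
# Toy model of latency hiding: while a cache miss is outstanding,
# an out-of-order CPU keeps executing independent instructions, so
# the *visible* stall is only the part the CPU couldn't cover.
def visible_stall(miss_penalty_ns, independent_work_ns):
    """Stall time the program actually sees, given how much
    independent work the CPU found to do during the miss."""
    return max(0.0, miss_penalty_ns - independent_work_ns)

print(visible_stall(60, 45))  # CPU covers 45 ns of a 60 ns miss -> 15.0
print(visible_stall(60, 80))  # plenty of independent work -> 0.0, fully hidden
```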

When the CPU goes to the cache (L1 or L2) it gets the word or two it needs to execute the instruction at hand. That's enough, because caches are so fast that they can produce data as fast as the CPU needs it. Main memory, made of DRAM, is another story. It's much slower (latency). So, when the CPU has to go there, because it can't find the data it needs in the cache, while it's at it, it grabs a lot more data than it really needs at that instant. It will take the data it needs right now, plus maybe the next 31 or 63 words higher up the memory address chain. The theory is that if the CPU needs data at, say, address 1000, it more than likely will need the data at addresses 1004, 1008, and up. That's because programs generally work on sequential addresses. So the CPU gets way ahead by grabbing data that it more than likely will need anyway.
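Here's a little sketch of which addresses come along for the ride on a miss, assuming a hypothetical 32-byte cache line and 4-byte words:

```python
# On a miss the CPU fetches the whole aligned cache line, not just
# the word it asked for. Line size and word size here are assumed
# (32-byte line, 4-byte words) purely for illustration.
def line_addresses(addr, line_bytes=32, word_bytes=4):
    """Return the addresses of every word in the cache line
    containing addr."""
    base = addr - (addr % line_bytes)  # align down to the line start
    return [base + i for i in range(0, line_bytes, word_bytes)]

# A miss at address 1000 drags in the neighboring words too:
print(line_addresses(1000))
# [992, 996, 1000, 1004, 1008, 1012, 1016, 1020]
```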

So, it took the CPU a relatively long period of time to get to the first data it needed, but it "burst" in (to the cache) a bunch more data that it would have needed to get anyway. That burst speed is the bandwidth. Now, obviously Intel and Rambus think that it is key to get that multiple word transfer (and subsequent ones) over with ASAP, and get all that data into the caches and available to the CPU RIGHT NOW. With 500 MHz CPUs here (PIII) and 600, 700 and 800's on the horizon when Intel gets 0.18 micron online in June, it's easy to see why.
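A back-of-envelope for why the burst speed matters once the line fetch starts. The 60 ns first-word latency is invented for illustration; 1.6 GB/s is roughly the headline figure quoted for a single Rambus channel:

```python
# Time to fill a cache line = initial latency to the first word,
# plus the time to burst the rest of the line across the bus.
# Numbers are illustrative assumptions, not measured figures.
def line_fill_ns(latency_ns, line_bytes, bandwidth_bytes_per_ns):
    """Total time to deliver one cache line into the cache."""
    return latency_ns + line_bytes / bandwidth_bytes_per_ns

# 1.6 GB/s is 1.6 bytes/ns; filling a 32-byte line:
print(line_fill_ns(60, 32, 1.6))  # 60 ns latency + 20 ns burst = 80.0 ns
```

Double the burst bandwidth and only the 20 ns part shrinks, which is why the latency side of the argument never goes away.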

So that's the deal as I see it. Obviously, a technical paper from an Intel or Rambus designer would be far more technical, but I don't think this is too far off.

As far as this: >>>that even these posters are all not sure of how RDRAM will
actually perform in PC's.<<<

You're probably reading the AMD thread. There are some people there who would like nothing better than to see Intel crash and burn, and they wouldn't mind seeing Intel's alliance partners, like Rambus, crash and burn too (although AMD has said something about using Rambus DRAM itself).

Telecommuting today, got wordy!

Regards,

Tony



To: MileHigh who wrote (15391), 2/10/1999 5:39:00 PM
From: Shibumi
 
Please believe that everything I'm about to say I'm saying respectfully. Regarding your posting

>>I am not a techie. I have noticed though when reading posts from all over SI from tech inclined people, that even these posters are all not sure of how RDRAM will actually perform in PC's. My point is that, they all argue about RDRAM -vs- SDRAM as it relates to latency, pin count, cost per pin, etc...and not many of them can say definitively how RDRAM will perform in an actual PC.

We will have to wait and see actual performance tests to find out "what is real and what is Memorex". <<

I have to shake my head at this. Every day on this thread I read posts that claim to know where a stock's price is going in the next hour, day, week, month, year, or decade. People use various components of TA, and for that matter, as far as I can tell, they cut open a chicken and check out its internal organs so that they can tell the future.

Look...I guess I am a techie, because I do understand how microprocessors and DRAM and systems and software work. And here's what I do know. Even though understanding performance is very difficult because it depends on microprocessor speed, cache size, cache line size, DRAM bandwidth and latency, application footprints, and the like -- on average, RDRAM will speed up the computer. Each version of Rambus product, for the next 7-10 years, will do so.

Could I be wrong? Obviously, I could. But I've got to tell you, it's a lot easier to make this judgement than to do what many people claim to do every day -- predict short-term stock movement. And in fact, it's a lot easier to make this judgement than to even predict intermediate- and long-term stock movement.

This is what underlies my "bet" on Rambus -- a bet I'm increasing week by week on its weakness. I can't predict how most Rambus buyers/sellers are going to feel when they wake up tomorrow morning, but I can predict with reasonable confidence the technology and the competitive situation.

Again, I mean all of this respectfully -- I just wanted to get this off my chest. Sorry about the waste of bandwidth. :)