Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Scumbria who wrote (40486), 4/20/2000 12:04:00 PM
From: Dave B
 
Scumbria,

It would be a good use of the thread's energy to investigate why the 840 latency is so much better than the 820 latency.

The answer may hold the key to Rambus' future.


Good point -- I'd sure be interested in feedback from anyone who knows.

More importantly, I hope Intel has determined the reason and, if possible, incorporates the improvement into the 820's replacement (I'm blanking on the name -- Tehama?).

Dave



To: Scumbria who wrote (40486), 4/20/2000 12:49:00 PM
From: jim kelley
 
Hmmmm...

If I could get a copy of that nuclear program, I could run it on my 840 PC800 workstation and compare the results with the 820.



To: Scumbria who wrote (40486), 4/20/2000 1:46:00 PM
From: Ali Chen
 
<why the 840 latency is so much better than the 820 latency.>
Here is my speculation on this.

First, it is not clear what their program measures. As we know, it is not easy to separate the effects of latency and bandwidth at the software level.
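
For illustration only, here is a minimal sketch (not the program under discussion, and the sizes are just assumptions) of the usual way to isolate latency in software: a pointer-chasing loop, where every load depends on the one before it, so bandwidth and prefetching contribute little to the measured time.

/* Pointer-chasing latency sketch. Assumed sizes; not the benchmark
   discussed above. Compile with optimization, e.g. gcc -O2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NODES (4 * 1024 * 1024)   /* pointer array much larger than any cache */
#define HOPS  (16 * 1024 * 1024)  /* dependent loads to time */

int main(void)
{
    void **ring = malloc(NODES * sizeof *ring);
    size_t *order = malloc(NODES * sizeof *order);
    if (!ring || !order) return 1;

    /* Random permutation so the chain defeats hardware prefetching. */
    for (size_t i = 0; i < NODES; i++) order[i] = i;
    srand(1);
    for (size_t i = NODES - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < NODES; i++)
        ring[order[i]] = &ring[order[(i + 1) % NODES]];

    /* Chase the chain: exactly one dependent load outstanding at a time. */
    void **p = &ring[order[0]];
    clock_t t0 = clock();
    for (long i = 0; i < HOPS; i++)
        p = (void **)*p;
    clock_t t1 = clock();

    double ns = (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / HOPS;
    printf("~%.1f ns per dependent load (p=%p)\n", ns, (void *)p);
    return 0;
}

A bandwidth test, by contrast, streams through independent addresses, so the two numbers answer different questions even though both get called "memory speed".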

Second, a single RDRAM channel is 2 bytes wide, so it takes 16 data transfers to fetch a cacheline. That requires two Rambus read commands, tightly packed, to achieve one back-to-back cacheline transfer. It could be that this back-to-back sequence is frequently interrupted by other events and rarely happens in practice. On the two-channel 840, a single COL packet reads the whole cacheline, which is never interrupted, so the effective latency is better.
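
As a back-of-the-envelope check of that arithmetic (the 32-byte cacheline and 8 transfers per COL read are my assumptions, not figures from the chipset documentation):

/* Transfers and read commands per cacheline, single vs. dual channel.
   All three constants are assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    const int cacheline_bytes   = 32;  /* assumed P6-era line size */
    const int bytes_per_xfer    = 2;   /* 16-bit RDRAM channel */
    const int xfers_per_command = 8;   /* assumed burst per COL packet */

    for (int channels = 1; channels <= 2; channels++) {
        int xfers    = cacheline_bytes / (bytes_per_xfer * channels);
        int commands = xfers / xfers_per_command;
        printf("%d channel(s): %2d transfers, %d read command(s) per line\n",
               channels, xfers, commands);
    }
    return 0;
}

That prints 16 transfers / 2 commands for a single channel and 8 transfers / 1 command for the two-channel 840, which is the difference described above.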
Just a theory, could be wrong of course.

- Ali