Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Bilow who wrote (40460), 4/20/2000 6:53:00 AM
From: Bilow
 
Hi all; an addendum to the note on latency in i840 systems. A relevant link is developer.intel.com, pages 123 and 127.

Cache lines are only 32 bytes, but AGP PIPE# and SBA accesses can apparently go up to 256 bytes. My guess is that Intel's bandwidth-versus-latency charts assume a lot of AGP activity, activity that increases the size of the typical transaction. Of course this is not my specialty; perhaps one of the Intel chipset experts will comment.
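
To put rough numbers on it, here is a back-of-the-envelope sketch in Python; the channel bandwidth figure is purely an assumption for illustration, not taken from the i840 documents. The point is just how much longer a 32-byte cache-line fill waits if it happens to queue behind a 256-byte AGP burst:

# Illustrative numbers only, not from the i840 datasheet: how much longer
# a CPU cache-line fill sits in the queue when it arrives behind a large
# AGP burst instead of behind another 32-byte transaction.

BUS_BYTES_PER_NS = 1.6   # assumed sustained bandwidth of the memory channel (GB/s)

def occupancy_ns(transfer_bytes):
    """Time the channel is busy moving one transaction's data."""
    return transfer_bytes / BUS_BYTES_PER_NS

cache_line = occupancy_ns(32)    # typical CPU read
agp_burst  = occupancy_ns(256)   # large PIPE#/SBA access

print(f"32-byte line occupies the bus ~{cache_line:.0f} ns")
print(f"256-byte AGP burst occupies it ~{agp_burst:.0f} ns")
print(f"extra queuing delay behind the burst: ~{agp_burst - cache_line:.0f} ns")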

-- Carl

P.S. I should note that the "perfectly efficient" comments in the previous note require that the read and write accesses go to different banks, or to the same row if they hit the same bank. By perfectly efficient, I mean that there are no lost cycles due to the bus being turned around, or due to a collision on the control bus. By the way, VC SDRAM (Virtual Channel memory, if I have the name right; it's getting late) takes a stab at easing this caveat by keeping multiple rows buffered per bank.
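
For what it's worth, here is a minimal sketch of how that read/write transition probability feeds into effective bus efficiency. Every number in it (cycle counts, read fraction) is an assumption chosen for illustration, not a measurement of any real chipset:

# Toy model of the "perfectly efficient" caveat: each time the bus changes
# direction (read after write, or write after read) a few cycles are lost
# to turnaround.  Cycle counts and read/write mix are assumptions.

import random

ACCESS_CYCLES     = 4    # assumed data-transfer cycles per access
TURNAROUND_CYCLES = 2    # assumed dead cycles when the bus changes direction
P_READ            = 0.6  # assumed fraction of accesses that are reads

def simulated_efficiency(n=100000):
    busy = lost = 0
    last_was_read = True
    for _ in range(n):
        is_read = random.random() < P_READ
        if is_read != last_was_read:
            lost += TURNAROUND_CYCLES   # pay the turnaround penalty
        busy += ACCESS_CYCLES
        last_was_read = is_read
    return busy / (busy + lost)

print(f"bus efficiency with turnaround losses: {simulated_efficiency():.1%}")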



To: Bilow who wrote (40460), 4/20/2000 9:44:00 AM
From: jim kelley
 
Re: "RDRAM does provide a better solution for these problems"

Thank you Carl. Now we have you on record as supporting RDRAM except for its price.

The price is dropping now as the supply increases.

:)



To: Bilow who wrote (40460), 4/20/2000 11:09:00 AM
From: Ali Chen
 
Bilow, <one has to know the probability of a read access being followed by a write access and vice-versa, as opposed to a write being followed by a write or a read by a read>
and your conclusion:
<RDRAM does provide a better solution for these problems>

I feel that you might be wrong. There is no need for such heavy theoretical backup. :)

Let's keep it simple: the probability depends on the workload; the workload depends on the Windows libraries and Microsoft VC++; and those two are a given reality. Now wrap this reality into a benchmark, and you will see (Tomshardware.com) that at equal FSB speed (as jim noted, the FSB is part of latency) the results do not support your conclusion about RDRAM superiority. That's reality, with no need for probability.

There might be some gain in a heavy multistream setup, but that might be due to temporary bandwidth headroom. I have serious doubts that the Rambus bus can ever be utilized to the extent its creators were dreaming of, simply due to the nature of cache coherency maintenance.

BTW, those latency charts are good marketing BS. There is no way to measure the actual parameter with a benchmark on a highly pipelined processor with dozens of instructions in flight. It can be done only with a good model of everything: CPU, caches, FSB, bridge, and memory. I am not sure Intel has all this stuff together. And again: which workload traces do you use?
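
For what it's worth, the usual attempt is a dependent-load ("pointer chasing") loop, where each load's address comes from the previous load so the out-of-order core cannot overlap them; whether that really isolates the actual parameter for a realistic workload is exactly the open question. A sketch in Python, purely to show the structure (a real microbenchmark would be a tight C or assembly loop, since interpreter overhead swamps the memory latency):

# Build one big random cycle so every access depends on the previous one
# and the prefetcher gets no useful pattern, then walk it.

import random
import time

N = 1 << 20                      # number of slots to chase through

order = list(range(N))
random.shuffle(order)
chain = [0] * N
for a, b in zip(order, order[1:] + order[:1]):
    chain[a] = b                 # chain[a] points to the next slot in the cycle

start = time.perf_counter()
idx = 0
for _ in range(N):
    idx = chain[idx]             # serialized, dependent "loads"
elapsed = time.perf_counter() - start

print(f"{elapsed / N * 1e9:.1f} ns per dependent access (interpreter-dominated)")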

<It is all kind of complicated.>
Yes it is, you are damn right here.