Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: pgerassi who wrote (75732), 3/27/2002 1:22:19 PM
From: Ali Chen
 
Dear Pete,

You wrote: "So the ratio is not 1/3 but more like 50 cycles
for CPU per cache line fill, 112 cycles for latency and 64
cycles for bandwidth. Bandwidth only is 64/226 or 28%.
Latency is 112/226 or about 50%."
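For reference, the per-transaction arithmetic in the quote works out as
follows (a minimal sketch using only the 50/112/64 cycle figures quoted
above):

```python
# Per-cache-line-fill cycle budget from the quoted post:
cpu_cycles = 50        # CPU work per cache line fill
latency_cycles = 112   # memory latency
bandwidth_cycles = 64  # data transfer (bandwidth-bound portion)

total = cpu_cycles + latency_cycles + bandwidth_cycles  # 226 cycles

print(f"bandwidth share: {bandwidth_cycles / total:.0%}")  # -> 28%
print(f"latency share:   {latency_cycles / total:.0%}")    # -> 50%
```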

Your calculations have very little to do with reality.
The FSB/DRAM may have some requests pipelined, some
requests missing the DRAM page, and some periods of
inactivity with no requests from the CPU at all. Therefore
the overall time budget varies wildly from application to
application, and with the core-to-bus clock ratio. It
cannot be calculated solely on a clock-per-clock basis
for an individual transaction.
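To illustrate the clock-ratio point, here is a toy model (the bus-clock
figures are assumptions chosen for illustration, not measurements): if
latency and transfer time are fixed in FSB clocks while the CPU work is
fixed in core cycles, the percentage split shifts as the core-to-bus
clock ratio grows.

```python
# Toy model -- illustrative assumptions, not measured values.
# Latency and transfer are fixed in FSB clocks; CPU work is
# fixed in core cycles, so the split varies with clock ratio.
bus_latency_clocks = 14   # assumed FSB clocks of DRAM latency
bus_transfer_clocks = 8   # assumed FSB clocks per cache line
cpu_core_cycles = 50      # assumed core cycles of CPU work

for ratio in (8, 16, 24):  # core-to-bus clock ratios
    latency = bus_latency_clocks * ratio
    transfer = bus_transfer_clocks * ratio
    total = cpu_core_cycles + latency + transfer
    print(f"ratio {ratio:2d}: CPU {cpu_core_cycles/total:.0%}, "
          f"latency {latency/total:.0%}, "
          f"bandwidth {transfer/total:.0%}")
```

At ratio 8 this reproduces the 112/64/226 figures quoted above; at
higher ratios the CPU share shrinks and the memory shares grow, which is
why a single per-transaction budget cannot stand in for all CPU speeds.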

I believe my 33/33/33% example will be typical of many
modern applications, for CPUs in the 2000MHz range. It is
based on the analysis technique we discussed two weeks
ago, but did not agree upon:
siliconinvestor.com

-Ali