Politics : Formerly About Advanced Micro Devices


To: Dan3 who wrote (120721)7/23/2000 2:42:59 PM
From: Scumbria
 
Dan,

Which would imply that latency (the Achilles heel of Rambus) is more important than throughput (Rambus's strength) when more than one program is running.

The key factor in DRAM performance for a CPU is always latency. This is because CPUs move data in very small chunks.
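To see why small chunks make latency dominate, here's a back-of-envelope sketch (the 64-byte line size and 100 ns latency are illustrative assumptions, not figures from the post): if each miss fetches one cache line and must wait out the full access latency, the effective throughput is capped well below any peak bandwidth number.

```python
# Back-of-envelope: for small, dependent reads, latency caps throughput.
# LINE_BYTES and LATENCY_NS are assumed example values.
LINE_BYTES = 64        # one cache line fetched per access
LATENCY_NS = 100.0     # assumed end-to-end DRAM access latency

# Effective bandwidth if every access waits the full latency:
effective_bw = LINE_BYTES / (LATENCY_NS * 1e-9)  # bytes/sec
print(f"{effective_bw / 1e6:.0f} MB/s")
```

That works out to roughly 640 MB/s no matter how wide or fast the bus is, which is why shaving latency helps a CPU far more than adding raw bandwidth.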

DRAM bandwidth becomes an issue if the CPU is accessing DRAM frequently enough to saturate the bus. If that happens, the average latency increases rather rapidly, thereby affecting performance.
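The "latency increases rather rapidly" behavior can be sketched with a toy single-server queueing model (a simplification I'm supplying, not something from the post; the 100 ns idle latency is assumed): average latency scales like 1/(1 - utilization), so it explodes near saturation.

```python
# Toy M/M/1-style queueing model: average latency blows up as the
# memory bus approaches saturation. IDLE_NS is an assumed value.
IDLE_NS = 100.0  # latency with an empty bus

def avg_latency_ns(utilization):
    # utilization in [0, 1); queueing delay grows as 1/(1 - u)
    return IDLE_NS / (1.0 - utilization)

for u in (0.1, 0.5, 0.9, 0.99):
    print(f"util {u:4.2f}: {avg_latency_ns(u):8.1f} ns")
```

At 50% utilization latency merely doubles; at 99% it is a hundred times the idle figure, which is the "rapid increase" in question.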

Current CPUs do not saturate the bus very often, so high-bandwidth DRAM is rather useless. As CPU speeds increase, high-bandwidth DRAM may or may not become more important. The reason it may not is that cache sizes are increasing even faster than MHz, and caches reduce DRAM utilization.
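How caches keep the bus unsaturated can be shown with simple arithmetic (the request rate, line size, and miss rates below are my own illustrative assumptions): DRAM traffic scales with the miss rate, so a bigger cache directly cuts bus utilization.

```python
# Sketch: DRAM traffic scales with the cache miss rate.
# REQS_PER_SEC and LINE_BYTES are assumed example values.
REQS_PER_SEC = 1e9      # memory ops issued by the CPU per second
LINE_BYTES = 64         # bytes fetched from DRAM per miss

def dram_traffic(miss_rate):
    # Only misses reach DRAM; hits are served by the cache.
    return REQS_PER_SEC * miss_rate * LINE_BYTES  # bytes/sec

for mr in (0.10, 0.05, 0.01):
    print(f"miss rate {mr:4.2f}: {dram_traffic(mr) / 1e9:.2f} GB/s")
```

Halving the miss rate halves the bandwidth demand, so growing caches can offset growing clock speeds and keep high-bandwidth DRAM unnecessary.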

On the other hand, high-bandwidth memory is very useful for graphics and network applications. This is because they are strictly bandwidth-limited, and latency is not an issue.

Scumbria