Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Dave B who wrote (50944), 8/23/2000 4:48:14 PM
From: Scumbria
 
Dave,

The hit/miss rates in the Toshiba document are absurd. The author of this section was either severely misinformed or deliberately spreading misinformation:

The number of internal banks a DRAM has is perhaps the biggest factor in determining actual system latency. This is because of the fact that a DRAM can access data much faster if it is located in a bank that has been activated. By activated, we really mean that the data is located in a bank that has been pre-charged. The pre-charged bank can either be the same page (row) that is currently being accessed, or it can be in a bank that is not currently being accessed. If the data is located in a pre-charged bank, we often call this a page hit, meaning the data can be accessed very quickly without a delay penalty of having to close the current page and pre-charge another bank. On the other hand, if the data is in a bank that has not been pre-charged, or in a different row within the bank currently being accessed, a page miss occurs and performance is degraded due to the additional latency of having to pre-charge a bank. The memory controller designer can minimize latency by keeping all unused banks pre-charged. Therefore, more internal DRAM banks increases the probability that the next data accessed will be to an active bank and minimizes latency.

A page hit is most certainly not an access to a precharged bank. Rather, it is an access to an activated bank. The author appears to intentionally blur the distinction. Any memory controller that precharged "unused" banks would be a performance disaster. The fundamental premise of the paper is fatally flawed: memory controllers keep banks open to allow seamless transfer of data between accesses to the same row. The cost of doing so is that a page miss becomes more expensive, since the open row must first be precharged before the new one can be activated.
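The point can be made with a toy latency model. This is just a sketch with made-up cycle counts (not figures from the Toshiba paper): an open-page controller pays only the column-access time on a hit, while a controller that precharges every bank after each access pays activate-plus-column on every single access.

```python
# Illustrative DRAM timings in cycles -- hypothetical values, chosen only
# to show the relative cost of each case, not taken from any datasheet.
T_CAS = 3   # column access to an already-open row (page hit)
T_RCD = 3   # activate (open) a row in a precharged bank
T_RP  = 3   # precharge (close) the currently open row

def open_page_latency(accesses, banks=4):
    """Open-page policy: keep rows open; pay precharge+activate only on a miss."""
    open_row = {b: None for b in range(banks)}
    total = 0
    for bank, row in accesses:
        if open_row[bank] == row:        # page hit: row already activated
            total += T_CAS
        elif open_row[bank] is None:     # bank precharged: activate, then read
            total += T_RCD + T_CAS
        else:                            # page miss: precharge, activate, read
            total += T_RP + T_RCD + T_CAS
        open_row[bank] = row
    return total

def precharge_all_latency(accesses):
    """Precharge-after-every-access policy: each access pays activate + read."""
    return len(accesses) * (T_RCD + T_CAS)

# An access stream with typical spatial locality: runs of hits to one row.
stream = [(0, 10)] * 8 + [(0, 20)] + [(1, 5)] * 4
print(open_page_latency(stream))       # -> 51
print(precharge_all_latency(stream))   # -> 78
```

With any realistic locality in the access stream, the open-page policy wins, which is exactly why real controllers keep banks open rather than precharging "unused" banks as the paper suggests.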

The memory controller designer can minimize latency by keeping all unused banks pre-charged.

The memory controller designer can be looking for a new job if he implemented something so stupid.

Scumbria