Politics : Formerly About Advanced Micro Devices

To: Ali Chen who wrote (109614), 5/8/2000 1:45:00 AM
From: pgerassi
Dear Ali:

My PC133 memory has an access time of 4.6 ns. Higher clock rates reduce the time wasted waiting for the next clock edge, so latency is reduced as well. Also, a cache writeback is sometimes in progress when a cache miss occurs; the write takes time to complete, so the sooner the write finishes, the sooner the read can proceed and the lower the read latency. The ideal would be a memory system that signals the CPU the moment its requested data is valid on the bus (asynchronously, but the required circuits are very complicated). All of these penalties occur less often when cache miss rates are lowered. Thus, a fully associative LRU cache would help more than a higher FSB clock. Remember, most code in a typical program is spent in error handling.
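
To put rough numbers on this (my own illustration, not Pete's figures), here is a small Python sketch. The first part shows how a 4.6 ns cell access still gets rounded up to the next bus clock edge, so a faster bus clock trims the wasted wait; the second uses the standard average-memory-access-time formula to show that cutting the miss rate buys more than modestly speeding up the miss path. The clock rates, miss rates, and penalties below are assumed values for illustration only.

import math

# 1) Synchronous quantization: the access completes only at the next clock edge.
def effective_access_ns(access_ns, clock_mhz):
    cycle_ns = 1000.0 / clock_mhz
    return math.ceil(access_ns / cycle_ns) * cycle_ns

print(effective_access_ns(4.6, 133))  # ~7.52 ns at a 133 MHz bus
print(effective_access_ns(4.6, 200))  # 5.00 ns at a hypothetical 200 MHz bus

# 2) Average memory access time = hit time + miss rate * miss penalty.
def amat(hit_ns, miss_rate, miss_penalty_ns):
    return hit_ns + miss_rate * miss_penalty_ns

print(amat(3.0, 0.05, 60.0))  # 6.00 ns baseline
print(amat(3.0, 0.05, 45.0))  # 5.25 ns with a 25% faster miss path (faster FSB)
print(amat(3.0, 0.02, 60.0))  # 4.20 ns with a lower miss rate (FA-LRU-like cache)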

Let us take the opposite tack. If critical-word latency is SO important, why aren't cache lines just one data word wide? Because the FSB would be highly underutilized. To get high utilization of the FSB, either transfer times must be longer than the latency, or more than one memory transaction must be in flight at the same time. This is why SCSI allows much higher throughput than IDE under server loads: SCSI gives up a little critical-word latency to keep more data transfers in flight, and in those cases the tradeoff is more than justified. It is the same with memory systems. For every case where you want lower first-word latency, I can come up with a system that works better by accepting additional first-word latency in exchange for more requests being satisfied per unit of time (many cycles).
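
As a rough sketch of the throughput side (again my own illustrative numbers, nothing measured), Little's law says sustained throughput is roughly the number of requests in flight divided by the per-request latency, so allowing several overlapping transfers wins even if each one takes a little longer:

def throughput_bytes_per_sec(in_flight, latency_us, bytes_per_request):
    # Little's law: sustained throughput ~= requests in flight / per-request latency
    return in_flight * bytes_per_request / (latency_us * 1e-6)

# One outstanding 4 KB request at 100 us (IDE-like): ~41 MB/s
print(throughput_bytes_per_sec(1, 100, 4096) / 1e6)

# Eight outstanding 4 KB requests at 120 us each (SCSI-like, slightly
# worse per-request latency but far more work in flight): ~273 MB/s
print(throughput_bytes_per_sec(8, 120, 4096) / 1e6)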

Pete