Politics : Formerly About Advanced Micro Devices


To: Ali Chen who wrote (85127) 1/5/2000 12:13:00 PM
From: Saturn V
Ref- < And if "adjacent", in which direction? What if the code requires references both forward and backward? And don't you think that your "implicit prefetch" will cause significant bus constipation and further increase memory latency? >

By adjacent I mean both directions, i.e., the next cache line and the previous cache line. This operation is not performed if those cache lines are already in L2.

Yes, the implicit prefetch will cause additional bus traffic and therefore requires higher bandwidth; this is where the higher bandwidth of RAMBUS can be useful. Since most L2 misses would be eliminated, the wait states caused by main-memory latency would be eliminated along with them.

Nothing regarding Willamette has been published, so I am free to speculate. I hope this automatic prefetch feature will be present on future processors, since it would significantly improve performance.