Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: TechieGuy-alt who wrote (27050) 1/31/2001 8:11:27 PM
From: jcholewa
 
> If I understand what you are implying (greater cache = lower miss rates etc.), then that is not true. Each
> processor still has 128+256K cache. The hit rate of each processor is still limited by its own cache size.

How about this?

If the data set of an MP-capable program is 640KB, wouldn't the single processor system experience *many* misses while the dual processor system experiences no misses? Under certain circumstances, I mean (perhaps two threads, each requiring half the data, so that processor A and processor B won't need data that is sitting in each other's cache).

In this case, the single processor system would be going out to main memory all the time, while the dual processor system would seldom need to.

No? I'm open to correction and reeducation. :)

    -JC
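
[Editor's note: a back-of-envelope sketch of JC's scenario, in Python. The 384KB figure is the Athlon's 128KB L1 plus 256KB L2 per processor; the 640KB working set and the even split between two threads are the hypothetical numbers from the post above, not measurements.]

# Sketch of the working-set argument: does the data stay resident in cache?
# All numbers are the hypothetical ones from the post, not measurements.

CACHE_PER_CPU_KB = 128 + 256      # 384KB of L1+L2 per Athlon processor
WORKING_SET_KB = 640              # total data touched by the program

def fits(working_set_kb, cache_kb):
    """True if the working set can stay resident in cache (no capacity misses)."""
    return working_set_kb <= cache_kb

# Single processor: both threads compete for one 384KB cache.
print("1 CPU :", "fits" if fits(WORKING_SET_KB, CACHE_PER_CPU_KB) else "thrashes")

# Dual processor: each thread's 320KB half lives in its own 384KB cache.
print("2 CPUs:", "fits" if fits(WORKING_SET_KB // 2, CACHE_PER_CPU_KB) else "thrashes")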



To: TechieGuy-alt who wrote (27050) 1/31/2001 8:54:54 PM
From: Joe NYC
 
TG,

Scumbria's explanation goes like this (I am paraphrasing): Suppose you have a bunch of processes or threads running at the same time, let's say 4. If you run them all on a single processor system in a multitasking environment, you have to evict a lot of useful data from the L1 and L2 just to bring in the next thread's data. The combined cache is 384KB for 4 threads.

Now suppose you have a dual processor system running the same 4 threads, 2 on each CPU. There is much less need for context switching, and when a switch does happen, each processor's 384KB of cache is shared by 2 threads instead of 4, so less data needs to be evicted when the context changes.

Joe
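
[Editor's note: a rough sketch of the paraphrased argument above, assuming the 4 threads are spread evenly across the processors and the same 128KB L1 + 256KB L2 per processor. It is just the arithmetic, not a model of real scheduler behavior.]

# How much cache each resident thread gets between context switches,
# assuming the threads are spread evenly across the processors.

CACHE_PER_CPU_KB = 128 + 256   # 384KB of L1+L2 per processor
THREADS = 4

for cpus in (1, 2):
    threads_per_cpu = THREADS // cpus
    per_thread_kb = CACHE_PER_CPU_KB / threads_per_cpu
    print(f"{cpus} CPU(s): {threads_per_cpu} threads share {CACHE_PER_CPU_KB}KB "
          f"-> roughly {per_thread_kb:.0f}KB of cache per thread")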