Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: jcholewa who wrote (27054)1/31/2001 10:53:29 PM
From: TechieGuy-alt
 
It is quite unusual for multiple threads or processes to share data in such a fine-grained fashion.

Mostly, multiple processes operate on completely separate data sets.

But I guess that, since each processor has to service half as many threads, its cache gets evicted that many fewer times, and we know that while a cache is being refilled from memory the processor core is mostly stalled.

In other words, it's mostly an issue of better cache bandwidth utilization.

[Edit: I just got to Joseph Halalda's response where he basically states the same.]

Maybe a processor designer can shed more light on this issue. I'm really an embedded designer, and when we use multiple processors they are usually completely separate from each other, with their own memory, bus, etc.

It's much more fun that way (not to mention more deterministic and less complicated).

TG