Technology Stocks : CYRIX / NSM


To: Joe NYC who wrote (27184), 6/8/1998 12:56:00 AM
From: Scumbria
 
RE: "In a multiprocessor system, it would be nice to have 2+ CPUs share one memory controller and one very large L2 cache. With 2+ CPUs, each one in its own packaging, each CPU needs to check all of the L2s (if I understand it correctly) before it accesses the DRAM. So more CPUs with their own L2s will generate a lot of traffic on the system bus."

Joe,

The reason for having local L2s is to avoid having to access the system bus (and thus DRAM). That is why Intel is able to charge a huge premium for Xeon processors with large onboard L2 caches.
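
The effect can be put in rough numbers: every L2 miss turns into a system-bus transaction, so per-CPU bus traffic scales with the miss rate. A minimal sketch, with illustrative hit rates not taken from the post:

```python
def bus_accesses(mem_refs, l2_hit_rate):
    """System-bus transactions generated by one CPU: one per L2 miss."""
    return mem_refs * (1.0 - l2_hit_rate)

# Hypothetical hit rates for a small vs. a large (Xeon-class) L2:
small_l2 = bus_accesses(1_000_000, 0.90)   # ~100,000 bus transactions
large_l2 = bus_accesses(1_000_000, 0.98)   #  ~20,000 bus transactions
print(small_l2, large_l2)
```

With four such CPUs sharing one bus, the difference between the two miss rates is the difference between a saturated bus and a comfortable one, which is the premium being paid for.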

Scumbria




To: Joe NYC who wrote (27184), 6/8/1998 3:24:00 AM
From: Paul Engel
 
Joe - Re: "If anybody understands this better, please feel free to jump in."

"With 2+ CPUs, each one in its own packaging, each CPU needs to check all of the L2s (if I understand it correctly) before it accesses the DRAM. So more CPUs with their own L2s will generate a lot of traffic on the system bus."

For multiprocessor systems in which each processor has its own local cache, "Bus Snooping" is employed.

Each processor monitors the external bus during any bus activity in which it is not itself involved.

This processor will then be able to spot memory addresses that are contained in its own local cache but which are fetched from or written to main memory by another processor.

By snooping the bus, it can spot data in its internal cache that must be considered stale or "old" - the most current values for that memory address now live in main memory or in another CPU's local cache.

By flagging/marking the stale contents of the internal cache (with entries in the cache TAG RAM), that same processor will encounter this flagged data during a later cache access and will recognize that the contents of that address are no longer valid. A cache "miss" is generated and fresh data is fetched from main memory.
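
The invalidate-and-refetch sequence above can be sketched in a few lines. This is a deliberately simplified two-state (valid/invalid) model, not a full MESI protocol; the class and method names are invented for illustration:

```python
class Cache:
    """One CPU's local cache: addr -> (value, valid_flag).
    The valid_flag stands in for the TAG RAM entry the post describes."""
    def __init__(self):
        self.lines = {}

    def snoop_write(self, addr):
        # Snooped another CPU writing this address on the bus:
        # flag our copy as no longer valid.
        if addr in self.lines:
            value, _ = self.lines[addr]
            self.lines[addr] = (value, False)

    def read(self, addr, memory):
        line = self.lines.get(addr)
        if line is None or not line[1]:          # miss, or flagged invalid
            self.lines[addr] = (memory[addr], True)  # refetch from main memory
        return self.lines[addr][0]

memory = {0x100: 1}
cpu_a = Cache()
cpu_a.read(0x100, memory)          # CPU A caches the old value (1)
memory[0x100] = 2                  # another CPU writes through to memory...
cpu_a.snoop_write(0x100)           # ...and CPU A snoops that bus write
print(cpu_a.read(0x100, memory))   # flagged entry forces a miss -> prints 2
```

Without the snoop step, CPU A would happily keep returning the stale 1, which is exactly the coherence problem snooping exists to solve.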

For some CPUs, when snooping detects that local cache contents may no longer be valid, the cache controller can initiate a "refreshing" operation, replacing the stale local copy with fresh data from main memory during idle periods on the bus. This can avoid a subsequent cache miss while minimizing the extra activity on the external bus.

As others have noted, the larger the local cache, the less load a CPU will place on the external system bus. Hence, specialized server CPUs will generally have as much cache as a customer can afford.

Paul