To: Joe NYC who wrote (27184) 6/8/1998 3:24:00 AM From: Paul Engel
Joe - Re: "If anybody understands this better, please feel free to jump in."

"With 2+ CPUs, each one in its own packaging, each CPU needs to check all of the L2s (if I understand it correctly) before it accesses the DRAM. So more CPUs with their own L2s will generate a lot of traffic on the system bus."

In multiprocessor systems where each processor has its own local cache, "bus snooping" is employed. Each processor monitors the external bus whenever there is bus activity in which it is not itself involved. This lets a processor spot memory addresses that are held in its own local cache but are being fetched from or written to main memory by another processor.

By snooping the bus, a processor can identify data in its internal cache that must be considered "dirty" or stale - the most current value for that address now resides in main memory or in another CPU's local cache. By flagging that cache data as "dirty" (with entries in the cache TAG RAM), the processor will encounter the flag on a later access to that address and recognize that its copy is no longer valid. A cache "miss" is generated and fresh data is fetched from main memory.

On some CPUs, when snooping detects that local cache contents may no longer be valid, the cache controller can initiate a "refreshing" operation, replacing the stale local data with fresh data from main memory during idle periods on the bus. This can avoid a subsequent cache miss while minimizing activity on the external bus.

As others have noted, the larger the local cache, the less load a CPU places on the external system bus. Hence, specialized server CPUs will generally carry as much cache as the customer can afford.

Paul
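The snoop-and-invalidate behavior described above can be sketched in a few lines of Python. This is a toy model, not any real CPU's coherence logic: the class names, the simple write-through policy, and the per-line valid flag are all my own simplifications. Each cache attaches to a shared bus; when one CPU writes an address, the bus broadcasts it and every other cache marks its copy invalid, so the next read there misses and refetches from main memory.

```python
# Toy model of bus snooping with invalidate-on-write (hypothetical names).

class Bus:
    """Shared system bus: every attached cache snoops writes by others."""
    def __init__(self):
        self.caches = []

    def attach(self, cache):
        self.caches.append(cache)

    def broadcast_write(self, writer, addr):
        # Every cache except the writer snoops the transaction.
        for cache in self.caches:
            if cache is not writer:
                cache.snoop(addr)


class Cache:
    """A private per-CPU cache with a valid bit per line."""
    def __init__(self, name, bus):
        self.name = name
        self.lines = {}        # addr -> (value, valid_flag)
        self.bus = bus
        bus.attach(self)

    def read(self, addr, memory):
        line = self.lines.get(addr)
        if line is not None and line[1]:
            return line[0]             # hit on a valid line
        value = memory[addr]           # miss (or stale line): refetch
        self.lines[addr] = (value, True)
        return value

    def write(self, addr, value, memory):
        memory[addr] = value           # write-through, for simplicity
        self.lines[addr] = (value, True)
        self.bus.broadcast_write(self, addr)

    def snoop(self, addr):
        # Another CPU wrote this address: our copy is now stale ("dirty"
        # in the sense used above), so clear its valid flag.
        if addr in self.lines:
            value, _ = self.lines[addr]
            self.lines[addr] = (value, False)


memory = {0x100: 1}
bus = Bus()
cpu0 = Cache("CPU0", bus)
cpu1 = Cache("CPU1", bus)

cpu0.read(0x100, memory)               # both CPUs load the line
cpu1.read(0x100, memory)
cpu0.write(0x100, 2, memory)           # CPU1 snoops this and invalidates
assert cpu1.lines[0x100][1] is False   # CPU1's copy is now marked stale
assert cpu1.read(0x100, memory) == 2   # miss, refetched from main memory
```

Real protocols (MESI and its relatives) track more states per line than a single valid bit, and write-back caches supply the data directly from the owning cache rather than always going through main memory, but the snoop-then-invalidate mechanism is the same idea.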