Technology Stocks : CYRIX / NSM


To: Scumbria who wrote (27194)6/8/1998 2:24:00 AM
From: Joe NYC  Respond to of 33344
 
Scumbria,

You are right. The approach I was thinking about (with one large, shared L2) would have a huge downside: it could be accessed by only one processor at a time.

Still, I think that in the real world, multiprocessor systems deliver less than people expect. Your typical business multiprocessor system (as opposed to one for engineering or scientific applications) is a server: database, file, web. In server applications, the cache hit rate is much lower than in typical apps. Server apps act on huge data sets sitting in RAM or (hopefully not) on disk. Even though most of the code might be in L2, the data very often is not.

The presence of large onboard caches makes it very unlikely (generally less than 0.2% probability) that any load will generate a bus cycle.

I don't know where you got that number. Are you sure you are not off by a factor of 10? A 98% hit rate in a typical app sounds reasonable. In servers, it's probably more like 80% or less.
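The difference those hit rates make can be sketched with a quick back-of-envelope calculation. The latency numbers below are illustrative assumptions, not measurements of any real CPU:

```python
# Average cycles per load under two L2 hit rates.
# hit_cycles and miss_cycles are made-up illustrative values.
def effective_latency(hit_rate, hit_cycles=10, miss_cycles=100):
    """Hits are served by L2; misses go out over the bus."""
    return hit_rate * hit_cycles + (1 - hit_rate) * miss_cycles

desktop = effective_latency(0.98)  # typical app: 11.8 cycles per load
server = effective_latency(0.80)   # server workload: 28.0 cycles per load
```

Under these assumed latencies, dropping from a 98% to an 80% hit rate more than doubles the average load latency, which is the kind of gap that separates benchmark expectations from server reality.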

Anyway, if you have a server with 8 CPUs and each one needs to snoop the other seven's L2s 20% of the time, that could be a lot of overhead.
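That overhead can be put in rough numbers. Assuming each L2 miss has to probe every other CPU's cache (a simplifying assumption; real snoop protocols vary):

```python
# Rough snoop-traffic estimate for a symmetric multiprocessor:
# every L2 miss probes the caches of all other CPUs.
def snoops_per_access(n_cpus, miss_rate):
    # Each miss generates (n_cpus - 1) snoop probes.
    return miss_rate * (n_cpus - 1)

# 8 CPUs at a 20% miss rate: 1.4 snoop probes per memory access,
# on average, from each CPU -- and all 8 CPUs generate this traffic.
load = snoops_per_access(8, 0.20)
```

So even before counting the actual memory transfers, the shared bus is carrying more snoop probes than useful accesses, which is why SMP scaling falls off on cache-unfriendly server workloads.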

Joe