Politics : Formerly About Advanced Micro Devices


To: Tenchusatsu who wrote (44727)1/4/1999 2:36:00 AM
From: Craig Freeman
 
Tenchusatsu, you indirectly confirmed my thoughts: 1) main memory (not cache) does the bulk of the work in a larger server environment and 2) Xeon systems will be decked out better to make them appear faster.

My bet is that, all else being equal, a PII running at 4x112 will outperform a Xeon 450 -- when, as I described, the server is handling a large number of relatively random requests.

Craig



To: Tenchusatsu who wrote (44727)1/4/1999 3:30:00 AM
From: Jim McMannis
 
Can anyone explain this new SECC2 cartridge?
necxdirect.necx.com



To: Tenchusatsu who wrote (44727)1/4/1999 4:45:00 PM
From: Ali Chen
 
Tenchusatsu, <When the multiple CPUs in the server do their database transactions, they all operate on an enormous amount of data. This might suggest that the memory accesses are so random that no amount of L2 cache can help, but performance tests show that for Xeon, an increase in the L2 cache size does help to increase performance.>

So, are you referring to black magic here? This
passage suggests you are somewhat off the mark.
What are you doing in "server chipset" development
at Intel if you do not know your data locality
patterns?

I am not a server performance validator, but I
think I can help you. When a server processes
a transaction, it works with a data set dedicated
to that transaction. Once the data are retrieved
and cached, no other processor needs them
(because other transactions are most likely
completely independent of yours),
so no interprocessor cache contention occurs.
Therefore every processor is fairly independent,
and that's why SMP servers scale very well
with the number of processors. More L2 on each
processor lets it keep its own copy of the index
tables and further reduces system data traffic.
That's why more cache on the processor helps a lot
on servers. Something like that.
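The argument above can be sketched as a toy simulation (all names, cache sizes, and access counts here are made up purely for illustration): two CPUs each run an independent transaction against a disjoint working set, so after the initial misses each private cache serves its own CPU, and the two caches never hold a common line.

```python
import random

# Toy model of per-transaction data locality in an SMP server:
# each transaction touches only its own working set, so private
# caches never contend once the set is resident.
SET_SIZE = 32    # cache lines touched per transaction (arbitrary)
ACCESSES = 200   # repeated accesses within one transaction (arbitrary)

def run_transaction(txn_id, cache):
    """Simulate one transaction's accesses; return (hits, misses)."""
    working_set = [f"txn{txn_id}-line{i}" for i in range(SET_SIZE)]
    hits = misses = 0
    for _ in range(ACCESSES):
        line = random.choice(working_set)
        if line in cache:
            hits += 1
        else:
            misses += 1
            cache.add(line)  # set fits in cache, so no eviction modeled
    return hits, misses

# Two CPUs, each with a private cache, each running its own transaction.
cache_a, cache_b = set(), set()
hits_a, miss_a = run_transaction(1, cache_a)
hits_b, miss_b = run_transaction(2, cache_b)

# Disjoint working sets: the caches share no lines, so there is
# nothing for an interprocessor coherence protocol to fight over.
shared = cache_a & cache_b
print(f"CPU A: {hits_a} hits / {miss_a} misses")
print(f"CPU B: {hits_b} hits / {miss_b} misses")
print(f"lines shared between caches: {len(shared)}")  # always 0 here
```

Misses are bounded by the working-set size, so almost all accesses hit once the set is cached; that is the locality the post is describing.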

For the workstation applications, here are
the famous SPEC results for Xeons with 512k
and the most "advanced" 1024k of L2:

Xeon-400MHz   SPECint95   SPECfp95    price
-------------------------------------------------
512k L2          16.3        13.2      $780
1024k L2         16.5        13.7     $2,200
-------------------------------------------------
difference      (+1.2%)     (+3.8%)   (+182%)
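The deltas in the table are easy to sanity-check (the helper function is just for this illustration): the int and fp gains come out to about +1.2% and +3.8%, while the price jump from $780 to $2,200 works out to roughly +182%, i.e. about 2.8 times the money.

```python
# Sanity-check the percentage differences in the SPEC table above.
def pct_increase(old, new):
    """Percentage increase going from old to new."""
    return 100.0 * (new - old) / old

base_int, big_int = 16.3, 16.5      # SPECint95, 512k vs 1024k L2
base_fp, big_fp = 13.2, 13.7        # SPECfp95
base_price, big_price = 780, 2200   # list price in dollars

print(f"SPECint95: +{pct_increase(base_int, big_int):.1f}%")      # ~ +1.2%
print(f"SPECfp95:  +{pct_increase(base_fp, big_fp):.1f}%")        # ~ +3.8%
print(f"price:     +{pct_increase(base_price, big_price):.0f}%")  # ~ +182%
```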

No wonder the Xeon systems need to be souped up
("heavily decked") in order to cover this lack of a
performance "boost" -- the boost a customer would expect
from nice round numbers like a doubled L2 cache
at nearly 3X the price. Pity Intel's customers...