Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: Ali Chen who wrote (227198), 3/2/2007 12:18:37 AM
From: fastpathguru
 
Now, suppose that a Barcelona-based server got better SPECrate scores, at a stretch, and the owner of your shop went ahead and bought it: a 4-core, 1-chip server.

'Kay...

However, since individual Core 2 workstations outperform AMD offerings by 30-50% (peak or not peak, whatever), all employees got Core 2 desktops.

<doing doubletake> Wha?

But you just said Barcelona got better SPECrate scores. Barcelona is a Socket F part, and is used in servers and workstations.

theinquirer.net

Since in your scenario A) Barcelona is faster at SPECrate, B) purchasing decisions are based on SPECrate, and C) Barcelona is a workstation/server chip, the workstations will be Barcelona-based.

(Unless you want to start adding artificial conditions to your scenario, merely to satisfy your argument...)

fpg



To: Ali Chen who wrote (227198), 3/2/2007 12:50:56 AM
From: pgerassi
 
Dear Ali:

At that shop, C2D was tested and found wanting: it did terribly with that shop's server workloads. But you think it's so great that it can power past its own architectural limitations. That's why you are dreaming. People keep telling you that single-task, single-thread benchmarks are not applicable to servers, but you live in a fantasy world where that's the only way to run a server. That's the way you run your workstation, so you assume it must be the way all servers are run.

Sorry, that is not true. Windows XP and earlier couldn't really use more than 3GB in any single process. DOS couldn't use more than 1MB, since it's a 16-bit OS; there were extenders that pushed it to 16MB, but Microsoft pushed everyone to Windows. Even bottom-end entry-level PCs come with 256MB or more, and there is not one shop I know of that runs workstations on DOS. Get real!
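For reference, the address-space ceilings behind those numbers work out as follows. This is just a rough Python sketch (the helper name is mine, and the exact usable figure on 32-bit Windows depends on the 2GB/3GB user/kernel split):

# Flat address-space sizes implied by the address width (illustrative only;
# real usable memory is lower because of OS/kernel reservations).
def addressable_bytes(address_bits):
    return 2 ** address_bits

MB, GB = 2 ** 20, 2 ** 30

print(addressable_bytes(20) // MB, "MB")  # real-mode DOS: 20-bit addresses -> 1 MB
print(addressable_bytes(24) // MB, "MB")  # DOS extenders: 24-bit addresses -> 16 MB
print(addressable_bytes(32) // GB, "GB")  # 32-bit Windows: 4 GB total, ~2-3 GB per process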

Standard VAR turnkey servers 10 years ago had 10GB or more running RDBMS-based applications; that was the minimum for a 100GB data set. Hundreds of users would be on a single server. And here you think that just one server runs each task, and that it has only one thread running everything.

Here is a server running a single instance of Oracle for an ERP application:


UID        PID  PPID   C  STIME  TTY       TIME  CMD
oracle    1307     1  12  03:01  ?     60:54:27  ora_pmon_orcl
oracle    1309     1   3  03:01  ?     12:15:41  ora_dbw0_orcl
oracle    1311     1   2  03:01  ?     08:21:12  ora_lgwr_orcl
oracle    1313     1  18  03:01  ?     57:38:07  ora_ckpt_orcl
oracle    1315     1   2  03:01  ?     09:06:52  ora_smon_orcl
oracle    1317     1   0  03:01  ?     01:05:38  ora_reco_orcl


Notice that's six processes to run one instance of Oracle for one simple data set on Red Hat Advanced Server 3.0 (I know it's a few years old; the old adage, it works, so don't upgrade it). Of course it's being hit hard at the time this was run (3:34PM), since many users are on the ERP package, and those are 2 to 5 threads per user running X terminals in either Linux or Windows. Total threads (processes) on that server at the above time: 248.
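If you want to tally the same thing on your own box, here's a minimal Python sketch, not what was run above: it assumes a Linux server with a standard ps, and the ora_ prefix matches the listing above but the SID and counts will differ per installation.

# Count processes per user and list Oracle background processes
# (pmon, dbw0, lgwr, ckpt, smon, reco, ...).
import subprocess
from collections import Counter

out = subprocess.run(["ps", "-eo", "user,comm"],
                     capture_output=True, text=True, check=True).stdout

per_user = Counter()
oracle_procs = []
for line in out.splitlines()[1:]:           # skip the header row
    user, _, comm = line.strip().partition(" ")
    per_user[user] += 1
    if comm.strip().startswith("ora_"):     # Oracle background processes
        oracle_procs.append(comm.strip())

print("total processes:", sum(per_user.values()))
print("oracle background processes:", oracle_procs)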

This should blow your dream of a single base thread running on each server.

Since real-world environments and tests do not fly for you, you must belong to the ivory-tower set who look at how good things are on paper while dismissing real-world tests that contradict their POV. So many FUBARs are caused by that set. They state that it is impossible for the bumblebee to fly, ignoring the trillions that fly just fine.

BTW, your case is not practical. Having C2D desktops doesn't make the server any faster. A business with an existing 2- or 4-socket Socket F Opteron server can simply drop in 2 or 4 Barcelonas.

Clovertown didn't work that way, as many existing motherboards couldn't support it. Besides, not many bought two-socket servers with FBDIMMs, since P4 performance sucked. No multi-socket registered-DDR2 motherboard supported Woodcrest or Clovertown, and there is no 4-socket version of C2D. So people would have to buy brand-new servers with special high-cost memory and new disks. That is far more expensive than dropping in 2 or 4 new quad-core CPUs. It is better to spend $3-10K on the server than on 50 C2D PCs; the latter don't help get the job done much quicker. $10K upgrades the CPUs on the server, which takes most technicians less than an hour, and boom, the jobs get 20-100% faster.
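To put rough numbers on that trade-off, here's a back-of-the-envelope Python sketch. The dollar figures and speedups are just the ranges quoted in this post (with midpoints assumed), the desktop-path gain is an assumed near-zero placeholder, and the helper name is mine; none of it is a measurement.

# Rough dollars spent per percentage point of server-job speedup.
def cost_per_percent_speedup(cost_dollars, speedup_percent):
    return cost_dollars / speedup_percent

# Server path: drop-in Barcelona CPUs, ~$3-10K, jobs 20-100% faster (midpoints assumed).
server_cpu_swap = cost_per_percent_speedup(10_000, 60)
# Desktop path: ~$10K on 50 C2D client PCs; server-side gain assumed to be small (~5%).
desktop_refresh = cost_per_percent_speedup(10_000, 5)

print(f"server CPU swap: ~${server_cpu_swap:,.0f} per percentage point of speedup")
print(f"desktop refresh: ~${desktop_refresh:,.0f} per percentage point of speedup")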

The idiot who upgraded the PCs thinking the server jobs would get faster got little benefit for the $10+K spent on PCs that run glorified terminal (client) software. The smart man spent $3-10K, got the work out 20-100% faster, and could use the very same PCs he already had. I'd fire the first guy and give the latter a raise. How fast a PC do you need if all you do is run a terminal to the server, even including light office-type tasks? A reasonably cheap, energy-efficient integrated PC covers that. You might spend more on the monitor, keyboard and mouse, because your users are at their desks most of the day, and good ones last through many PC upgrades.

Pete



To: Ali Chen who wrote (227198), 3/2/2007 2:40:10 AM
From: Petz
 
re: However, since individual Core 2 workstations outperform AMD offerings by 30-50% (peak or not peak, whatever), all employees got Core 2 desktops.

And what is the basis for your premise, given the following statement by AMD:

The quad-core chip also will outperform AMD's current dual-core Opterons on "floating point" mathematical calculations by a factor of 3.6 at the same clock rate, he said.
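A quick back-of-the-envelope on what that 3.6x would imply per core, as a rough Python sketch only (it assumes the comparison is one quad-core Barcelona chip against one dual-core Opteron chip at the same clock):

# Per-core floating-point gain implied by AMD's quoted chip-level figure.
quoted_chip_gain = 3.6    # quad-core Barcelona vs. dual-core Opteron, same clock (quote above)
core_count_ratio = 4 / 2  # twice the cores per chip

per_core_gain = quoted_chip_gain / core_count_ratio
print(f"implied per-core FP gain at the same clock: {per_core_gain:.1f}x")  # ~1.8x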

Does C2D or even Clovertown do that?

Maybe C2D is "fast enough"?

Petz