wanna, re: <Then again, I also agree with others on this board who think that arbitrary model numbers are a bad idea, simply because of the fact that they are arbitrary, and not based on any concrete standard of measurement.>
This *STUFF* is all arbitrary (IMO). Think of how *LITTLE* meaning the CPU performance has in the scheme of things.
By the time you shift MOBOs, HDDs, memory, settings, video, etc., then pick your test software, the tester can swing the performance comparison results to a large degree.
And what should be the measure of performance, anyway? The more synthetic the test, the farther away, in general, you get from 'real world', whatever that is.
In my world it's large databases, GUI development, and web apps. Others live in gameland. Some seem to want to run their screen saver as fast as they can [why else would you optimize SETI?]. Whatever!
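To make the point concrete, here's a minimal sketch (my own toy, in Python; the workload names and sizes are invented for illustration) of why the choice of test workload alone shapes what "faster" means. A compute-bound loop mostly rewards raw clock speed, while a memory-churning loop leans on the memory subsystem, so two machines can easily rank differently on the two tests:

```python
import time

def compute_bound(n):
    # Tight integer loop -- stresses the ALU, favors raw clock speed.
    total = 0
    for i in range(n):
        total += i * i
    return total

def memory_bound(n):
    # List allocation and strided sums -- leans on the memory
    # subsystem rather than pure CPU throughput.
    data = list(range(n))
    return sum(data[::2]) + sum(data[1::2])

# Time each "benchmark" separately; the ratio between the two
# numbers will differ from machine to machine, which is the point.
for name, work in [("compute", compute_bound), ("memory", memory_bound)]:
    start = time.perf_counter()
    work(200_000)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.4f}s")
```

Neither number is "the" performance of the box; whichever one the tester picks as the headline figure decides the winner.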
In how much of what you do could you tell the difference, in accomplishment, over the course of an hour or day, between a PIII-500 and a PIII-1G?
Can you notice it in web browsing? Word processing?
I can certainly tell the difference when I'm compiling or running a GUI software development environment, but what % of the market does that *KIND* of stuff?
When I'm looking at database & web servers I can see it, but a lot can be said for the parallel-cheap-hardware vs. single-fast-box argument.
And the Athlon/PIV generation is almost over.
tgpntdr