WBMW,
re: Standard system benchmarks
1) How do you choose the benchmarks? AMD would want ones that benefit them, like Serious Sam for gaming or PC Worldmark for business apps, while Intel would push for games like Quake III or Comanche 4, and business benchmarks like SysMark 2002.
That's why it's independent, not only from the component manufacturers but also from the OEMs. The independent testing agency makes the decision on benchmarks.
2) Let's assume that everyone were to agree to use both kinds of benchmarks, "to even things out". What happens when new versions of these benchmarks come out, and the results are very different? Are you going to keep the benchmarks static and prevent new applications from being part of the performance average, or would you use the new benchmarks and retroactively alter all previous results? The third option would be to begin using a different ranking system as soon as new benchmarks become available, but that would prevent current systems from correlating with results from older systems, so there would be no good way to compare.
That's the engineer in you thinking. Of course they couldn't be absolutely precise, but they would continue to test the latest and greatest. Would there have to be some level of discretion in their rating system? Probably.
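Just to sketch what I have in mind (hypothetical names and made-up numbers, not any real benchmark data): the agency could keep ratings roughly comparable across benchmark revisions by always scoring against a fixed reference box, something like this:

```python
from math import prod

# Hypothetical sketch: score every benchmark as a ratio against a fixed
# reference system, then take the geometric mean, so the rating stays
# roughly comparable even when the agency swaps in newer benchmark versions.

def system_rating(scores, reference_scores):
    """Geometric mean of per-benchmark ratios vs. the reference box (reference = 100)."""
    ratios = [scores[name] / reference_scores[name] for name in reference_scores]
    return 100 * prod(ratios) ** (1 / len(ratios))

# Made-up numbers purely for illustration (higher is better for each entry):
reference = {"game_fps": 120.0, "office_score": 150.0, "encodes_per_hour": 38.0}
candidate = {"game_fps": 150.0, "office_score": 165.0, "encodes_per_hour": 45.0}

print(round(system_rating(candidate, reference)))  # ~118, i.e. roughly 18% above the reference box
```

When a new benchmark version comes out, you re-run the reference box on it too, so the scale moves with the times without pretending to be exact.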
So what do you do? Do you force an OEM to run an exhaustive set of benchmarks on every system they want to advertise? Some OEMs have dozens of PC models, each with different processors, video adapters, and memory configurations. Many OEMs even let the customer build their own configuration online. What do you do then? Benchmark every permutation in advance? Certainly there would be no ROI in that!
The OEMs submit the units to the testing agency; the agency runs the tests and hands back the ratings. The testing agency probably bills the OEMs to cover costs, but it's non-profit.
So where would a benchmarking system be worth it? As far as I can think, retail would be the only place, and as far as retail goes, people generally can't tell the difference between 10% and 20% in performance, nor do they need all the performance that they buy. That's why the "sweet spot" in retail has generally moved down to <$800 boxes. People aren't buying performance, so why would you need a benchmark that highlights performance advantages?
All the OEMs, including Dell, have "standard" configurations. Those are the ones you test and rate. If the end user, corporate or consumer, wants to alter the standard config, they can. If an OEM wants to submit different configs to the rating agency, they can. My guess would be that they would rate anything they advertise.
"Make-everybody-happy" benchmark suite
It wouldn't be a "benchmark suite", it would be an independent rating agency.
I don't see the industry being eager to take on such a task.
There we agree. Too much invested in the status quo.
I'm just speaking for the typical end user, who now has a jumble of measurements that mean absolutely nothing. The only way to make a rating system meaningful would be to rate systems independently. It wouldn't be perfect, and it wouldn't be precise. But it would have some relationship to reality.
And this is just off the top of my head; I haven't given it that much thought. Obviously better minds than mine would have to devise a good, working scheme for the tests and the scales.
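For what it's worth, the "scales" half could be as simple as collapsing that composite number into a coarse grade, since nobody needs decimal-point precision. Purely a made-up illustration, with invented cut-offs:

```python
# Hypothetical sketch: collapse a composite score (reference system = 100)
# into a coarse consumer-facing grade, since the rating only needs to bear
# some relationship to reality, not be exact.

def consumer_grade(composite_score):
    """Map a composite score onto a 1-5 grade using made-up cut-offs."""
    bands = [(80, 1), (95, 2), (110, 3), (130, 4)]
    for upper, grade in bands:
        if composite_score < upper:
            return grade
    return 5

print(consumer_grade(118))  # -> 4
```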
Anything would be better than what we have now...
John