Technology Stocks : Intel Corporation (INTC)

To: Road Walker who wrote (169748), 8/22/2002 12:09:08 PM
From: wanna_bmw
 
John, Re: "Well, you could have one average number for business apps, one average for games, etc. And then you could average those for an overall system performance rating. The consumer could make his decision based on his expected use of the system."
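Just so we're talking about the same thing, here's a quick sketch of the scheme you describe: one average per category, then an overall rating across categories. All the scores below are numbers I made up for illustration; I've used a geometric mean, since that's how suites like SPEC combine results.

```python
from statistics import geometric_mean

# Hypothetical normalized scores (higher is better); every number
# here is invented purely to illustrate the averaging scheme.
scores = {
    "games":    {"Quake III": 1.10, "Serious Sam": 0.95, "Comanche 4": 1.02},
    "business": {"SysMark 2002": 1.05, "PC Worldmark": 0.98},
}

# One average per category, then an overall system rating.
category_avgs = {cat: geometric_mean(s.values()) for cat, s in scores.items()}
overall = geometric_mean(category_avgs.values())
```

The consumer would look at the category average that matches his usage, or the overall number if he has no idea. Simple enough on paper, which is exactly where the trouble starts.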

I see a few issues here.

1) How do you choose the benchmarks? AMD would want ones that benefit them, like Serious Sam for gaming or PC Worldmark for business apps, while Intel would push for games like Quake III or Comanche 4, and business benchmarks like SysMark 2002.

2) Let's assume that everyone were to agree to use both kinds of benchmarks, "to even things out". What happens when new versions of these benchmarks come out, and the results are very different? Do you keep the benchmark set static and shut new applications out of the performance average, or do you adopt the new benchmarks and retroactively alter all previous results? The third option would be to switch to a different ranking system as soon as new benchmarks become available, but then current systems wouldn't correlate with results from older systems, so there would be no good way to compare.
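The version problem isn't hypothetical, either. Here's a toy example (all scores invented) showing how swapping in a new version of one test can flip the ranking of the same two machines:

```python
from statistics import geometric_mean

# Hypothetical scores for two systems on a two-test suite; every
# number here is made up purely to illustrate the version problem.
v1_suite = {"System A": [1.10, 0.95], "System B": [1.00, 1.00]}
v2_suite = {"System A": [1.10, 0.80], "System B": [1.00, 1.00]}  # v2 stresses A's weak spot

def ranking(suite):
    # Order systems by the geometric mean of their test scores.
    return sorted(suite, key=lambda s: geometric_mean(suite[s]), reverse=True)

print(ranking(v1_suite))  # System A leads under v1
print(ranking(v2_suite))  # System B leads under v2: same machines, flipped order
```

So whichever of the three options you pick, somebody's published rating becomes wrong the day the suite revs.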

3) Now, let's say a solution to that were found sometime in the future, and a perfect, static set of benchmarks could be used that everyone agreed was fair for comparing systems (I'll try not to bring up the fact that this is how SPEC got started, since that would just rat-hole the entire argument <g>). But, let's say it happened.

As you have already mentioned, differences at the system level can really alter performance, sometimes more than the CPU does. With new video adapters coming out every few months, and new memory standards that also play a large part in performance, you really need a system-level benchmark, because the CPU tells only part of the story.

So what do you do? Do you force an OEM to run an exhaustive set of benchmarks on every system they want to advertise? Some OEMs have dozens of PC models, each with different processors, video adapters, and memory configurations. Many OEMs even let the customer build their own configuration online. What do you do then? Benchmark every permutation in advance? Certainly there would be no ROI in that!
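To put some rough numbers on that last point, here's a back-of-the-envelope count for a single configure-to-order line. The option lists are invented, but they're typical of what an OEM configurator offers today:

```python
from itertools import product

# Hypothetical option lists for one configure-to-order PC line; the
# specific parts and counts are made up for illustration.
cpus   = ["P4 1.8GHz", "P4 2.0GHz", "P4 2.4GHz", "Athlon XP 2000+", "Athlon XP 2200+"]
video  = ["GeForce4 MX", "GeForce4 Ti", "Radeon 8500", "integrated"]
memory = ["256MB PC2100", "512MB PC2100", "256MB RDRAM", "512MB RDRAM"]
drives = ["40GB", "80GB", "120GB"]

# Every orderable combination is a distinct system to benchmark.
configs = list(product(cpus, video, memory, drives))
print(len(configs))  # 5 * 4 * 4 * 3 = 240 configurations, for ONE model line
```

Multiply that by dozens of model lines and a full benchmark suite per configuration, and the lab time gets absurd fast.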

So where would a benchmarking system be worth it? As far as I can tell, retail would be the only place, and in retail, people generally can't tell the difference between 10% and 20% in performance, nor do they use all the performance they buy. That's why the "sweet spot" in retail has steadily moved down to <$800 boxes. People aren't buying performance, so why would you need a benchmark that highlights performance advantages?

Given the issues with a generalized, "make-everybody-happy" benchmark suite, I don't see the industry being eager to take on such a task. Why do you think you haven't heard much from AMD's TPI (True Performance Initiative)? It's probably because executing on their charter is harder work than it sounds on PowerPoint slides.

wbmw