To: Junkyardawg who wrote (16885), 2/26/2001 4:51:38 PM
From: PMS Witch
 
Benchmark ...

In the old days, cabinetmakers would put marks on their benches so they could compare the piece they were working on at the moment to the sizes of the other pieces. This, they hoped, would ensure everything fit together properly at assembly time.

Horsepower is another kind of benchmark. In the early days of engine development, probably steam, most people knew very well how much work a horse could perform but had only the foggiest idea of how much work to expect from an engine. Comparing an engine's ability to a horse's was a marketing stroke of genius.

Computer people use the term to describe the process of comparing some new or innovative development against some well-known and well-understood standard. Again, as in the earlier examples, one needs a clearly defined and widely accepted standard on which to base comparisons.

In early PC days, the standard was the out-of-the-box (8088) IBM PC-XT. In fact, systems were often rated by how many XTs' worth of computing ability they offered. A typical 486 would benchmark as the equivalent of 40 XTs.
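The "40 XTs" rating above is just a ratio against a baseline machine. A minimal sketch of that idea, with hypothetical scores (not real XT or 486 measurements):

```python
# Relative performance expressed as "how many baseline machines":
# the ratio of a system's benchmark score to the baseline's score
# on the same test. All numbers here are illustrative.

def xt_equivalents(system_score: float, xt_score: float) -> float:
    """Return the system's score as a multiple of the XT baseline."""
    return system_score / xt_score

# A machine scoring 40x the baseline rates as "40 XTs".
print(xt_equivalents(40.0, 1.0))
```

The same ratio trick underlies most single-number benchmark ratings: pick a reference machine, define its score as 1, and report everything else as a multiple.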

Manufacturers learned that great benchmark results often translated into great sales results. When their machines' published numbers put them some distance from the top of the heap, they were quick to blame the benchmark, accuse the evaluators of bias, and claim their machines were optimised for 'real-world' duties extending far beyond the capabilities being tested. Still, consumers place more faith in third-party evaluations.

To remedy this credibility gap, additional benchmarks were developed. Computers with fast disks would shine in disk-intensive tests. Computational tests favoured systems with hardware floating-point support. Text-manipulation tests looked good on machines with small word sizes and fast memory. Graphics numbers would be impressive on machines with optimised video cards. Decision makers faced a population explosion of new benchmarks designed to measure every aspect of hardware and software. Today, it seems that whatever you want to measure, a benchmark exists.

This leads to a new problem: What do all the numbers mean, and which ones are important?

The answer to this question varies for each individual, because we all use our systems differently.

The link you provided gave the results of many, many measurements for one particular system. Without another system to compare the numbers against, they are not overly useful. For example: the system in your link had a hard disk transfer rate of 62, where on my system the number is 53, a little less. I already know that disk transfer speed is an area where my system is a little weak, so these numbers don't surprise me. If the laptop in your link had returned a value of 120 or 20, there would be some significant issue, either good or bad, worthy of further investigation.

If you post what you hope to learn from the testing, perhaps someone can point us toward the benchmark dealing with the issue, and offer some interpretation of the results.

Cheers, PW.