Linux kernel benchmark: Sorry, everyone, but I think too much is being made of this. By all indications, this was a quick and dirty job on a borrowed system, not a definitive test.
When benchmarking, it's SOP to reboot the system between runs.
Well, yes, under ideal circumstances. But not in this particular instance. From newsforge.com, the source cited by Anand:
For this review, the only benchmarks available to me were 2.4.0ac12 compile times; however, they should more than suffice. The first kernel compile I did was a single processor compile of 2.4.0ac12. The kernel was configured with the default options for "make config." You can reproduce this fairly easily by typing "make config" and holding down the enter key for a while. The kernel was then compiled using "time make bzImage." The dual processor results were then done by first doing "make clean" then "time make -j3 bzImage."
So, first of all, this doesn't appear to be an ultra-clean test. My guess at what skewed the numbers is "make -j3 bzImage" vs. "make bzImage". Without the -j3, "make" runs single-threaded, waiting for each compilation to complete before starting the next one. With -j3, it starts and keeps up to three compilations running at once. My guess is that the super-linear improvement comes from running three jobs in parallel instead of two. Because the compilation process is at times I/O bound, there are spare cycles available to feed an extra compilation job. The correct comparison for multiple-CPU speedup would be "make -j3" on a single processor vs. "make -j3" on the dual system. But that would require being able to disable one of the processors.
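If someone wanted to redo that comparison, something like the following is roughly what I have in mind (just a sketch; the single-CPU run assumes you can boot the SMP box with the kernel's "maxcpus=1" option or a non-SMP kernel build, and -j3 is simply the value the original tester used):

    # single-CPU run: boot with "maxcpus=1" (or a uniprocessor kernel) first
    make clean
    time make -j3 bzImage

    # dual-CPU run: reboot with both processors enabled, same job count
    make clean
    time make -j3 bzImage

That way the only variable is the number of processors, not the number of make jobs.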
The tester is also a little off in calling the non-parallel make "single processor." It's basically a single-task compilation, but you could still get some benefit from a second processor by running kernel and user code in parallel, via file-system read-ahead/write-behind. In general, though, this just doesn't bear the level of conjecture that's come up here.
-Win.