To: Reginald Middleton who wrote (23562) 11/15/1999 5:32:00 PM From: Bearded One
Right, Reggie. Rather than answer this myself, I'll simply include a portion of a Slashdot discussion from someone who explains what's going on in detail. If you can get through this, you might actually learn something:

Re:We must concede... (Score:2) by dennisp (root@darkpower.net) on Monday November 15, @02:06PM EST (#349)

.... Are these benchmarks useful in real-world situations? Well, yes and no. First we must realize that there is a very limited client base for systems serving files over quad 100 Mbps Ethernet adapters (or even Gigabit Ethernet). Think of how many companies even need that much bandwidth. Some do -- but then we realize that there is no major advantage in using a single monster machine for the task. If a company can afford that much bandwidth, it can also afford the rack space for a cluster of servers. Even if the bandwidth is in-house, it just makes more sense to scale with clusters. The NT camp can argue that NT is more useful at the high end -- but a cluster of FreeBSD or Linux machines running Apache is a heck of a lot cheaper.

Also, try hosting a large number of IP-based domains on IIS. You can't. Then we realize that VBScript (the usual pick with ASP) is only about 60% as fast as mod_perl (yes, I know you can use Perl on IIS as well). Then there's stability. How many times have we had to reboot NT servers or restart the IIS service to get it up and running again? I actually have to run a service to restart IIS when it stops responding -- and even then the machine can crash by running out of resources.

I'm as guilty as the next guy of quoting benchmarks to combat benchmarks -- but seriously, forget them. They are almost always biased. While the Mindcraft tests are valid in proving that Linux has some scaling problems, they really don't translate well to the real world.
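For readers unfamiliar with the "IP-based domains" point above: it refers to Apache's VirtualHost mechanism, which lets one server answer for many domains, each bound to its own IP address. A minimal httpd.conf sketch of IP-based virtual hosting; the addresses, domain names, and paths are placeholders, not details from the post:

```apache
# IP-based virtual hosting: each site gets its own address.
# Addresses, ServerNames, and DocumentRoots below are illustrative.
<VirtualHost 192.168.1.10>
    ServerName www.example-one.com
    DocumentRoot /home/www/example-one
</VirtualHost>

<VirtualHost 192.168.1.11>
    ServerName www.example-two.com
    DocumentRoot /home/www/example-two
</VirtualHost>
```

Adding another hosted domain is just another block like these, which is what makes Apache attractive for mass virtual hosting.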
Just ask Yahoo or any other company using Apache clusters running FreeBSD (which has scaling problems similar to Linux's). What about Amazon.com? They've recently switched to Apache. Don't they have a high volume of traffic as well as dynamic web pages? Will the target audience of these tests ever likely have a chance at building a site of that size? Not likely.

Mindcraft and Microsoft know that they can never include price or stability comparisons, because they would always lose. In so doing, they lose any real-world applicability. It's obvious this is just a blatant attack on Linux -- not a test of an operating system's real-world application. These test results succeed at what they were tailored for: spreading doubt. So even if they raise a valid point about weaknesses in Linux, I question their validity.
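The IIS-restart workaround the poster mentions (a service that restarts IIS when it stops responding) amounts to a polling watchdog. A minimal sketch of that idea follows; the health-check URL, failure threshold, service name, and `net stop`/`net start` commands are all illustrative assumptions, not details from the post:

```python
# Hypothetical watchdog sketch: poll a health-check URL and restart the
# web service after several consecutive failures. All names below are
# placeholders, not taken from the original post.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost/"   # assumed health-check endpoint
FAIL_THRESHOLD = 3                 # restart after this many misses in a row
POLL_SECONDS = 30                  # how often to probe the server


def is_responding(url: str, timeout: float = 5.0) -> bool:
    """Return True if the server answers the health check with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False


def should_restart(consecutive_failures: int, threshold: int = FAIL_THRESHOLD) -> bool:
    """Restart only after repeated failures, to ride out transient blips."""
    return consecutive_failures >= threshold


def restart_service(name: str = "w3svc") -> None:
    # On NT this would be `net stop` / `net start`; shown purely for illustration.
    subprocess.run(["net", "stop", name], check=False)
    subprocess.run(["net", "start", name], check=False)


def watchdog_loop() -> None:
    """Poll forever; reset the failure count whenever the server answers."""
    failures = 0
    while True:
        failures = 0 if is_responding(HEALTH_URL) else failures + 1
        if should_restart(failures):
            restart_service()
            failures = 0
        time.sleep(POLL_SECONDS)
```

Of course, as the poster notes, needing such a watchdog at all is itself the stability complaint: the restart only papers over the resource exhaustion that eventually crashes the machine anyway.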