Tom,
In my opinion, the real problem w/NT is that it is not optimized for a specific set of hardware. The marketing people think that is just great, but we're talking about OS software here, not applications.
NT scales poorly because of its locking scheme. In addition, its thread libraries are not purely user-space: new pids have to be allocated whenever a new thread is created, and that requires a software trap into kernel space, which is relatively expensive.
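If you want to see the cost yourself, here's a quick sketch of mine (POSIX threads, so it builds on Solaris or Linux rather than NT; the names and loop counts are my own choices, nothing from NT internals) that times a create-and-join cycle. Exact numbers vary by box, but the kernel-trap overhead shows up plainly compared to an ordinary function call:

    /* build: cc -O2 thread_cost.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/time.h>

    #define ITERS 1000

    static void *noop_thread(void *arg) { return arg; }

    static double now_usec(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1e6 + tv.tv_usec;
    }

    int main(void) {
        double start = now_usec();
        for (int i = 0; i < ITERS; i++) {
            pthread_t t;
            /* each create traps into the kernel to set up the thread */
            pthread_create(&t, NULL, noop_thread, NULL);
            pthread_join(t, NULL);
        }
        double elapsed = now_usec() - start;
        printf("avg create+join: %.1f usec\n", elapsed / ITERS);
        return 0;
    }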
Check out Solaris if you want to know how locking should be done in MP systems. One thing to remember: when locking data, you have to be absolutely certain that two separate threads do not write to the same data at the same time. In computing environments, latencies measured in microseconds are too indeterminate to trust timing alone. The locking that really works, like a mutex, relies on the hardware: atomic instructions that guarantee a write either completes or doesn't, with no corruption in between. MSFT can't count on that, because they don't write NT for any specific machine. NT is written to run mostly on PCs, and PCs aren't powerful enough for enterprise systems.
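Here's a little sketch of what I mean (mine, not Solaris source): a spinlock built directly on a hardware atomic instruction. I'm writing it with the modern C11 stdatomic interface, which the compiler lowers to a test-and-set or compare-and-swap on the actual machine. Take out the atomic and both threads can see the lock as free at the same instant, and the counter comes up short:

    /* build: cc -O2 -std=c11 spinlock.c -lpthread */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;
    static long counter = 0;

    static void spin_lock(void) {
        /* atomic test-and-set: only one thread can flip the flag 0 -> 1 */
        while (atomic_flag_test_and_set(&lock))
            ; /* spin until the holder clears it */
    }

    static void spin_unlock(void) {
        atomic_flag_clear(&lock);
    }

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            spin_lock();
            counter++;          /* protected: no two writers at once */
            spin_unlock();
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld (expect 200000)\n", counter);
        return 0;
    }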
The point is, the only companies that have any business writing operating-system software for enterprise systems are the ones that manufacture the hardware to go with it. The assemblers, compilers, and OSs can then be designed specifically for the hardware they run on, and nothing else. Hardware design and implementation is the real key to optimal performance, not software, and as the hardware gets more and more sophisticated, you need software engineers to optimize your OS to take full advantage of it. Small differences in kernel design can make huge price/performance differences.
MSFT doesn't really have the engineering talent to bring it off. That's why NT 5.0 completion dates have slipped again. They have a significant learning curve up in Redmond, and it will take some time before they come up to speed.
cheers,
cherylw