A late reply to your posting 7655:
"I was wondering if someone out there can help me compare NT to Unix. I would think that UNIX as a program is more simple and efficient. After all, it was written back in the days when computers had more limitations. As this should be the case, how does the simplicity in the UNIX code effect its performance? Can NT ever expect to run as fast as UNIX, or would it take a rewrite to catch up with UNIX? NT was written under less demanding circumstances and more demanding circumstances depending on how you look at it. From the standpoint of efficiency, I can imagine that alot of involved coding went into NT because NT could afford to be more bulky than UNIX. On the other hand, NT has been more demanding on the programmers because they have Bill Gates (who I would think to be rather demanding) waiting for them to finish so they can meet the expected release date. Do you think these factors play a part in determining which operating system is more efficient? What are the implications of these factors on the larger networks in existence?"
NT and Unix cannot be compared without reference to the context in which they are being measured. It is a trivial matter to find cases where NT is better than Unix and vice versa. The question, then, is which contexts will become important in the future and, where the operating system matters to that context, which OS is preferable.
Simple arithmetic, or just a reference to Metcalfe's law, says that the utility of a network is proportional to the square of the number of nodes. Very simply, networking will outstrip the utility of standalone machines. Another analogy is specialisation in economics: the nodes in a network can be dedicated to particular tasks, which is more effective than every node replicating the same functionality.
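To put rough numbers on that claim, here is a small back-of-the-envelope sketch in Python. The per-node and per-connection utility figures are arbitrary assumptions, chosen only to show how quickly the squared term dominates:

    # Compare the standalone value of n machines (linear in n) with the
    # networked value under Metcalfe's law (proportional to n squared).
    # The unit values below are arbitrary illustrative assumptions.
    STANDALONE_VALUE_PER_NODE = 10   # assumed utility of one isolated machine
    NETWORK_VALUE_PER_PAIR = 1       # assumed utility of one node-to-node link

    def standalone_utility(n):
        return STANDALONE_VALUE_PER_NODE * n

    def network_utility(n):
        # n nodes can form n * (n - 1) / 2 distinct connections
        return NETWORK_VALUE_PER_PAIR * n * (n - 1) // 2

    for n in (10, 100, 1000, 10000):
        print(n, standalone_utility(n), network_utility(n))

Even with a generous standalone value per machine, the networked figure overtakes it once n passes a couple of dozen nodes, and it is orders of magnitude larger by the time n reaches the thousands.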
In a network environment, the multiplicity of attached devices brings with it a large number of operating systems and protocols. The alternatives are a Tower of Babel of conflicting protocols, a proprietary standard like NT's, which will only talk to others that speak its protocols (PPTP etc.), or open standards like TCP/IP.
Open standards have economic advantages over proprietary protocols. It is relatively easy for third parties to write to the protocol, which both removes the burden of development from a single source and encourages specialisation of purpose. The danger is that the barriers to market entry are significantly lowered and the products become easily commoditised.
With regard to OS's in a networked environment, there are two basic types: client side and server side. Neither NT nor Unix can function effectively as a real-time OS; they are too large and too slow to handle asynchronous events as effectively as nano-kernel solutions like QNX, which can run in as little as 50K. On the server side, NT will never be able to match Unix at the high end, but that does not mean it cannot be significant in its own right. Because MSFT controls the source code to NT, box manufacturers cannot optimise the kernel to the hardware specification to minimise latency without MSFT's blessing. But for MSFT to allow that would let NT fragment as Unix has in the past, with applications being rewritten for each hardware configuration to take advantage of the hardware's idiosyncrasies. On the other hand, this is exactly Unix's strength in hardware specialisation.
Again, in a network environment specialisation is the key, and this favours Unix as the operating system of choice. This will be all the more so as more devices are attached to the internet as clients, requiring commensurately more powerful servers.