Intel and MSFT trouble might be related?
Intel seems to have had some trouble lately. Some say it's because processors don't matter any more, because RAM is the bottleneck. Interesting point. Read also:
aceshardware.com
I once took a 200MB database from a Microsoft SQL Server and stored the data in an Access database. That reduced the data to 30MB. I was quite surprised. Then I exported the data to ASCII: 5MB. Then I zipped it: 1MB. Then I started to wonder what MS SQL Server does to the data... The database was not specially selected for this experiment.
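The last two steps of the experiment (ASCII export, then zip) are easy to reproduce. Here is a minimal sketch using made-up tabular data, since the original database isn't available; the row layout and the 100,000-row count are my own assumptions, so the exact ratio will differ from the 5:1 I saw:

```python
import zlib

# Hypothetical table: 100,000 rows of short comma-separated fields,
# roughly imitating a typical business database export.
rows = [f"{i},customer{i % 1000},2000-01-{i % 28 + 1:02d},{i * 3 % 500}\n"
        for i in range(100_000)]
ascii_dump = "".join(rows).encode("ascii")

# zlib uses the same DEFLATE algorithm as zip.
compressed = zlib.compress(ascii_dump, level=9)

print(f"ASCII export: {len(ascii_dump)} bytes")
print(f"compressed:   {len(compressed)} bytes")
print(f"ratio:        {len(ascii_dump) / len(compressed):.1f}x")
```

The point is just that plain-text business data is highly redundant, so a general-purpose compressor removes most of it; whatever overhead MS SQL Server adds sits on top of that already-inflated representation.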
If RAM is the bottleneck of a modern computer, then the main performance increase should come from reducing data size and code size.
The Linux OS can boot from a floppy disk and works well in 4MB of RAM. Windows 2000 doesn't run well in 64MB of RAM. The next version of Linux does NOT use more memory. Windows 2000 uses much more memory than Windows NT.
In all the years that MSFT has existed, code size has increased, not decreased. With MSFT's COM architecture, the code size of a program increases if just one COM object it uses increases its code size. So all programs are bound to grow. In Linux, on the other hand, programs normally link the needed modules statically into the program, making the resulting total code size smaller, especially when the compiler and linker strip out unused parts of the code. This is not possible with COM, because a COM object has to ship all the code needed to implement its published API, whether the caller uses it or not.
In the long run, given the current trend in hardware, Windows systems will become slower and slower. Linux systems will too, but not as quickly as Windows. The longer we wait, the bigger the performance benefit of Linux will be.
Any comments on this are appreciated.