"Anyway, in fairness to the P4, all these benchmarks we are seeing now DO NOT indicate the performance which is possible from this CPU. "
Correct, and this is an old, old argument: should you concentrate only on the benchmarks that play to a processor's strengths, or use something that reflects the way the processor is actually being used?
Now, SSE2 is new, it's hot, it's wow, but will it actually be a factor in the way computers are used? This is not to revive the discussion of why anyone needs anything faster than, oh, a 300MHz machine. I am asking how soon, and to what extent, algorithms will be changed so that compilers can extract enough parallelism to make SSE2 noticeable. And if those changes are made, will they hurt legacy architectures?

It is my perception that to truly take advantage of SSE2 in anything but a very small subset of programs, there will need to be fundamental changes to the way programs are structured. And that is the crux of the problem. There is a learning curve associated with any change in programming style, and a lot of companies are going to decide it just isn't worth the effort. Just look at how long 16-bit programs hung around.
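To make that concrete, here is a minimal sketch of the kind of restructuring I mean, written against Intel's SSE2 intrinsics (the emmintrin.h header). The struct names and the scale_soa function are mine, purely for illustration. SSE2 operates on two doubles packed into one 128-bit register, so the usual interleaved array-of-structures layout has to be split into parallel arrays (structure-of-arrays) before a packed load picks up two useful values at once:

    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Typical layout today: x and y interleaved in memory, so a
       128-bit load grabs one x and one y -- useless for packed math
       without extra shuffling. */
    struct point_aos { double x, y; };

    /* Restructured layout: all x's contiguous, all y's contiguous. */
    struct points_soa {
        double *x;
        double *y;
    };

    /* Scale every point by a constant, two doubles per instruction.
       Assumes n is even and the arrays are 16-byte aligned. */
    void scale_soa(struct points_soa *p, double s, int n)
    {
        int i;
        __m128d vs = _mm_set1_pd(s);   /* broadcast s into both lanes */
        for (i = 0; i < n; i += 2) {
            __m128d vx = _mm_load_pd(&p->x[i]);  /* two x's, one load */
            __m128d vy = _mm_load_pd(&p->y[i]);
            _mm_store_pd(&p->x[i], _mm_mul_pd(vx, vs));
            _mm_store_pd(&p->y[i], _mm_mul_pd(vy, vs));
        }
    }

The math is trivial; the point is that the layout change ripples through every line of code that touches the data, which is exactly why I expect many shops to decide the rewrite isn't worth it.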