To: Ali Chen who wrote (101413) 4/3/2000 3:09:00 PM From: pgerassi
Dear Ali Chen: I understand this. I have written software for all flavors of operating systems, from embedded up to MVS/OS (college days), and on all kinds of computers, from an old IBM 1400 to an Olivetti programmable calculator to the latest HP 9000T Super Server. He may not do "performance tuning" and "profiling", but that does not change the fact that BAD software can make any computer look bad. I once took a program that took 24 hours on an IBM mainframe and reworked it to run in 5 minutes on the same machine.

I know that most scientific applications need double precision to maintain the significance of their results. When one loses all significance, all output is garbage. It is very easy to make garbage; it's a lot harder to give valid results. If SIMD using single precision is used, the output can be invalid. But, "it will have a better benchmark". I do not know if the SPEC organization checks for this problem. Most users of these benchmarks do not.

From the 3DNow! web pages, it was found that the algorithm SETI used was bottlenecked not by the FFT calculation itself but by moving data in and out of the CPU (and a 256K cache does not help). When they changed to a new algorithm that was more efficient in its use of local memory accesses, the run times more than halved.

Most scientific users rank reliability as a higher goal than speed. If they really want to be fast, they usually compile in C, profile some smaller runs, hand-tune the most time-consuming functions in assembly, and leave the rest alone. They also make sure that any output is correct and has significance. These users know from experience which CPUs work best for "their" problem.

Pete