Technology Stocks : Intel Corporation (INTC)


To: Elmer who wrote (106133) 7/25/2000 11:31:57 AM
From: Road Walker
 
Geez, a Concorde jet just crashed into a hotel in Paris.

John



To: Elmer who wrote (106133) 7/25/2000 1:05:20 PM
From: pgerassi
 
Dear Elmer:

The only way to show this properly is not to use a compiler at all, but hand-coded assembly for the critical sections of code. This can be accomplished simply by removing the SPEC benchmarks' artificial (and widely abused) requirement that a compiler must be used to generate the code.

IMHO, assembly will show the true maximum performance each architecture is able to produce. Because SPEC is a fixed target and compilers are widely abused, the benchmark becomes more meaningless as time passes. SPEC results on the exact same hardware change by as much as 20% depending on the compiler used. That is clearly a compiler battle, not an architecture battle.
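To make the compiler-variance point concrete, here is a minimal sketch of my own, not anything from SPEC: the same C source built twice with different switches and timed on the same machine. The kernel and the flags are assumptions chosen only for illustration; the actual spread depends entirely on the compiler and the hardware.

/* harness.c -- same source, different switches, same machine:
 *   cc -O1 harness.c -o plain && ./plain
 *   cc -O3 -funroll-loops harness.c -o tuned && ./tuned
 */
#include <stdio.h>
#include <time.h>

#define N    1000000
#define REPS 100

static double a[N], b[N];

int main(void)
{
    for (int i = 0; i < N; i++) { a[i] = i * 0.5; b[i] = i * 0.25; }

    clock_t t0 = clock();
    double sum = 0.0;
    for (int r = 0; r < REPS; r++)
        for (int i = 0; i < N; i++)
            sum += a[i] * b[i];
    clock_t t1 = clock();

    /* Print the sum so the optimizer cannot delete the loop entirely. */
    printf("sum=%g  time=%.2fs\n", sum,
           (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}

Whatever gap the two builds show, the silicon underneath is identical; only the compiler changed.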

Under SPEC's original goals, the compiler was expected to work in a "throw the source in, and out pops the running code" fashion. It now takes so much time to play with the "switches" that the task at hand could have been hand-assembled from scratch in the same time. The point was supposed to be that the compiler is a great time saver. SPEC was never meant to compare the underlying power of a system, but the implementation of the system.

The plan was this: you get the software for the task at hand (usually source built on some standard library), buy each competitor's system, compile the task with each system's default compiler, run the generated code, and then decide which system did your task fastest and with the least time, trouble, and money.

The current requirements have let the competitors fight over what can be done at each of these stages, and over who has the clout to push their advantages through. That has made the original goals totally meaningless. A difference of 20% now just means the systems are comparable.

The only way I see to get around the "cheating" that makes this benchmark meaningless is to change the environment. Here is my suggestion. We stop dictating how something is done. We define the tasks to be performed, the data to be used in performing them, and the correct results (given infinite precision). The source used, the generated code, the time the run took, the output it returned, some measurement of how closely the results match the correct ones, and all of the environment needed to duplicate the run are then returned to a central depository for validation.
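A minimal sketch, in C, of what one such submission record might look like; the field names and layout are my own assumptions, not part of any existing benchmark:

/* One record returned to the central depository per run. */
struct bench_submission {
    const char *task_id;         /* which defined task was performed */
    const char *source_used;     /* the source actually compiled */
    const char *generated_code;  /* the binary/assembly actually run */
    const char *output;          /* the results the run returned */
    double      run_seconds;     /* how long the run took */
    double      result_error;    /* how far the output falls from the
                                    infinite-precision correct answer */
    const char *environment;     /* everything needed to duplicate the
                                    run: hardware, OS, compiler, switches */
};

With all of that on file, any published number could be re-derived, or the whole run repeated, by anybody.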

Now everyone knows what was done, how it was done, and how good the results were. The public can drill down to whatever level they desire to compare each system on the things they care about. A single number could be produced by a database query against agreed-upon standard criteria (for example: each task weighted equally, the overall quality/correctness number carried to 4 significant digits, and the best time taken for each different CPU). This would be the ideal benchmark.
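As one hypothetical way to compute that single number under the equal-weight example, a geometric mean of per-task speedups over a reference machine would do. The reference-machine normalization is borrowed from how SPEC itself reports ratios, not from the proposal above, and the function name is my own:

#include <math.h>   /* link with -lm */

/* Equal-weight composite: geometric mean of speedups vs. a reference. */
double composite_score(const double *ref_secs, const double *best_secs,
                       int ntasks)
{
    double log_sum = 0.0;
    for (int i = 0; i < ntasks; i++)
        log_sum += log(ref_secs[i] / best_secs[i]);
    return exp(log_sum / ntasks);
}

Different visitors could plug different weights into the same query and get the one number that matters to them.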

Legacy code is what keeps the x86 ascendant. If it were not for legacy code, we would all have been using cheap RISC machines long ago. So by that yardstick, we agree: AMD beats Intel.

Pete