Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: Ali Chen who wrote (74270), 3/13/2002 11:23:15 AM
From: pgerassi
 
Dear Ali:

You are attempting to estimate what happens at a future speed from a group of lower speeds. Would you take just two points, at 1.5 and 1.6, to estimate 2.0? NO! Yet that is exactly what you did when you used 2.0 and 2.2 to estimate 3.0. Then you used a different method to interpolate a data point that lies inside the set of samples you used for the curve fit, AND YOU HAVE THE SHEER AUDACITY TO CLAIM THAT PROVES YOUR ORIGINAL THESIS? I can take any multipoint curve-fitting algorithm, make it fit the data points, and get a 0% error rate for every given data point. That proves nothing other than that the fitting procedure worked.
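To make that concrete, here is a minimal sketch (the clock speeds and scores are made up, purely for illustration): a polynomial of degree n-1 passed through n points reproduces every training point exactly, which says nothing about any point outside the set.

import numpy as np

# Hypothetical clock speeds (GHz) and benchmark scores, chosen for illustration.
clocks = np.array([1.8, 2.0, 2.2, 2.4])
scores = np.array([610.0, 668.0, 721.0, 770.0])

# A degree-(n-1) polynomial through n points hits every training point exactly.
coeffs = np.polyfit(clocks, scores, deg=len(clocks) - 1)
fitted = np.polyval(coeffs, clocks)
print(max(abs(fitted - scores)))   # ~0: a "0% error rate" on the given points

# The same fit evaluated outside the data set is just a guess:
print(np.polyval(coeffs, 3.0))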

The two situations are totally different! One extrapolates to a point far outside the original data set; the other simply interpolates within it. Well, my hindsight is as good as yours.

Look at what THG did when overclocking a NW (Northwood). Performance increased up to about 2.6 GHz and then fell off as the clock went higher. It looked good until it hit some discontinuity, after which it dropped like a rock. A curve fit to any number of points between 2.0 and 2.2 would not have held beyond 2.6 GHz and would be far off by 3.0 GHz.
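Here is a sketch of why a fit on a narrow window cannot see a discontinuity. The performance curve below is invented, only loosely shaped like the THG result described above (not their actual data): linear scaling up to 2.6 GHz, then a steep falloff.

import numpy as np

# Invented performance curve with a knee at 2.6 GHz (illustrative only).
def actual_perf(ghz):
    if ghz <= 2.6:
        return 300.0 * ghz
    return 300.0 * 2.6 - 400.0 * (ghz - 2.6)

# Fit a line using only points in the smooth 2.0-2.2 GHz window.
window = np.linspace(2.0, 2.2, 5)
coeffs = np.polyfit(window, [actual_perf(g) for g in window], deg=1)

for ghz in (2.2, 2.6, 3.0):
    print(f"{ghz} GHz: fit predicts {np.polyval(coeffs, ghz):.0f}, "
          f"actual {actual_perf(ghz):.0f}")
# The fit tracks reality through 2.6 GHz and is far off by 3.0 GHz.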

I used a different method based on SPECint_rate scores, and they show a different picture: they work out to a different score than the same estimate made from SPECint. The difference could be that the granularity of the results is starting to cause problems, and that a mean of 3 runs is not enough to get really good answers. Confidence numbers are totally missing from SPEC submissions. Output checking is not performed either, so a CPU that starts giving nonsense answers when stressed will still pass with flying colors. And the skeptic in me wonders how many platforms were tested until the best single run was submitted. This is done with other benchmarks, such as ScienceMark. What good is a run that yields erroneous results?
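For comparison, here is what a confidence number from three runs could look like. The scores are made up, and the t-based interval is my own sketch, not anything SPEC publishes:

import statistics

# Made-up scores for three runs of one benchmark (SPEC selects the median
# of three runs and reports no spread or confidence figure).
runs = [742.0, 755.0, 731.0]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)            # sample std dev (n-1 denominator)

# Two-sided 95% Student's t critical value for n-1 = 2 degrees of freedom.
T_CRIT_95_DF2 = 4.303
half_width = T_CRIT_95_DF2 * stdev / len(runs) ** 0.5

print(f"mean {mean:.1f}, 95% CI +/- {half_width:.1f}")

With only three runs the interval comes out wide, which is exactly the point: a single reported number hides how noisy the measurement is.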

SPEC is looking for a new set of programs for 2004(?). The reason I and many others do not use the Intel compiler is that for many programs it fails either to compile at all or to produce good working code, and Intel hasn't been able to fix it for those cases. Yet I can take gcc (or many other good commercial compilers, even MS C/C++) and get good working code the first time.

Pete