Technology Stocks : Advanced Micro Devices - Moderated (AMD)

To: Ali Chen who wrote (74175), 3/11/2002 11:40:42 AM
From: pgerassi
 
Dear Ali:

How many texts have you read? It must be a very small number, and you must have forgotten much of them since then. P1 = P(x1, y1), and the same goes for P2 and P3 (the target performance you wish to project at the new clock). The basis for the projection is the two sample frequencies and their performance ranges.
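
To make that concrete, here is a rough sketch in Python of projecting a score at a new clock from two (frequency, performance) sample points. The numbers are invented placeholders, not actual SPEC results, and the straight-line extrapolation is only the general idea, not necessarily the exact fit you used.

# Two measured points (clock in GHz, SPECint score); numbers are made up.
f1, p1 = 1.8, 570.0
f2, p2 = 2.0, 627.0

slope = (p2 - p1) / (f2 - f1)   # score gained per GHz between the two samples
f3 = 2.4                        # target clock to project to
p3 = p2 + slope * (f3 - f2)     # naive straight-line extrapolation
print(f"projected score at {f3} GHz: {p3:.0f}")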

As to the std. dev. not being small, look at two runs using the same hardware, software, and programs, assuming the disclosures were correct. Take the 2.0 GHz Northwood and the January and February tests of the last SPECint program, twolf. The score varies by 7 units out of about 627. That's a std. dev. of about 0.8%, and since the overall score is the geometric mean of the per-program base scores, the overall std. dev. comes out to roughly the average of the individual std. devs. One program, vortex, has a std. dev. of about 4%, and the total works out to around 1%. Don't believe it? Take the lowest score for each test between the two runs (out of the six total) and compute the composite from those: (Prod(scores))^(1/N), where N is the number of programs. Do the same with the highest score for each test. That's a pretty big range, given that to stay within 3 sigma for your estimate you must be below 0.3% variation.
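
If you want to check that yourself, here is a small Python sketch of the lowest/highest-per-test exercise. The per-program ratios below are invented placeholders standing in for the January and February submissions, not the real scores; the point is only how the worst-case and best-case geometric means bracket the composite.

from math import prod

run_jan = [620.0, 605.0, 640.0, 598.0, 615.0, 633.0]   # hypothetical per-program ratios
run_feb = [627.0, 630.0, 612.0, 601.0, 622.0, 609.0]   # hypothetical per-program ratios

def geomean(xs):
    # the overall SPEC number is the geometric mean of the per-program ratios
    return prod(xs) ** (1.0 / len(xs))

low  = geomean([min(a, b) for a, b in zip(run_jan, run_feb)])   # worst case per test
high = geomean([max(a, b) for a, b in zip(run_jan, run_feb)])   # best case per test
print(f"composite range: {low:.1f} .. {high:.1f} ({(high - low) / low * 100:.1f}% spread)")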

You are wrong about using seconds instead of the ratios. A 1-second difference for a program that runs 100 seconds is much greater, relatively, than a 1-second difference in one that runs 1000 seconds. Vortex has a 13-second range on a test that runs 180 seconds on average. That's over 6%! The weights have to change for each program, since the base system doesn't run each one for the same amount of time. Using raw seconds is therefore stupid.
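
A quick Python illustration of the point, with made-up reference and run times (not actual SPEC CPU2000 values): the same 1-second swing moves the per-program ratio by about 1% on the short test but only about 0.1% on the long one, which is exactly what raw seconds hide.

# Same 1-second slowdown applied to a short and a long test.
tests = [("short test", 100.0, 100.0), ("long test", 1000.0, 1000.0)]  # (name, ref secs, run secs)
for name, ref, run in tests:
    base_ratio  = ref / run            # SPEC-style ratio: reference time / measured time
    swung_ratio = ref / (run + 1.0)    # after a 1-second slowdown
    rel_change  = (base_ratio - swung_ratio) / base_ratio * 100
    print(f"{name}: a 1 s swing moves the ratio by {rel_change:.2f}%")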

And you have yet to take into account individual program variance. I looked into the variances, and they swamp your projections. All you had to do was look at SPECint_rate to see this. For a 10% boost in clock, performance went up just 4.5%, a scaling factor of 0.45. Thus, a 50% boost will give less than 22.5%, not your 32%.
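
Spelled out in a few lines of Python, using the SPECint_rate numbers above:

measured_clock_boost = 0.10    # 10% clock boost
measured_perf_gain   = 0.045   # produced only a 4.5% SPECint_rate gain
scaling_factor = measured_perf_gain / measured_clock_boost   # 0.45

projected_clock_boost = 0.50   # the 50% clock boost in question
projected_gain = scaling_factor * projected_clock_boost      # 0.225, i.e. under 22.5%
print(f"projected gain for a 50% clock boost: {projected_gain:.1%}")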

In scientific work one uses more than one model to validate that the method gives reasonable answers. NOAA uses 7 different models to verify that its forecasts are reasonable. You failed to do that. I can see why: the work increases by at least one order of magnitude, and more likely two.

There is a boundary limit. It exists. SPEC allows enough workarounds that it is hard to see, but normal server-type loads run into it much sooner.

Pete