

To: pgerassi who wrote (135158), 5/15/2001 5:41:15 PM
From: tcmay
 
Response on overclocking issues

From: pgerassi

"Intel's own test vectors failed with P3-1.13G. Test vectors are for Manufacturers to get quickly to an opinion that a processor would work in worst case specification conditions. Most test vectors do not cover all possible cases (hard to do with computer complexity now). Linux showed a major fault that the theoretical testing failed to see."

Yes, I mentioned that customers sometimes uncover problems that the vendor missed. The Linux build was such an example.

However, this argues that speed marginalities are hard even for vendors to find--it does NOT argue for the notion that vendors are being too conservative in rated speeds. If anything, the 1.13 GHz experience would tend to make me even _more_ cautious about overclocking, not less cautious.
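
Purely to make the coverage point concrete, here is a rough back-of-the-envelope sketch in Python (the figures are my own assumptions for illustration, not anything from Intel's actual test flow) of why no set of test vectors can come close to exhausting a modern CPU:

# Why test vectors cannot cover every case: a toy estimate.
# All figures below are assumptions for the sake of illustration.

input_bits = 128            # e.g., two 64-bit operands into a single ALU op
combinations = 2 ** input_bits
vectors_per_second = 1e9    # optimistic: one vector per clock at 1 GHz

seconds = combinations / vectors_per_second
years = seconds / (3600 * 24 * 365)

print(f"{combinations:.2e} combinations -> about {years:.2e} years to test exhaustively")
# Roughly 1e22 years -- and that ignores internal pipeline state and the
# voltage/temperature corners, which is why subtle speed paths can slip
# past even the vendor's own vectors.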

"Point 2:

"It has been shown that both Intel and AMD have down binned processors to meet contracts. That is selling processors at below their actual bin because not enough at the requested bin existed. Overclocking is one way to find this out. Celerons were frequently down binned in the old days. AMD has shown this behavior many times before and continues to this day."

This may sometimes happen, but it is neither consistently the case nor predictable. And the sheer competitiveness of the marketplace, especially with both Intel and AMD competing fiercely for the speed crown, suggests strongly--to me, at least--that neither Intel nor AMD is deliberately downbinning their fastest processors and thereby both giving up money and bragging rights.

"Point 3:

"AMD itself puts a blessing on Kryotech's gas cooled (-40C) boxes as they extend their warranty to cover systems sold by Kryoptech and its distributors. Currently, this extends to Tbirds at 1.8GHz. Notice that that is a higher speed than even P4 at this time. ..."

Which says essentially nothing about the issue of overclocking in a system without special refrigeration. That's a separate issue, which I didn't even address in my post. If a vendor warrants an increase in clock speed _with_ this refrigeration, and its testing backs up that confidence, then fine. But I doubt AMD warrants overclocking the Thunderbird absent such special coolers.

"Point 4:

"Lifetime of CPUs is limited more by advances in speed and features than by lifetime concerns. I have seen both Intel and AMD CPUs last over 10 years even in heavy use. I upgrade every 18 months or so. I can afford a CPU failure in 2 years."

This is an unpersuasive point. The likely failure effects of overclocking would be failure to execute some piece of code properly (speed path issues), or rapid failure ("burn-out"), not some reduction in lifetime from, say, 400 years to 300 years. (The actual lifetimes are enormous, as best we can tell from Arrhenius-type calculations of activation energies of various mechanisms.)
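
For anyone curious what an "Arrhenius-type calculation" looks like, here is a minimal sketch in Python. The activation energy and junction temperatures are assumed values I picked for illustration, not figures from Intel or AMD:

import math

# Arrhenius acceleration factor between two junction temperatures:
#   AF = exp( (Ea / k) * (1/T_use - 1/T_stress) )
# All numbers below are illustrative assumptions.

k_eV = 8.617e-5            # Boltzmann constant, eV/K
Ea = 0.7                   # assumed activation energy, eV
T_use = 273.15 + 60        # assumed normal junction temperature, K
T_stress = 273.15 + 85     # assumed hotter (e.g., overclocked) junction temperature, K

af = math.exp((Ea / k_eV) * (1.0 / T_use - 1.0 / T_stress))
print(f"Acceleration factor: {af:.1f}x")
# Comes out to roughly 5-6x here. A 400-year wear-out lifetime would shrink
# to something like 70-80 years -- still vastly longer than any upgrade cycle,
# which is the point: wear-out is not the practical risk; speed-path errors
# and outright burn-out are.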

"Point 5:

"Most systems crash due to software problems and bugs than due to hardware failure. If you leave a Windows platform on for a few months, even idle, doing nothing at all, it crashes. Linux and the major UNIX systems do not show this tendency and run for months on even overclocked hardware."

Orthogonal, and unrelated, to the point about the chip failing to execute code properly. That Windows crashes more often than Linux says nothing very interesting. Ironically, recall that it was a Linux build that crashed the 1.13 GHz processor.

As a Gedankenexperiment, imagine someone had taken a "1 GHz" PIII and "overclocked" it to "1.13 GHz." Had it failed, as the actual 1.13 GHz part certainly did, would this have proved that Linux has more overclocking problems than Windows has?

No, of course not. The issue was the chip failing internally to properly execute spec'ed instructions at the slightly higher speed. That should have been a clear warning signal to all of the overclockers out there.

"Point 6:

"Overclocking determines the margins for a given system. If it can be overclocked by 20%, it will be more stable than a system that doesn't overclock at all. Thus overclocking can be a test vector as well as die temp, HSF used, etc. I have used this method many times before. One system, a 386DX-20, has run for over 12 years before needing a replacement (bad chipset on MB) as a server."

I at first agreed with this point, assuming you meant "If you can overclock by 20% and it all runs fine, then it should have more margin back at the normal speed." This does indeed imply that the system board design, etc., has plenty of margin.

However, it says nothing in particular about whether the chip is stable in all of its subtle patterns at the higher speed.

Again, look to the Gedankenexperiment of doing a mere 13% overclocking of a 1 GHz PIII to 1.13 GHz. The fact that the board and memory and suchlike could be overclocked this way said precisely nothing about subtle errors in the CPU itself--errors which, as it turned out, only showed up at a customer site.
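
To spell out the distinction, here is a small sketch in Python of why a successful 13% overclock on your own workload demonstrates board-level margin but not CPU-level margin (all path timings are made-up numbers for illustration):

# "It overclocks 13% fine" does not mean every internal path has 13% of slack.
# All figures are illustrative assumptions.

rated_ghz = 1.00
overclocked_ghz = 1.13

rated_cycle_ns = 1.0 / rated_ghz        # 1.000 ns budget per cycle at spec
oc_cycle_ns = 1.0 / overclocked_ghz     # ~0.885 ns budget when overclocked

# Suppose the code you happened to run only exercises paths needing 0.85 ns,
# while some rarely-hit path (the one the Linux build found) needs 0.95 ns.
exercised_path_ns = 0.85
worst_path_ns = 0.95

print(f"Exercised paths OK when overclocked: {exercised_path_ns <= oc_cycle_ns}")   # True
print(f"Worst-case path OK when overclocked: {worst_path_ns <= oc_cycle_ns}")       # False
print(f"Worst-case path OK at rated speed:   {worst_path_ns <= rated_cycle_ns}")    # True
# The system "overclocks fine" on your code, yet misbehaves the moment some
# other instruction sequence lights up the slow path.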

--Tim May