Technology Stocks : Intel Corporation (INTC)


To: pgerassi who wrote (106863), 8/3/2000 1:24:36 PM
From: Elmer
 
Re: "That is just the point. Intel is trying to go to the bleeding edge of performance. The models were not tested against that assumption. The Intel "Model" is pushing the limit of voltage using new low cap dielectrics and higher dv/ds due to thin thicknesses. Just because a model works with tried and true silicon dioxides at large thicknesses (about 100A) does not mean it works with thicknesses of 20A and exotic materials. At the ranges discussed, quantum effects begin to take precedence. Therefore the old "model" may no longer be accurate. The original model does not take into account speed loss but absolute failure. A failure probability of 0.05% would assume that your yield would have to be 99.95% and you know that even on mature processes, yield is not 99.95%"

No, Pete, you misunderstood what I wrote. The 1.133GHz device uses the same process as the other devices; it does not use a different oxide thickness, so the model is valid. A voltage increase would affect long-term reliability, but not short-term reliability to any significant extent. Burned-in units would already have been stressed beyond the 1.8-volt spec for the 1.133GHz device, so early-life failures would have been screened out to a confidence level acceptable to customers.

The 0.05% number is the fraction of devices that test as good yet fail in the hands of customers. It is not a test-yield number but a measure of defective devices shipped to customers, "defective" meaning a device with a manufacturing flaw that causes it to function differently from the design's intent or to fail before reaching its expected lifetime. Nobody ships 100% perfect material. Typically, ASIC vendors ship ~5-10K defective units per million, or about 0.5-1%. Processor vendors ship far fewer defective units because they have more resources to test the parts, and for the price they pay, customers expect higher quality. Even AMD does pretty darn well in the quality department.

The slowing of a part over time is the result of the hot-electron effect, and it does not appear within a few minutes or days, but over much longer timespans. All production material carries a guardband to compensate for this; it's one of the reasons many parts seem overclockable. They had better be, or they won't run at their rated speed over time. Burn-in experiments would have been done at elevated temperature and voltage, extended over thousands of hours, and would be fully applicable to a 1.133GHz device.
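The arithmetic above can be sketched in a few lines. This is a minimal illustration, not anything from the post itself: the 0.05% figure converts to 500 DPM (defective parts per million), versus the ~5-10K DPM quoted for typical ASIC vendors. The burn-in acceleration factor uses a textbook Arrhenius model; the activation energy and the field/stress temperatures are illustrative assumptions, not numbers anyone in the thread supplied.

```python
import math

# Shipped-defect rate quoted in the post: 0.05% of devices that test good
# yet fail in customers' hands, expressed as DPM (defective parts per million).
shipped_defect_rate = 0.0005
dpm = shipped_defect_rate * 1_000_000  # 500 DPM

# Typical ASIC outgoing quality per the post: ~5,000-10,000 DPM, i.e. 0.5-1%.
asic_dpm_range = (5_000, 10_000)

# Hypothetical Arrhenius acceleration factor for burn-in: stressing parts at
# elevated junction temperature makes thousands of hours of burn-in stand in
# for much longer field life. Ea and both temperatures are assumed values.
k = 8.617e-5                       # Boltzmann constant, eV/K
Ea = 0.7                           # assumed activation energy, eV
T_use, T_stress = 328.0, 398.0     # ~55 C field vs ~125 C burn-in, in kelvin
af = math.exp((Ea / k) * (1.0 / T_use - 1.0 / T_stress))

print(f"shipped-defect rate: {dpm:.0f} DPM")
print(f"thermal acceleration factor: ~{af:.0f}x")
```

With these assumed numbers the acceleration factor comes out to a few tens of times, which is why a burn-in run of "1000s of hours" at elevated stress can credibly screen for failures that would otherwise take years to surface in the field.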

Now take this into consideration and consider the possibility that Tom screwed up.

EP



To: pgerassi who wrote (106863), 8/3/2000 1:57:20 PM
From: Paul Engel
 
Peter Principle - Re: "AMD has many reported stable overclocks of 1100 MHz (10%) thus a 1100 MHz speed grade is a far lesser risk."

Oh - like this one!

anandtech.com

The overclocked 1.1GHz Thunderbird would not run reliably enough under Windows 2000 to obtain any benchmark numbers from.

Paul