To: pgerassi who wrote (103721) 10/26/2003 11:46:10 PM
From: Pravin Kamdar

Pete,

"How many points are used in the coverage ranges? 2, 3, 5, 10 or more, holding all other parameters stable?"

First of all, all ranges would be covered. For each of the p- and n-type models, there would be a bare minimum of four models: one for minimum length and minimum width, one for minimum width and all longer lengths, one for minimum length and longer widths, and one for longer widths and longer lengths. More subdivisions near the minimum dimensions may be needed. If analog functions are required on chip (especially any with cascoded devices), there may be separate low-threshold devices with their own model sets, but I doubt that their high-speed logic process has these. Additional model sets may be required for different temperature regions. In addition, high-current and low-current models (representative of 3-sigma process variations) would be supplied for worst-case analysis.

"Did they check all transistor combinations or some more limited set?"

I don't know what Intel "did," but most attention would be on minimum-length and small-width devices (for obvious reasons). Longer-length and wider-width regions will scale accurately to infinity with the L and W parameters. The parameter-extraction software will aid in determining where region boundaries need to be defined to remain within a desired error band.

"And if you do not see a minor leakage mode in 130nm, do they test for it in 90nm? Or do they assume it will fit in the curve best fit from 1um down to 130nm and extend the results to 90nm? Did they use the same successful model at 130nm and plug in the numbers for 90nm?"

No way. Nothing is extrapolated between processes.
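To make the four-model scheme concrete, here is a minimal sketch of how a simulator might pick a model bin from drawn L and W. The threshold values, the `near` factor, and the bin names are all my own illustrative assumptions, not anything from Intel's or BSIM's actual binning parameters:

```python
# Hypothetical sketch of the four-bin model selection described above.
# LMIN/WMIN values and bin names are illustrative assumptions only.

LMIN = 130e-9   # minimum drawn gate length (m), illustrative
WMIN = 200e-9   # minimum drawn width (m), illustrative

def select_model_bin(L, W, near=1.5):
    """Pick a model bin: devices within `near` x the minimum dimension
    use the 'minimum' bin for that axis; longer/wider devices fall in
    a bin whose equations scale accurately with the L and W parameters."""
    short = L < near * LMIN
    narrow = W < near * WMIN
    if short and narrow:
        return "bin_minL_minW"    # most attention paid here
    if short:
        return "bin_minL_wideW"
    if narrow:
        return "bin_longL_minW"
    return "bin_longL_wideW"

print(select_model_bin(130e-9, 200e-9))  # bin_minL_minW
print(select_model_bin(1e-6, 10e-6))     # bin_longL_wideW
```

In a real flow the extraction software would choose the region boundaries (the `near` factor here) to keep each bin within the desired error band, and SPICE-style model cards carry them as LMIN/LMAX/WMIN/WMAX ranges.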
The models used (BSIM version 4; maybe higher now) are based on predictive device-physics equations, so they can be used for a sneak peek at finer-geometry devices, but once a process is qualified and in use, it would have its own accurate models extracted from special test chips. Adding something like strained silicon makes prior model sets completely useless.

Of course, leakage is not something they are overlooking. It is the biggest problem going forward. Off-currents as a function of device size and temperature would be characterized with a fine-tooth comb. I was thinking that some extreme hot spots might be putting some devices into temperature regions that were not adequately modeled. But that is probably not it. It's probably some other failure mechanism that they did not anticipate. Or, as Otellini said in the CC, maybe there is nothing wrong with the process, and they just delayed it to add some stuff to stop Athlon64 (as Kap and Dan Niles have suggested).

Pravin
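P.S. A minimal sketch of why off-current has to be characterized over temperature: subthreshold leakage grows roughly exponentially as kT/q rises and the threshold voltage drops. Every constant below (prefactor, slope factor, Vth and its temperature coefficient) is a textbook-style assumption for illustration, not an extracted model parameter:

```python
import math

# Illustrative subthreshold off-current vs. temperature.
# All constants are assumed, generic values; not extracted parameters.

K = 8.617e-5      # Boltzmann constant (eV/K)
I0 = 1e-7         # current prefactor (A), illustrative
N = 1.5           # subthreshold slope factor, illustrative
VTH0 = 0.35       # threshold voltage at 300 K (V), illustrative
TCV = -0.7e-3     # Vth temperature coefficient (V/K), illustrative

def i_off(T):
    """Rough off-current at Vgs = 0 for junction temperature T in kelvin."""
    vth = VTH0 + TCV * (T - 300.0)          # Vth falls as T rises
    return I0 * math.exp(-vth / (N * K * T))  # exponential in -Vth/(n*kT/q)

for T in (300, 350, 400):
    print(f"{T} K: {i_off(T):.3e} A")
```

With these assumed numbers the leakage rises by more than an order of magnitude from 300 K to 400 K, which is why a hot spot pushing devices outside the modeled temperature range would be a plausible (if, as I said, probably not the actual) failure mechanism.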