To: Petz who wrote (99569) 3/23/2000 11:19:00 PM
From: THE WATSONYOUTH
Re; "A while back I proposed a theory that Intel's CuMines suffer from the same yield problem as AMD's K6-3. Namely, that too much leakage, especially in the cache transistors, resulted in excess noise, which meant that switching speeds had to be reduced. (The K6-3 never got above 450 MHz and the vast majority only made it to 400 MHz. Also, it used a 2.4v power supply vs. 2.2v for the K6-2. Higher Voltage=more noise immunity.) The notched gate on the CuMine may increase the leakage current. Others have already reported that, for example, current consumption of the CuMine in its "power-saving" modes is much larger than the 0.25 non-notched PeeWeeIII. Some current saving modes do not even work. So cutting the power to half the cache could reduce the noise and allow the CuMine 128K to run faster than the 256K version. If these Celeron III's, or whatever they'll be called, overclock a lot better than the CuMine, it may confirm the theory outlined above." I think AMD was forced to run extremely short channel lengths and low Vts on the K6-3 to try to make up for the short pipeline design which could not achieve high MHz. Whether this caused additional problems in the SRAM array, I don't know. Regarding CuMmine power saving modes, there is a clear difference between the .25um and .18um generations. As Vcc drops from generation to generation, the device threshold voltages must scale very much lower as well to achieve comparable current overdrives. It has been generally impossible to scale to lower threshold voltages (to achieve targeted performance increases) while keeping the device off current at minimum channel length the same from generation to generation. So for instance, the off current at minimum channel length for the .25um generation (2V) may have been 1-2 nA/um of channel width. For the .18um generation (1.7V) the threshold voltage may have had to be 80mv lower to achieve an acceptable performance increase even at much shorter channel lengths. This may translate into 10X higher off current at minimum channel length (10-20nA/um of channel width) Because of this, the deep sleep modes disappear. I don't think the disabling of half the cache is any more than part of Intel's segmentation strategy. These chips will certainly not be very high MHz (below 850MHz) and as such will not have particularly short channel lengths or high device off currents. So I guess it's just easier for Intel to disable half the cache then design a new 128K CuMine. The 128K in .18um groundrules probably only costs them about 12mm2 of silicon. I doubt the notched gates play any role in all this. But, then again, only Intel really knows. just my take, THE WATSONYOUTH