Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: ted burton who wrote (35725), 4/14/2001 9:36:57 AM
From: dale_laroy
 
Ted Burton, "I am an Intel employee, but the views expressed above are my own, and are not necessarily shared by my employer"

Welcome to the thread. Although GhostWhisperer over at Ace's Hardware will not reveal his actual employer, I myself and many others suspect that he is an Intel employee. He is one of my favorite posters. BTW, I agree with him more than I disagree.

Your points are well taken, but I do have to wonder what this says about the validity of the 2.0 GHz demo last year. I also wonder about the implications of this thermal management technique for a mobile variant of the P4. And I have to wonder why Intel chose a 50% duty cycle @ 1.5 GHz until the temperature drops back within the acceptable limit, rather than dropping the clock to 1.0 GHz and lowering the voltage for a 10% greater reduction in power consumption.
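
Back of the envelope, assuming dynamic power scales roughly with f x V^2 (the voltages in this sketch are only my guesses, not Intel specs):

# Compare average power of 50% duty cycling at 1.5 GHz against running
# continuously at 1.0 GHz with a lower core voltage. Power ~ f * V^2.
# The 1.75 V nominal and 1.35 V reduced figures are illustrative guesses.

def relative_power(freq_ghz, volts, duty=1.0, f_ref=1.5, v_ref=1.75):
    """Average power relative to full-speed operation at nominal voltage."""
    return duty * (freq_ghz / f_ref) * (volts / v_ref) ** 2

throttled = relative_power(1.5, 1.75, duty=0.5)   # 50% duty cycle, nominal voltage
downclocked = relative_power(1.0, 1.35)           # 2/3 clock at a guessed lower voltage

print(f"50% duty @ 1.5 GHz: {throttled:.2f} of full power")
print(f"1.0 GHz @ 1.35 V:   {downclocked:.2f} of full power")

With those guessed voltages the continuous 1.0 GHz case comes out around 0.40 of full power versus 0.50 for duty cycling, which is where a figure like 10% would come from; the real answer depends on what voltage a 1.0 GHz part actually needs.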

Anyway, I do not discount the P4's ability to compete in the market, and I doubt that many purchasers will truly be disappointed with its performance, since most of them could not tell the difference between a 667 MHz P-III and a 1.333 GHz Athlon. I just do not see how the P4 can do any inherent damage to AMD prior to the second half of 2002. Any damage AMD suffers will be from self-inflicted wounds.



To: ted burton who wrote (35725), 4/14/2001 10:42:51 AM
From: Dan3
 
Re: Asserting that many, if not all, power hungry apps will blow away the TDP, & suffer a 50% performance loss! Doesn't anyone wonder why he doesn't offer any concrete examples of this truly nasty behavior?
Answer... There are no nasty examples.


The biggest handicap the Athlon core has had is its higher power consumption relative to the PIII core. This has limited the use of Athlon-core processors in small form factor business desktops, "appliance" PCs, and notebooks. It has also raised platform costs, since Athlon-core systems need bigger power supplies and cases with better airflow. Intel's official power consumption numbers indicated that this situation would continue, if to a lesser degree, as Intel transitioned to the P4.

The article Peter Luc posted makes it clear that this is not the case. Despite the headline power consumption figures Intel has featured, P4's power consumption and cooling requirements turn out to be nearly identical to Athlon's, perhaps even more challenging. This is a very significant development because it removes one of the barriers to Athlon penetrating the corporate market.

Performance hits due to power throttling have been observed with PIII chips. The large heatsinks, side-vented cases, and restricted ambient environment used in P4 benchmark systems have not shown such effects (that I'm aware of, at least). But if Intel ever starts selling large numbers of these systems into the mainstream, and less expensive cases start being used, pushed into corners, and allowed to accumulate dust and lint around their heatsinks (which is what happens to many PCs), these effects could become quite common.

Regards,

Dan



To: ted burton who wrote (35725), 4/14/2001 12:55:30 PM
From: pgerassi
 
Dear Ted Burton:

You are making some assumptions that on their face are probably not true. One, idle power is not 0W, but somewhere between 5 and 10W in stop grant mode. Outside that mode, which is not instantly available and is not reached in most apps, idle is more like 20W to 30W. Most apps draw somewhere between 33% and 50% of the way from idle power to max power, which works out to about 40W to 50W. If someone actually designed the cooling to the 54.7W TDP, then something that required serious number crunching, like a game on a non-T&L card or a simulation, would cause inexplicable speed losses.
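
A quick check of that arithmetic, using the 73.9W max power figure quoted for the 1.5 GHz part:

# Typical application power, estimated as idle plus 33%-50% of the span
# between idle and max power. The idle range (20-30W outside stop grant)
# and the 73.9W max power figure are the numbers from this discussion.

def typical_power(idle_w, max_w, fraction):
    return idle_w + fraction * (max_w - idle_w)

for idle in (20.0, 30.0):
    for frac in (0.33, 0.50):
        watts = typical_power(idle, 73.9, frac)
        print(f"idle {idle:.0f}W, {frac:.0%} toward 73.9W max: about {watts:.0f}W")

which lands squarely in the 40W to 50W range above.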

Another assumption is that the guard bands are all at the worst-case end. This is not true, as Intel quotes max power at nominal voltage, not max voltage. Furthermore, you assume the thermal limiting circuit kicks in at the low end of its range, while the spec would put it at the high end. Thus, the lower performance would occur even sooner.

Given the above, if a designer were to build to the quoted spec, the P4 would cycle under most typical power-user loads, which is exactly the market this CPU is targeted at. Because of the reduced maximum allowed temperature (the point where duty cycling starts), the designer has to use a larger heatsink with a lower C/W figure. But the P4 is already at the high-performance end of heatsinks, those with low C/W specs. It is much harder to gain a further 1/4 reduction in C/W from there, costing perhaps twice as much to do it. Even that only raises the dissipatable power from 54.7W to 72.9W, so some cycling still occurs even then.

To figure a heatsink's required C/W, you need a few pieces of information: the ambient temperature (the air temperature at the inlet of the HSF), the maximum desired temperature of the chip (measured at the surface, or in the P4's case, the maximum temperature of the heat spreader), and the maximum thermal power to be removed. You divide the difference between the ambient and spreader temperatures by the thermal power, and that gives you the maximum allowed C/W of your HSF. An economy HSF gets around .55, a premium HSF gets .4, a great HSF gets .25, and anything below that is just superb.

Take an example: a P3 at 1.13GHz needs a maximum temperature of no more than 55C. Assume the case air is at 40C and the maximum thermal power is 35W. Dividing 15C by 35W, we get about .43, thus requiring at least a premium HSF, and given the need for a 10% guard band, a great HSF. Now use 54.7W for the P4 and you get .27, and at 73.9W you need about .20. This pushes you toward a blown case to bring ambient down to, say, 30C, where you can get by with an HSF of about .33 (still expensive, but not over $50). A 1.33GHz Tbird, which has a maximum temperature of 95C and a TDP of 72W, would need an HSF with a C/W of less than .76, far less demanding than the P4, though you need to take the smaller die size into account (reducing this to, say, .5 would be prudent).
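
Here is that arithmetic laid out, using the temperatures and power figures above. The P4 rows reuse the same 55C limit / 15C budget the numbers above imply; that limit is my working assumption, not a quoted Intel spec.

# Required heatsink thermal resistance, per the rule above:
#   C/W <= (T_max - T_ambient) / thermal power

def required_cw(t_max_c, t_ambient_c, power_w):
    """Maximum allowed heatsink C/W for a given temperature budget and power."""
    return (t_max_c - t_ambient_c) / power_w

cases = [
    ("P3 1.13GHz at 35W, 40C case", 55, 40, 35.0),
    ("P4 at 54.7W TDP, 40C case", 55, 40, 54.7),
    ("P4 at 73.9W max, 40C case", 55, 40, 73.9),
    ("P4 at 73.9W max, 30C blown case", 55, 30, 73.9),
    ("Tbird 1.33GHz at 72W, 40C case", 95, 40, 72.0),
]

for name, t_max, t_amb, power in cases:
    print(f"{name:32s}: C/W <= {required_cw(t_max, t_amb, power):.3f}")

Knock another 10% off each of these for a guard band and you can see why the P4 numbers push into exotic heatsink territory.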

You can see that small changes in TDP force large changes in cooling. All of these CPUs, the Athlon included, require cooling solutions that until a few years ago were associated with supercomputers (and they outperform those older supercomputers: a Cray 1-XMP did 22 Mflops double precision sequentially and 0.75 Gflops peak, came with 512MB of memory standard, and was cooled with LFTE).

This is probably as much an argument with Intel's policy for generating the P4's specifications as with its behavior in typical systems. If Intel had said that the TDP should be at least 73W for a 1.5GHz P4 if you do not want it to behave like a 750MHz dog, there would be far less to complain about. But Intel buries that information in the fine print, and the P4 was marketed as something it is not (a lower power CPU).

FYI: 40C is 104F (not atypical for a packed economy case with no auxiliary fans) and 55C is 131F (not an uncommon ambient air temperature in the desert or mountains in full sun at noon during summer).

Pete



To: ted burton who wrote (35725), 4/14/2001 2:54:26 PM
From: revision1
 
Hmmm...
"The writer makes a huge issue out of the apparent disparity between the 54.7W thermal design power (TDP), and the 73.9W max power. Asserting that many, if not all, power hungry apps will blow away the TDP, & suffer a 50% performance loss! Doesn't anyone wonder why he doesn't offer any concrete examples of this truly nasty behavior?

Answer... There are no nasty examples."

Actually, there are probably many examples. Logic synthesis and place-and-route for large FPGAs, the 1-million-gate versions, is processor intensive. Depending on system performance this can take 1 to 8 hours. The systems I use are not much more than the processor and 1 to 2 Gbytes of memory. To date the best performance has been achieved with Athlon based systems; a P4 system has not been tried yet.

Intel claims that the best use for the P4 is in data streaming applications. Is this not a processor intensive application? With a little additional thought, there is a whole range of applications that will hit the thermal performance wall on the P4.

The thermal inertia of the silicon all but guarantees that this does not happen on a microsecond time base. Whatever rate it occurs at, the effective performance will be less than the rated frequency of the P4.



To: ted burton who wrote (35725), 4/15/2001 10:14:08 PM
From: Joe NYC
 
Ted,

Welcome to the thread.

Suppose you are correct, and no app hits the limit on a 1.5 GHz P4. But what do you think will happen with 1.7 or 2 GHz P4s running at a higher frequency, with potentially higher voltage and significantly higher power consumption?

If you slide all of the power consumption figures (idle, typical, average, maximum) up to account for the higher clock speed and higher voltage, several applications or benchmarks may end up over the limit.
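
As a rough sketch, scaling the 1.5 GHz figures by f x V^2 (the higher-speed core voltages here are my guesses, not announced specs):

# Scale the 1.5 GHz P4's quoted figures (54.7W TDP, 73.9W max) to higher
# clocks, assuming dynamic power ~ f * V^2. The 1.75 V nominal and the
# 1.80 V guess for 2 GHz are my assumptions, not Intel numbers.

def scaled_power(p_base_w, f_base_ghz, f_new_ghz, v_base, v_new):
    return p_base_w * (f_new_ghz / f_base_ghz) * (v_new / v_base) ** 2

BASE_V = 1.75  # assumed nominal core voltage for the 1.5 GHz part

for freq, volts in [(1.7, 1.75), (2.0, 1.80)]:
    tdp = scaled_power(54.7, 1.5, freq, BASE_V, volts)
    pmax = scaled_power(73.9, 1.5, freq, BASE_V, volts)
    print(f"{freq} GHz at {volts} V: TDP ~{tdp:.0f}W, max ~{pmax:.0f}W")

Even with no voltage bump at all, a 2 GHz part lands near 98W at max, so the thermal limit only gets harder to avoid.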

Now just imagine if someone demonstrates a repeatable example of a 2 GHz processor running at only 1 GHz for periods of time. Can you see the huge PR disaster for Intel when its only selling feature becomes suspect? What's left?

Joe

PS: My guess is that the author of the article reads the Intel thread, where a few of us discussed this very subject a couple of months ago.