Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: wbmw who wrote (236999), 7/24/2007 5:33:00 PM
From: TGPTNDR
 
Wanna-b, Re: LOL, of course it is when AMD tests at 35C degrees and Intel tests at 50C degrees. Leakage goes up with temperature, or didn't you know that...?

Does leakage traditionally double with a 15-degree C increase -- from 35C to 50C? Is there a spec you can point to on that? Does it differ depending on design? Find me a case where that 15C will double electrical usage, and show how it relates to INTC/AMD designs. (Folks, please note the restraint I'm exercising here.)
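For what it's worth, the usual rule of thumb is that subthreshold leakage grows roughly exponentially with temperature, so it can be modeled as doubling every N degrees C, where N depends heavily on the process and threshold voltage. A minimal sketch of that model -- the doubling intervals below are illustrative assumptions, not Intel or AMD specs:

```python
def leakage_ratio(t1_c, t2_c, doubling_interval_c):
    """Ratio of leakage at t2_c versus t1_c, assuming leakage doubles
    every doubling_interval_c degrees C. The interval itself is
    process- and threshold-voltage-dependent; the values tried below
    are illustrative, not vendor specs."""
    return 2 ** ((t2_c - t1_c) / doubling_interval_c)

# 35C -> 50C under a few assumed doubling intervals
for interval_c in (8, 10, 15, 20):
    print(interval_c, round(leakage_ratio(35, 50, interval_c), 2))
```

Under these assumptions, a 35C-to-50C swing changes leakage anywhere from well under 2x to well over 3x, which is exactly why a spec citation matters here -- and leakage is only one component of total power anyway.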

Re: All Intel has to do is receive a HALT or MWAIT instruction from the OS in one of the cores in order to go into C1 state. C1E occurs when both cores are halted, and SpeedStep takes both cores to the minimum voltage and frequency before stopping the clocks. AMD cores work in the same way. If they want to get any benefit from C1 state, they will take time to transition to the min P-state, and then disable the HTT links, put the memory in self-refresh mode, and tri-state the DDR pins in order to go into C1E. You don't think all of this happens in a shorter amount of time, do you...?

Yes. Can you point me to the process specs? I'm really curious. Is it *JUST* personal opinion, or do INTC and AMD have specs out there on this subject?

You might want to avoid the use of LOL where it's personal opinion unless you want to get it back.

Re: LOL, you still don't have the name right. It's "Griffin", Pete. Please try to get the terminology correct!

Whoops, attack the persona. I know how to do that. Wanna see me try?

I'm not yet serious about it, but I could be. If CJ doesn't kick one of us off.

-tgp



To: wbmw who wrote (236999), 7/24/2007 9:01:53 PM
From: pgerassi
 
Wbmw:

You fail to see notes 1, 2 and 4. Entries in that column are not maximum thermal power for any listed CPU, since they occur at other than maximum Tcase and are not in either profile. If the 6850 uses 8W, then Tcase has to be 48.1C off table 28. Since it's at a Tcase of 35C per note 4, it can't be maximum thermal power. Note 6 only refers to a CPU at the appropriate thermal power profile and the Maximum TDP column.

Of course, by Axiom 3, Intel supporters give Intel the benefit of the doubt in unstated cases, even when the notes contradict it. Note 5 directly contradicts note 6, and it's not applicable to any 8W C1E idle CPU (there is no note 6 on either the column or the row of such CPUs). The only way notes 5 and 6 can both be true is if 5 refers to any part at any time, whereas 6 only refers to the maximum column and the maximum Tcase in the previous column. It specifically refers to tables 28 to 32, not 27.

And then there is Axiom 1: AMD can't have lower individual TDPs either. But I have such a CPU, an A64 3500+ with an OPN rated at 65W TDPmax, but a 50W individual TDPmax rating. While AMD states that no CPU it sells will exceed its OPN ratings, Intel flatly states in note 5 that the CPU can dissipate more than is specified. Which makes all of those CPUs with note 6 suspect.

Frankly, all of these thermal maximums that are thrown about are well outside what a typical OEM designer would allow these processors to reach, except maybe the minimum Tcase. Individual component variances demand that large margins be maintained across that many components, and environmental conditions can be at or beyond the limits. HSFs get dirty, computers are run in dusty closets with little or no ventilation, and power companies sometimes surge well above normal voltages in some places. I have seen a business that frequently blew power supplies because an electroplating shop was right next door. In the morning, the rods would drop: instant blackout, and voltages sagged for a while. In the afternoon, the rods would come out: instant surge, sometimes doubling the voltage for a time. A buck-boost transformer solved that problem, as the power company said it was within its tolerances. Nowadays it would get sued fast and lose big.

"More wishful thinking, Pete. All Intel has to do is receive a HALT or MWAIT instruction from the OS in one of the cores in order to go into C1 state. C1E occurs when both cores are halted, and SpeedStep takes both cores to the minimum voltage and frequency before stopping the clocks. AMD cores work in the same way. If they want to get any benefit from C1 state, they will take time to transition to the min P-state, and then disable the HTT links, put the memory in self-refresh mode, and tri-state the DDR pins in order to go into C1E. You don't think all of this happens in a shorter amount of time, do you...? LOL."

Wishful thinking on your part. Look up the settle times on voltage changes for the VRMs. It takes 25us just to be certain that the voltage isn't too high or too low, in addition to the VID switch times -- and there are many of those, because the VID can only change 0.0125V per step. Add them up and you get quite a few milliseconds per transition. All the other steps take far less time. A divisor change, for example, takes a few nanoseconds to make and less than a microsecond to be stable. The only power-on sequence parameter with a time on it is the Vcc overshoot time. Look at section 6 of the datasheet you linked to. So adjusting the voltage is slow compared to everything else. If you truly designed these circuits, or even programmed them, you would know this. And you would see that bringing a supply up to a given voltage takes less time than going up by increments. You can see this in the TMA2 activation event diagram (figure 26 on page 88). Look at how long it takes to ramp the voltages down and up compared to switching the frequency.
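The stepping arithmetic can be sketched like this. The 12.5mV step and the 25us settle figure are the numbers quoted above; the voltage endpoints are illustrative assumptions:

```python
def vid_transition_time_us(v_from, v_to, step_v=0.0125, settle_us=25):
    """Total time for a VRM to walk the supply between two voltages in
    fixed VID steps, waiting settle_us after each step. Step size and
    settle time are the figures quoted in the post; the endpoints used
    below are illustrative."""
    steps = round(abs(v_to - v_from) / step_v)
    return steps * settle_us

# e.g. dropping from 1.30V to 1.00V for a low-power P-state:
# 24 steps at 25us each, versus nanoseconds for a divisor change
print(vid_transition_time_us(1.30, 1.00))
```

Even under these rough assumptions, the voltage walk is hundreds of microseconds, orders of magnitude slower than the frequency-divisor change, which is the point being made.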

And you would also see that the FSB is disabled during C1 Stop Grant. From C4, it has to do everything it would if it were powering up for the first time, plus restore state.

"C4 is deeper sleep state, Pete. Sheesh, at least get the terminology right.... And deeper sleep does take more time to come out of, because you get better power savings. Look at the performance measurements of Merom in laptop (power savings) mode, compared to desktop mode. Performance degrades about 0-5%. Big deal, when the results usually lead to large increases in battery life that Turion cannot provide, even with all of its features turned on...."

You do see state S3 in those OPN diagrams, don't you? With chipset support, it can go into that state and wake out of it. It's equivalent to C4, where the Turion X2 is shut off except for the memory refresh. 350mW is a whole lot smaller than 3.79W. Even the big A64 FX-74 can go into that state. As you say, it takes a little longer to wake up, but given that the memory just needs to go active and is directly connected, it fills the smaller caches much more quickly and thus has a smaller performance loss for a much shorter time. Now that AMD has a chipset division, it can implement that state for its mobile, desktop and server CPUs instead of pleading with its partners.
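As a sanity check on the two figures quoted above (0.35W in the S3-like state versus 3.79W in C4), here is the back-of-envelope comparison. The 55Wh battery capacity is an illustrative assumption, and real platform idle draw includes far more than the CPU, so these are floor-to-floor comparisons only:

```python
# Idle-power figures quoted above: 0.35W (S3-like state) vs 3.79W (C4).
# The 55Wh battery is an assumed, illustrative capacity; real idle
# power includes the chipset, screen, drives, etc.
s3_w, c4_w = 0.35, 3.79
print(round(c4_w / s3_w, 1))      # ratio of the two idle floors
battery_wh = 55.0
print(round(battery_wh / s3_w))   # hours if the S3 floor were the only load
print(round(battery_wh / c4_w))   # same calculation at the C4 floor
```

The ratio of the two floors is roughly an order of magnitude, which is why the deeper state matters for battery life even though waking from it costs more time.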

"Oops, wrong again, Pete! 2.6GHz Merom @ 35W is scheduled for this September. 2.8GHz Merom Extreme Edition @ 44W is coming at the same time. By the way, when are OEMs going to start offering Tyler at the expected 2.3GHz, Pete? Dell and HP only seem to go to 2.2GHz, but they do have 2.4GHz Merom!"

Sorry, Wbmw, you have been bitten by Axiom 2 again: use Intel's future offerings, as if available now, against AMD's old ones. The TL-66 has been selling since May 2007. One link to a TL-66 notebook:

cyberpowerpc.com

See the top-listed Turion 64 X2 offering (TL-66 for $144 more).

"Oops, no Pete, sorry. Fusion is a 2009 event. Everyone seems to have heard the news except you. What makes you qualified to make these arguments, when you are wrong, over and over and over and over and over again...?"

Did you see the previous posts on 45nm TSMC Fusion speculations? Likely yes. And you dismissed them due to Axioms 1, 2 and 3. AMD just can't blow up Intel's future plans! Placing a GPU in an MCM with a CPU? Only Intel is allowed to do that! Pulling it forward? Impossible!

"Oh, yeah, I forgot about all those developers who are going to optimize for AMD's ATI instruction set and not for Intel. It will be just like AMD64, when Intel has no response... Oops, nope, you're wrong again. Maybe you've forgotten, but Intel has integrated graphics on Nehalem as well. You are just filled with wishful thinking today, but unfortunately, the little ATI bomb has turned out to be a wet firecracker. It's losing money for AMD, quarter after quarter, and AMD still has to pay $5.6B for it, not to mention the debt interest payments, amortization, future headcount burden, etc. What a complete boneheaded move by Hector. He tries to walk on water, but has a 500lbs weight around his neck. Kerplunk."

More of the same garbage. Intel doesn't have plans for an integrated CPU-GPU. Intel doesn't have a stream processor of its own. It will likely copy one, just as it followed AMD64 and tried to claim it as its own, even though it was copied straight from AMD's documentation (including the errors). Developers are writing for ATI's and nVidia's GPUs, though, and they are writing for movement through cHT as well. Being a few mm away from an AMD CPU, across a fast, wide, coherent link, will allow all of the current software (ATI stream processing) to work, plus any new developments. The APIs are already present, with the required tools.

And you forget that Intel can't do this with its own stuff. CSI is a vague, shadowy set of goals, their GPU doesn't perform well, and there are no APIs or tools available (kind of hard, given that everything is in flux). Yes, Intel could throw money at the problem, but development takes time. Look how long it took Intel and HP to get a decent Itanium compiler.

So you fall back on Axioms 1, 2 and 3. Since Intel doesn't have one, AMD can't either. AMD can't get an MCM with a GPU done in time. But the only real change is to add a cHT interface to the GPU, and AMD has plenty of experience doing that, even in bulk silicon. TSMC can't be sampling 45nm, per Axiom 3. AMD can't be sampling 45nm SOI, per Axiom 3 -- forgetting that it can do that at Fishkill right now. Folding@Home was a fluke, per Axiom 3 -- forgetting that AMD has plenty of GPU software makers in house, plus many in the game engine and HPC businesses. Many high-end media software makers would jump at a 10x speedup on a standard computer. If a competitor got it first, they would quickly be up the creek without a paddle. They may feel a 10% speedup isn't worth following, but they wouldn't ignore one of 900+%.

Wbmw yells "Intel can do no wrong!" while passing through a notorious avalanche zone. He gets buried and crushed by the rebuttal. The sign placed above has one word on it, "STUPID", with an arrow pointing down.

Pete



To: wbmw who wrote (236999), 7/25/2007 7:01:47 AM
From: Dan3
 
Re: I forgot about all those developers who are going to optimize for AMD's ATI instruction set and not for Intel. It will be just like AMD64, when Intel has no response...

Good point! Having AMD64 let AMD pick up server lines from Dell, HP, and IBM; notebook lines from Dell, Toshiba, etc.; corporate desktop lines from Dell, HP, and IBM; and HPC lines from Cray, IBM, Dell, HP, etc.

Now they're starting from this much higher level, getting ready to make the next jump.