Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: wanna_bmw who wrote (59376)10/19/2001 6:27:08 PM
From: Gopher Broke
 
Additionally, changes within a compiler can be tested and implemented in real time. Software only needs to be recompiled to take effect. Hardware changes, on the other hand, require a long design and verification process that can easily last years.

There is the problem in a nutshell. Changing one line of code in the compiler is as risky as a processor core revision and requires as many years of verification.

You produce a new rev of the compiler, and it has to work with a lot of legacy code out there. In practice, each new rev of a non-EPIC compiler will break your code somewhere, generally in the area of the optimizations it performs, which is why you tend to freeze the compiler version with the product version.

The more people try to increase the performance of EPIC code by enhancing the compiler optimizations, the more they will destabilize the existing codebase. I don't see a happy ending.



To: wanna_bmw who wrote (59376)10/19/2001 7:59:15 PM
From: TimF
 
"History has shown us that semiconductor manufacturers have thrown transistors at the problem, and up until now, things have worked out, but we are obviously hitting new walls, in terms of power dissipation (discussed at IPF), limits of ILP (also discussed during MPF), and system design challenges (again, discussed at IPF)."

But isn't Itanium a complex design with a lot of transistors and high power dissipation?

Tim



To: wanna_bmw who wrote (59376)10/19/2001 9:45:13 PM
From: combjelly
 
"History has shown us that semiconductor manufacturers have thrown transistors at the problem, and up until now, things have worked out, but we are obviously hitting new walls, in terms of power dissipation (discussed at IPF), limits of ILP (also discussed during MPF), and system design challenges (again, discussed at IPF). Going the way of frequency directly affects power, EMI levels, system design levels, and others, while going the way of IPC increases die size, adds to thermal problems, and affects things in different ways."

Yep. There were power problems back in the bipolar days. That was fixed by using a different process. Maybe CMOS has run out of steam. As far as the limits of ILP, we will have to see. Changes in programming techniques and algorithms may be the answer here. Or even compiler techniques.

"Up until now, there hasn't been a lot of desire to use the compiler to increase CPU performance,"

Now this is totally wrong. Compiler improvements and better algorithms have always been an active area of investigation. It has just always taken much longer to actually show results.

Is EPIC the architecture of the future? I dunno. I don't think it really solves the basic VLIW problem of upward compatibility; it just pushes it back so that it only becomes a problem every few generations.