Politics : Formerly About Advanced Micro Devices


To: tejek who wrote (278524)3/5/2006 8:20:02 PM
From: combjelly  Read Replies (1) | Respond to of 1571931
 
"What about Moore's Law?"

What about it? Moore's Law just says that the number of transistors doubles every 18 months or so. True, more transistors used to mean greater performance, but it doesn't have to mean that. And clock rates aren't going to increase at the rate they used to, either. Besides, Moore's Law might not be in force much longer: there are all kinds of problems that manifest themselves at 90nm and below that might slow down the pace of shrinking the features.
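Just to put numbers on the doubling claim above, here's a minimal sketch (mine, not the poster's) of what "doubling every 18 months or so" compounds to; the starting count and time span are made up for illustration:

```python
# Illustrative only: project transistor counts under Moore's Law,
# assuming one doubling every 18 months (the figure used above).
def transistors(start_count, years, doubling_months=18):
    """Projected transistor count after `years` of steady doubling."""
    return start_count * 2 ** (years * 12 / doubling_months)

# Starting from a hypothetical 100 million transistors, ten years of
# doubling is about 6.7 doublings, i.e. roughly a 100x increase:
print(round(transistors(100e6, 10)))
```

One doubling period exactly doubles the count (`transistors(1, 1.5) == 2`), which is the whole argument in one line: the growth is geometric, so any hard physical limit arrives fast once you get near it.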

And, unlike in the past, there just isn't a whole lot of low-hanging fruit. The amount of instruction-level parallelism that can be extracted from code, especially x86 code with its lack of registers, is pretty limited. Sure, for certain code sequences they can do more, but the universe of useful sequences that can still be optimized is getting smaller and more specialized. And nobody has come up with something new that promises radical increases in performance; if they have, no one has written a paper that I have found. So it looks like they will be concentrating on reducing power requirements and throwing more cores at the problem. That's OK; we need more software that takes advantage of multiple threads. Lower power is a good thing too: TDPs approaching 150 watts really complicate the engineering of systems and reduce long-term reliability. But we won't be seeing a doubling of single-thread performance every year or so like we used to.
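The "throwing more cores at the problem" point has a classic caveat the post doesn't spell out: Amdahl's law. This sketch (my addition, not from the post) shows why software that isn't heavily threaded gains little from extra cores:

```python
# A minimal sketch of Amdahl's law: overall speedup is capped by the
# fraction of a program that stays serial, no matter how many cores you add.
def amdahl_speedup(parallel_fraction, cores):
    """Speedup when `parallel_fraction` of the work scales across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even code that is 95% parallel gets well under 8x from 8 cores:
print(amdahl_speedup(0.95, 8))  # ~5.93x
```

That cap is exactly why the post says we need more software that takes advantage of multiple threads: the serial fraction, not the core count, is the limiting term.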

So is the computer industry going to hit that long and painful slide to commoditization? Well, it already has, to some extent. There are still a lot of things that can be done at the system level with specialized processors and whatnot to keep computers out of the bubble pack at the checkout counter. And there is always Cell, an exciting idea that is deeply flawed in implementation: it breaks backward compatibility and is a bitch to program effectively. Both problems can probably be fixed; it will just take the right team with the right ideas. But it isn't there yet.

The K8 does have a lot of advantages over the P4 derivatives that Intel has. That will change with the new chips (NGA) that Intel has been talking about. AMD has some tricks up its sleeve as well. So I suspect the market will be a lot more interesting than it has been for the last year or so.



To: tejek who wrote (278524)3/6/2006 7:54:32 AM
From: Tenchusatsu  Read Replies (1) | Respond to of 1571931
 
Ted, "What about Moore's Law?"

CJ had an excellent response, but I'll give my perspective as well.

Moore's Law, first of all, was kind of a self-fulfilling prophecy. Nothing wrong with that, since the industry benefited a lot from having Moore's Law serve as a driving force behind research and development. Of course, we all know that Moore's Law can't continue forever, since we can't keep shrinking transistors below the atomic level. (IBM and others are experimenting with quantum computers, but if that ever works, that will probably be the final barrier.)

Second, I'll share with you a little "secret." As we push forward from the current 90nm technology to 65nm, 40nm, and beyond, transistors themselves are going to become less reliable. Already we have to build internal fault tolerance and detection logic into areas we never had to worry about before, because the process guys can't guarantee that the transistors we use to build our designs will work as well as they used to. Fortunately, smaller transistors mean we have room to add such logic, but then that starts eating away at the benefits of miniaturization. At some point, it's just not going to be worth it.
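One textbook form of the fault-tolerance logic described above is triple modular redundancy (TMR): run three copies of a circuit and let a majority vote mask a single faulty copy. This is my own hypothetical illustration of the general idea, not the specific logic the poster's team builds:

```python
# TMR majority vote over one bit from three redundant copies of a circuit.
# Any single faulty copy is outvoted by the other two.
def majority_vote(a, b, c):
    """Return the bit that at least two of the three copies agree on."""
    return (a & b) | (a & c) | (b & c)

# One flipped copy doesn't change the result:
print(majority_vote(1, 1, 0))  # 1
```

The cost side of the post's argument is visible here too: masking one fault takes three times the logic plus the voter, which is precisely the overhead that "starts eating away at the benefits of miniaturization."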

The image of a guy rolling a boulder up a steeper and steeper slope comes to mind. Pretty soon, we're going to reach a point where we can't roll that boulder up any further.

Tenchusatsu