Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: TGPTNDR who wrote (81198) | 5/31/2002 4:11:26 PM
From: Charles Gryba
 
TGPTNDR, hmmm, maybe because 32-bit lasted a decade? Why do you think 64-bit will last any less?

C



To: TGPTNDR who wrote (81198) | 5/31/2002 5:18:47 PM
From: wanna_bmw
 
TGPTNDR, Re: "IMO, by 2012 any 64-bit CPU is most likely to be in a telephone.

128-bit will rule the desktop,

256-bit will be taking the server markets.

All -- Tell me different. And why."


Tell me what kind of applications process integer data in >64-bit chunks. Floating-point data sometimes gets processed in 80-bit chunks, called extended precision, but only in the most demanding scientific apps. Can you think of any future application that might need 2^48 times better precision than that?
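Just to put numbers on that, here's a minimal C sketch of those two float widths. It assumes a compiler where "long double" maps to the x86 80-bit extended format (true of gcc on x86, but not guaranteed by the C standard):

    #include <stdio.h>
    #include <float.h>

    int main(void)
    {
        /* DBL_MANT_DIG is 53 for 64-bit IEEE doubles; LDBL_MANT_DIG is 64
         * when long double is the x86 80-bit extended format. */
        printf("double:      %lu bytes, %d mantissa bits\n",
               (unsigned long)sizeof(double), DBL_MANT_DIG);
        printf("long double: %lu bytes, %d mantissa bits\n",
               (unsigned long)sizeof(long double), LDBL_MANT_DIG);
        return 0;
    }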

You might be thinking of future memory addressing needs, but that isn't the same as xyz-bit computing. A Pentium 4 chip has 36 address lines even though it's a "32-bit" chip, while the future Sledgehammer will only have 40 even though it's a "64-bit" chip. McKinley has 50 bits' worth of address lines, even though it's 64-bit (and 50 bits is still enough for 1024TB of addressable memory). If in 2012 computers need >1024TB of memory, then Intel and AMD can simply increase the number of address lines on their CPUs. Unless the number of address lines has to exceed 64, it's not likely that they will need to change the architecture to accommodate that.
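To make the arithmetic concrete, here's a rough C sketch using the address line counts quoted above (the per-chip figures are just the numbers from this post, not anything the code verifies):

    #include <stdio.h>

    int main(void)
    {
        /* Addressable memory is 2^(address lines) bytes, regardless of
         * the CPU's nominal "bitness". Note 1048576 GB = 1024 TB. */
        const struct { const char *chip; int lines; } cpus[] = {
            { "Pentium 4 (32-bit)",    36 },
            { "Sledgehammer (64-bit)", 40 },
            { "McKinley (64-bit)",     50 },
        };
        int i;
        for (i = 0; i < 3; i++) {
            unsigned long long bytes = 1ULL << cpus[i].lines;
            printf("%-24s %2d lines -> %llu GB\n",
                   cpus[i].chip, cpus[i].lines, bytes >> 30);
        }
        return 0;
    }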

The only way you will see 128-bit or 256-bit computing is if marketing gets a shot at exploiting parallel instruction sets, such as Transmeta's "128-bit" Crusoe chip, or the "256-bit" one they were supposed to launch this year. Of course, the latter CPU isn't really a 256-bit processor - it simply crunches up to 4 64-bit instruction bundles at a time. But if you decide to call that Crusoe chip 256-bit, you probably ought to call Itanium 256-bit, too, since it can process up to 2 bundles of 128-bit instruction words at a time (each of which contains up to 3 instructions). And for that matter, you might as well call the Pentium 4 a 128-bit CPU, since it supports SSE-2, a nearly complete instruction set that uses 128-bit registers. See how marketing can get out of control?
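Here's a hedged C sketch of the SSE-2 point - the "128-bit" register really just holds four independent 32-bit integers, and one instruction adds them all in parallel. It assumes an SSE2-capable CPU and compiler (e.g. gcc -msse2):

    #include <stdio.h>
    #include <emmintrin.h>   /* SSE2 intrinsics */

    int main(void)
    {
        /* Pack four 32-bit ints into each 128-bit XMM register.
         * _mm_set_epi32 takes arguments from the high element down. */
        __m128i a   = _mm_set_epi32(4, 3, 2, 1);
        __m128i b   = _mm_set_epi32(40, 30, 20, 10);
        __m128i sum = _mm_add_epi32(a, b);   /* four 32-bit adds at once */

        int out[4];
        _mm_storeu_si128((__m128i *)out, sum);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
        return 0;
    }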

In terms of real CPUs that are >64-bit, I doubt the need for one will ever exist outside of very specialized scientific apps. Any mainstream application, or even any server-oriented application, uses data that is largely 32 bits or less in size. Even the need for 64-bit data containers is rare - so much so that it has allowed x86 to thrive in spite of all the other architectures that have set themselves up against it, including the 64-bit PowerPC. That's why I expect x86-64 to eventually get the same ho-hum reception on the desktop that PowerPC got when going up against the Pentium 8 years ago.
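FWIW, here's a small C sketch of the rare case where a 64-bit data container genuinely earns its keep even on 32-bit x86 - a file offset past 4GB. The values are made up for illustration:

    #include <stdio.h>

    int main(void)
    {
        /* A 32-bit counter tops out just under 4GB; the 6GB offset
         * below needs a 64-bit container ("long long" in C99). */
        unsigned int       offset32 = 0xFFFFFFFFu;
        unsigned long long offset64 = 6ULL * 1024 * 1024 * 1024;

        printf("32-bit max offset: %u bytes (~4GB)\n", offset32);
        printf("64-bit offset:     %llu bytes (6GB)\n", offset64);
        return 0;
    }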

The only way for AMD to be successful is if they can show significant "across-the-board" performance improvements from recompiling code for their new extensions. However, the jury is still out on whether their instruction set has any inherent performance advantages. The extra registers may help in some applications, but it is unknown whether the improvement will be significant, or "across-the-board". As for improvements due to the actual "64-bitness" of the instruction set, I am expecting very little.

wbmw