Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: burn2learn who wrote (52433) | 8/24/2001 10:30:24 PM
From: AK2004 | Read Replies (2) | Respond to of 275872
 
b2l
Your questions are way above my tech knowledge :-))
What are you running that requires MP?
The closest I get is running modeling in distributed processing mode...
Regards
-Albert



To: burn2learn who wrote (52433) | 8/24/2001 11:36:29 PM
From: pgerassi | Respond to of 275872
 
Dear Burn2learn:

First, to your points: AMD has not shot its wad as far as 0.18u performance goes. They have yet to widen the L1-to-L2 bus, reduce the associativity of the L2 cache, eliminate its exclusivity, or go to hand layout and fully custom circuits (they use standard cells, though probably far more of them than the typical designer, instead of redoing every circuit from scratch a la Intel). They could also raise the FSB frequency (they could go to 166MHz, i.e. 333 DDR / PC2700, right now, since many of their CPUs can already be overclocked that high) and do things like Intel does, such as reducing the max die temp, etc. And that is just off the top of my head.
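The FSB numbers above can be sanity-checked with simple arithmetic: a DDR bus transfers data on both clock edges, and an Athlon-era FSB/memory bus is 64 bits (8 bytes) wide, which is where module names like PC2700 (roughly 2700 MB/s peak) come from. A minimal sketch (the function name is mine, not from the post):

```python
# Peak bandwidth of a DDR bus: transfers/s times bytes per transfer.
def peak_bandwidth_mb_s(bus_clock_mhz: float, ddr: bool = True, bus_bytes: int = 8) -> float:
    transfers_per_s = bus_clock_mhz * 1e6 * (2 if ddr else 1)
    return transfers_per_s * bus_bytes / 1e6  # MB/s

# Athlon-era examples with a 64-bit bus:
print(peak_bandwidth_mb_s(133))  # 266 MT/s DDR -> 2128 MB/s (marketed as PC2100)
print(peak_bandwidth_mb_s(166))  # 333 MT/s DDR -> 2656 MB/s (marketed as PC2700)
```

(The marketing names round the nominal 133.3/166.7 MHz clocks up to 2100/2700 MB/s.)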

As to putting two cores and a northbridge on a single chip: this reduces latency and allows huge bandwidth between memory, the CPUs, I/O (through a HyperTransport link), and remote CPUs (over one or more additional HyperTransport links). In addition, the L2s of both cores could be merged into one larger cache. All of this speeds up moving information among all its consumers and producers, and cuts the latency between a request and its being satisfied. Remember that simply driving signals off chip takes far more drive strength than moving them around the die, and off-chip frequencies are much lower than on-chip ones. Why do you think CPU clocks need on-chip multipliers?
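The multiplier point works like this: the core clock is the (slower, off-chip) FSB base clock scaled up by an on-chip multiplier. A one-line sketch, with an Athlon-era example (the exact multiplier shown is illustrative):

```python
# Core clock = FSB base clock * on-chip multiplier.
def core_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    return fsb_mhz * multiplier

# e.g. a 133 MHz FSB with a 10.5x multiplier yields a ~1.4 GHz core,
# while the external bus itself never has to run anywhere near that fast.
print(core_clock_mhz(133, 10.5))  # 1396.5
```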

As to why Intel is not ahead of the curve, it is simple. With their "copy exactly" method of replicating production across many lines from one good-running prototype line, there is built-in latency between fixing a bug found later in production and getting all the lines to implement the fix. They can only order equipment after it is running in production on the prototype line. By sticking to the tried and true, they can order sooner and shorten the ramp across all lines, because they know the equipment will work. The old way also worked because they were never pushed to release before a process was ready; by the time anyone knew it was coming, it was already being ramped. Competition from AMD has dragged these latencies in their methods out into public view. When Intel says they did not need copper for 0.18u, they mean that by the time they could qualify it and get it working (the tried-and-true test above), 0.13u would be the cutting edge, so they waited for 0.13u. I believe the same is true of SOI: if it works, Intel could only put it into their 0.10u process; it is too late for the 0.13u process. So they put on their best face and say it is not needed (they fervently hope).

AMD just needs to put it into their latest line. They can ramp it up in less than a year, if it works. Smaller companies can be nimble.

Pete