Technology Stocks : Advanced Micro Devices - Moderated (AMD)


To: neolib who wrote (256929)11/5/2008 9:50:34 PM
From: Elmer Phud
 
Apparently it is not as simple as you think. If you had been through the process, you wouldn't speak of it as such a simple task. AMD can't do it overnight; Intel can't do it overnight. Nobody can, or they would have already. Don't blame AMD for this one.



To: neolib who wrote (256929)11/6/2008 12:58:37 AM
From: rzborusa
 
Neo,

I'm baffled by the claims of complexity in transferring a working logic design like a GPU from one process to another. If the logic is verified, how much hand tweaking is still done in the physical layout of digital designs?
.......//.....
The point here is not starting from scratch, but simply taking existing designs, getting them on a similar process, and doing some interface logic.


Neo, I suspect the problem is not just the logic but the tweaks that, in a board-level situation, can be managed with resistors, wire-length adjustments, buffers, etc. It is probably mostly timing problems: on a die you lack those adjustable delays, and the shorter distances require more precise timing. Just my two cents.
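The timing point above can be illustrated with a toy static-timing sketch. This is not a real STA tool, and the delay numbers and process names are entirely hypothetical; it only shows why a logically verified netlist can still violate timing after a process move, because gate and wire delays do not scale uniformly:

```python
# Toy static-timing sketch with made-up numbers: the same critical path
# meets the clock period on one hypothetical process but not on another.

# Per-element delays in picoseconds (illustrative values only).
DELAYS_PS = {
    "proc_a": {"nand": 40, "xor": 90, "wire": 10},
    "proc_b": {"nand": 50, "xor": 110, "wire": 20},  # wires scale worse than gates
}

CLOCK_PERIOD_PS = 500


def path_delay(path, process):
    """Sum the delays of the gates and wires along a path."""
    table = DELAYS_PS[process]
    return sum(table[element] for element in path)


def meets_timing(path, process, period=CLOCK_PERIOD_PS):
    """True if the path fits within one clock period on this process."""
    return path_delay(path, process) <= period


# A critical path: four XOR stages with interconnect, ending in a NAND.
critical_path = ["xor", "wire"] * 4 + ["nand"]

for proc in DELAYS_PS:
    d = path_delay(critical_path, proc)
    status = "OK" if meets_timing(critical_path, proc) else "VIOLATION"
    print(f"{proc}: {d} ps -> {status}")
```

The logic is identical on both "processes"; only the delay tables differ, yet the second one misses the clock period. Fixing that on silicon means re-doing placement, buffering, and sizing, which is exactly the hand work the posters are debating.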



To: neolib who wrote (256929)11/6/2008 1:46:10 AM
From: Saturn V
 
Err...but why do hotshot ASIC designers crank things out so much faster?

You are right that ASIC design can be very rapid, but you give up a lot of chip area and potential performance. Both CPUs and GPUs push the limits of manufacturable die size, and require a lot of hand optimization to get down to a manufacturable size and squeeze out maximum performance.

The other problem is that the AMD CPUs and ATI graphics chips use different silicon processes, and probably different design tools and methodologies. It will take a big investment of time to make it happen.

But I agree with you that AMD has slipped up in this area. The biggest rationale for the multi-billion-dollar purchase was putting CPU and graphics on one chip. Two years and counting, and there appears to be nothing imminent. AMD has been sleeping on too many fronts, or is stretched dangerously thin.