Technology Stocks : Intel Corporation (INTC)


To: Joe NYC who wrote (150971), 12/4/2001 1:06:34 PM
From: wanna_bmw

Jozef, Re: "I think the parts you highlighted show the flexibility of Hammer design."

That's obviously been a very focused goal at AMD: to continue the "one size fits all" approach and design a processor that can meet the challenges of high end and low end, mobile and desktop, server and enterprise, and everything in between. I believe Hammer is the best design AMD could have gone for. It will probably deliver a solid solution for all the segments it is targeting.

However, I think the main limitation of this approach will be that a Jack of All Trades will always be Master of None. I think that Intel, going with several different CPU architectures and micro-architectures, has a better opportunity a few years out. Intel is using Netburst on the desktop, Banias in mobile, and IA-64 in servers. The strengths of these architectures are well suited to their respective markets. If it executes, I think Intel could end up stronger in each of these markets individually.

Then again, this plan doesn't really take effect next year, when Intel will still have to rely on Netburst to carry the majority of volume in all of its markets. While I think Hyperthreading will be what Intel needs to compete in the server market, I can't see a Netburst-based mobile chip. Intel will have a hard time keeping mobile market share in 2002.

In 2003, things might change. Hammer will have ramped up, and AMD will be filling every one of the K7's holes with a K8. That will definitely give them much more of an advantage. However, Intel will launch Banias, Netburst will keep ramping in frequency, and Itanium will have a much higher performing McKinley or Madison core. I see the competition remaining close for years to come.

wbmw



To: Joe NYC who wrote (150971), 12/4/2001 1:14:28 PM
From: Tenchusatsu

Joe, I reread the interview with Richard Brown of VIA, the one posted by WBMW. I think Brown has it backwards, probably as a result of pumping up VIA's own products.

The CPU is always sensitive to latency, much more so than any other component in the system. Putting the memory controller right on the CPU die helps to lower that latency. I believe this is where AMD will experience the majority of their estimated performance gains with Hammer.

Meanwhile, moving the memory controller away from AGP and other I/O doesn't affect performance all that much, because I/O is more concerned with bandwidth than latency. The exception would be nForce-style integrated graphics, but integrated graphics were never meant to deliver cutting-edge performance, not even nForce.

All of the above is true for single-processor systems. When you move to multiprocessor systems, however, it's a slightly different ballgame. The good news about the Hammer design is that memory bandwidth scales with the number of CPUs. The bad news is that the Hammer approach is much tougher to implement than a shared-bus architecture like that of the Pentium 4, Itanium, or McKinley. Not only that, but without ccNUMA support in your OS, average latency will suffer, because only a fraction of your memory accesses can be satisfied by the on-die memory controller. ccNUMA support is usually a feature exclusive to high-end server OSs.
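The latency argument above is just a weighted average, and a quick sketch makes it concrete. The latency numbers and locality fractions below are made-up illustrative values, not measured Hammer figures:

```python
# Illustrative sketch: average memory latency in a multiprocessor system
# with per-CPU (on-die) memory controllers. All numbers are assumptions
# chosen only to show the shape of the ccNUMA argument.

LOCAL_NS = 80    # assumed latency when the access hits the local on-die controller
REMOTE_NS = 140  # assumed latency when the access must hop to another CPU's controller

def avg_latency(local_fraction):
    """Expected latency given the fraction of accesses that stay node-local."""
    return local_fraction * LOCAL_NS + (1 - local_fraction) * REMOTE_NS

# OS with no ccNUMA support on a hypothetical 4-way box: pages end up
# spread evenly, so only ~1/4 of accesses land on the local controller.
naive = avg_latency(1 / 4)    # 0.25 * 80 + 0.75 * 140 = 125 ns

# ccNUMA-aware OS: the allocator prefers node-local pages; assume 90% locality.
aware = avg_latency(0.90)     # roughly 86 ns

print(f"naive placement: {naive:.0f} ns average")
print(f"ccNUMA-aware:    {aware:.0f} ns average")
```

The point is that the on-die controller's latency advantage only shows up in the multiprocessor average to the extent that the OS keeps a process's pages on its own node.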

Tenchusatsu



To: Joe NYC who wrote (150971), 12/4/2001 6:28:35 PM
From: ptanner

Joe, re: "In a low end [Hammer] system, the memory can be controlled by offchip memory controller, and memory can be shared with graphics"

Wouldn't it be easier just to integrate both the graphics and graphics memory into one chip rather than adding a separate memory controller (which seems more complex to me)?

With all the discussion of the potential to provide 8MB of cache on a northbridge chip (Micron's Mamba or whatever), wouldn't it also be possible to provide both integrated graphics and graphics memory? I know current integrated graphics chipsets use UMA, but with smaller process geometries I thought some extra real estate might be available on one of the existing chips, given that the die has to be large enough to support the "package connections" anyway.

-PT