To: Dan3 who wrote (83233), 12/16/1999 1:19:00 PM
From: Tenchusatsu
 
Dan, <But their demands on memory aren't for streaming huge blocks of memory, they are demands for many smaller bursts from random locations.>

That's where huge processor caches come in, at least for Xeon servers, and perhaps for Itanium as well (4 MB of off-chip L3 cache in the Merced module). As for EV7, well, that's why they integrated the memory controllers right onto the processor die. That's a sure-fire way to reduce the latency of main-memory accesses.
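To put rough numbers on it (these are made-up round figures, not Intel or Alpha data), the usual average-memory-access-time arithmetic shows how the two approaches attack the problem from different ends: a big cache cuts the miss rate, an on-die controller cuts the miss penalty. A quick Python sketch:

# Illustrative AMAT arithmetic; all latencies and miss rates below are
# hypothetical round numbers, not measured figures for any real system.

def amat(hit_ns, miss_rate, miss_penalty_ns):
    # Classic average memory access time: hit time + miss rate * miss penalty.
    return hit_ns + miss_rate * miss_penalty_ns

# Off-chip memory controller: the miss penalty includes a trip across the chipset.
print(amat(hit_ns=10, miss_rate=0.05, miss_penalty_ns=150))   # 17.5 ns
# Integrated controller (the EV7 approach): the chipset hop disappears.
print(amat(hit_ns=10, miss_rate=0.05, miss_penalty_ns=100))   # 15.0 ns
# A bigger cache (the Merced approach) attacks the 0.05 miss rate instead.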

Especially in servers, the biggest cause of latency is throughput the memory system can't sustain: once it saturates, requests queue up and effective latency balloons. All this nitpicking over the additional latency of RDRAM might mean a few percentage points of performance in desktop systems, but it means absolutely nothing in servers.
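Here's a rough way to see why (a sketch only, roughly M/M/1-style queuing; the unloaded latency and load points are invented for illustration): once offered load approaches what the memory system can actually sustain, queuing delay swamps any fixed few-nanosecond difference between DRAM technologies.

# Rough queuing sketch: effective latency = unloaded latency / (1 - utilization).
# The 60 ns unloaded latency and the load points are assumptions for illustration.

def loaded_latency_ns(unloaded_ns, utilization):
    return unloaded_ns / (1.0 - utilization)

for u in (0.2, 0.5, 0.8, 0.95):
    print(f"utilization {u:.2f}: {loaded_latency_ns(60, u):7.1f} ns effective latency")

At 95% utilization the queuing term dwarfs the 10 or 20 ns you might save by picking one DRAM type over another.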

<But I'm arguing that for almost any server application, it is DDR that has better MHZ to MHZ performance due to lower latency.>

No, in fact the performance differences between DDR and RDRAM in servers are inconclusive. It's not clear to me that DDR can use its bandwidth efficiently enough in a server environment to match the potential performance of RDRAM. Besides, the main reason DDR is being pushed over RDRAM is not performance, but cost. That's another debate, however.
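To make the "efficiently enough" part concrete (the peak figures are the usual ones, but the efficiency factors here are pure assumptions for the sake of argument, not measurements):

# Hypothetical effective-bandwidth comparison. Peaks: PC800 RDRAM channel ~1.6 GB/s,
# 133 MHz DDR (PC2100) ~2.1 GB/s. The efficiency factors are assumed, not measured.

def effective_bw_gb_s(peak_gb_s, efficiency):
    return peak_gb_s * efficiency

# Under random, many-client server traffic, bus turnaround and bank conflicts can
# cost a wide shared DDR bus more of its peak than a narrow, deeply pipelined
# RDRAM channel gives up.
print("DDR  :", effective_bw_gb_s(2.1, 0.55), "GB/s")   # assumed 55% efficiency
print("RDRAM:", effective_bw_gb_s(1.6, 0.80), "GB/s")   # assumed 80% efficiency

Plug in different efficiency numbers and the winner flips, which is exactly why I call it inconclusive.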

But like I said before, the minuscule savings in latency that you get with DDR over RDRAM mean absolutely nothing in servers. Don't take my word for it, though. Take what MPR says about HotRail's upcoming 8-way Athlon chipset:

Perhaps the most significant problem with the HotRail architecture is the extra latency added by the relatively long path each transaction must take through the chipset. ... [However,] the company points out -- correctly, we believe -- that its advantage in sustained throughput for the whole system is much more important for most server applications.

So in short, servers care more about bandwidth than latency. If you can't sustain the bandwidth, then a minuscule latency advantage of DDR over RDRAM isn't going to mean squat. This is different from desktops, where sustained bandwidth is less important, meaning that latency becomes a bigger factor in performance.

Back to the original subject regarding Alpha EV7. Yes, 16 RDRAM channels, four per processor, does seem like an insane amount of memory bandwidth. But four RDRAM channels are more easily integrated onto the processor die than four DDR channels would be, since a Rambus channel is only 16 bits wide and takes far fewer pins than a 64-bit DDR interface. And that integration will naturally lead to lower latency. Therefore, EV7-based servers will have the advantages of high bandwidth, low latency, and very sustainable throughput. (In fact, I feel EV7 can seriously challenge Merced/Itanium in terms of performance.)
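For scale, here's the back-of-envelope aggregate bandwidth for that configuration, assuming PC800-class channels at roughly 1.6 GB/s each (the per-channel figure is my assumption; the channel counts are from above):

# Back-of-envelope bandwidth for 16 RDRAM channels, four per processor.
# Assumes ~1.6 GB/s per PC800-class channel; real sustained figures will be lower.

channels_per_cpu = 4
cpus = 4                 # 16 channels total in this example
gb_per_channel = 1.6

per_cpu_gb_s = channels_per_cpu * gb_per_channel
total_gb_s = per_cpu_gb_s * cpus
print(f"{per_cpu_gb_s:.1f} GB/s per processor, {total_gb_s:.1f} GB/s across four processors")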

Of course, EV7-based servers with RDRAM will naturally cost more than servers based on an equivalent amount of DDR SDRAM. I guess that's the price paid for the performance. If they decide to switch to four integrated DDR controllers, I'd sure like to know, since there are some major trade-offs to consider here.

Tenchusatsu