Technology Stocks : Qualcomm Moderated Thread - please read rules before posting

To: Jim Mullens, who wrote (109676), 2/18/2012 11:16:16 AM
From: pheilman (2 Recommendations)  Read Replies (1) of 197036
 
dual channel memory bandwidth

It turns out that CPUs and GPUs place very different demands on memory, and a given memory system can only be configured one way at a time.

A CPU needs low latency: it needs the next piece of data right now to keep executing, and that data is scattered more or less randomly throughout memory. So the memory needs to be set up for low latency, and bandwidth is less critical. CPUs don't really care about bandwidth, but latency slows them to a crawl.
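The latency-bound pattern described above can be sketched with a pointer chase: each load depends on the result of the previous one, so nothing can be overlapped and total time is set almost entirely by per-access latency. This is a minimal illustration (in Python, interpreter overhead dominates the raw numbers, but the dependency structure is the same one that makes CPUs latency-sensitive); all names here are made up for the example.

```python
import random
import time

def build_chain(n, shuffled):
    """Build a permutation where each slot stores the index of the next
    element to visit: sequential order (cache/row friendly) or a random
    shuffle (every hop lands somewhere unpredictable in memory)."""
    order = list(range(n))
    if shuffled:
        random.shuffle(order)
    chain = [0] * n
    for i in range(n - 1):
        chain[order[i]] = order[i + 1]
    chain[order[-1]] = order[0]   # close the cycle
    return chain

def chase(chain, hops):
    """Dependent loads: each access needs the previous result, so the
    hardware cannot overlap them -- throughput is set by latency alone."""
    i = 0
    for _ in range(hops):
        i = chain[i]
    return i

if __name__ == "__main__":
    n = 1 << 20
    for label, shuffled in (("sequential", False), ("random", True)):
        chain = build_chain(n, shuffled)
        t0 = time.perf_counter()
        chase(chain, n)
        dt = time.perf_counter() - t0
        print(f"{label:10s} chase over {n} elements: {dt:.3f} s")
```

In a compiled language on real hardware the random chase is typically several times slower than the sequential one, even though both perform exactly the same number of loads.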

A GPU needs massive bandwidth. Each pixel in the new frame is touched several times before the 60 Hz update, and the next piece of data requested is usually adjacent to the last one. The memory holds the most recently accessed row in each bank "open," so when the GPU requests the next word the data is available very quickly. The trade-off is that a truly random access is quite a bit slower.

The memory system is configured at start-up either to keep rows open for bandwidth or to close them immediately for the lowest random-access latency. This is why merging CPUs and GPUs doesn't really pay off: the memory can't serve both masters well. It didn't work for AMD/ATI and it won't work for NVIDIA, unless there are two memory controllers, each configured for its requester.
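The open-vs-closed trade-off can be sketched with a toy cost model. The timing parameters and row size below are purely illustrative (not from any real DRAM datasheet), and real controllers are far more complex (multiple banks, scheduling, refresh), but the model shows why one policy can't win for both access patterns:

```python
import random

# Illustrative timing parameters (made up for this sketch, in ns):
T_CAS, T_RCD, T_RP = 15, 15, 15   # column read, row activate, precharge
ROW = 1024                         # addresses per DRAM row

def open_page_cost(addrs):
    """Open-page policy: the row is left open after each access.
    Hits pay only the column read; a conflict pays precharge of the
    old row, activate of the new one, then the read."""
    total, open_row = 0, None
    for a in addrs:
        row = a // ROW
        if row == open_row:
            total += T_CAS                   # row-buffer hit
        else:
            total += T_RP + T_RCD + T_CAS    # close old row, open new one
            open_row = row
    return total

def closed_page_cost(addrs):
    """Closed-page policy: the row is auto-precharged after every access,
    so each one pays activate + read -- never a precharge stall on the
    critical path, but never a hit either."""
    return len(addrs) * (T_RCD + T_CAS)

if __name__ == "__main__":
    seq = list(range(4096))                          # GPU-like streaming
    rnd = [random.randrange(1 << 20) for _ in range(4096)]  # CPU-like
    for name, addrs in (("sequential", seq), ("random", rnd)):
        print(f"{name:10s} open-page: {open_page_cost(addrs):7d} ns   "
              f"closed-page: {closed_page_cost(addrs):7d} ns")
```

With these numbers the open-page policy wins easily on the sequential stream (almost every access is a row-buffer hit) while the closed-page policy wins on the random stream (it never pays the precharge penalty), which is the conflict the post describes.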