Technology Stocks : Rambus (RMBS) - Eagle or Penguin
To: Tenchusatsu who wrote (47181), 7/15/2000 5:04:19 PM
From: Bilow
 
Hi Tenchusatsu; I see that Dan3 has already admirably replied to this post, but I thought I would add a few comments...

You wrote: "There isn't much of a difference in latency between sending all of the bits at once, and packetizing the data."

At 800MHz, the difference between sending 64 bits at once and sending four 16-bit packets is three 800MHz clocks, or 3.75ns. I believe we can agree on those numbers; the question is whether 3.75ns is much of a difference... With a processor running at 1.6GHz, each of those 800MHz clocks is two processor clocks, so packetizing costs six processor clocks on every memory read. And how many instructions do the new machines execute per clock? It is obvious that 3.75ns added to every memory read is a substantial latency difference, surely enough to be visible in modern computers.
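
To spell out the arithmetic, here is a little back-of-envelope Python sketch. The 800MHz transfer rate and 16-bit channel width are the numbers above; treating the 64-bit bus as delivering the whole word in a single transfer is a simplification for illustration:

    # Back-of-envelope: one 64-bit transfer vs. four 16-bit packets at 800 MHz.
    TRANSFER_RATE_HZ = 800e6                      # 800 MHz effective transfer rate
    CLOCK_PERIOD_NS = 1e9 / TRANSFER_RATE_HZ      # 1.25 ns per transfer

    WORD_BITS = 64
    CHANNEL_BITS = 16
    transfers_needed = WORD_BITS // CHANNEL_BITS  # 4 transfers of 16 bits each

    # The wide bus is assumed to deliver all 64 bits on the first transfer; the
    # narrow channel needs three more transfers to finish the same word.
    extra_transfers = transfers_needed - 1        # 3
    extra_latency_ns = extra_transfers * CLOCK_PERIOD_NS

    print(f"Extra latency from packetizing: {extra_latency_ns:.2f} ns")  # 3.75 ns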

What's most important, the latency issue, measured in missed instruction cycles, gets worse and worse as we move to higher speed processors, which is exactly the territory that Rambus tells us will be the promised land of RDRAM performance heaven. (And by the way, I thought the promised land started at 500MHz, not 1.6GHz, or at least that is what Rambus was saying five years ago.)
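
As a rough sketch of that scaling, here is the same fixed 3.75ns penalty converted into stalled processor cycles at a few clock rates. The frequencies below are arbitrary sample points for illustration, not figures from Rambus or anyone else:

    # How a fixed 3.75 ns packetizing penalty scales with processor clock rate.
    # The clock rates below are arbitrary sample points, not vendor figures.
    EXTRA_LATENCY_NS = 3.75

    for cpu_ghz in (0.5, 1.0, 1.6, 2.0, 3.0):
        cycle_time_ns = 1.0 / cpu_ghz                 # processor cycle time in ns
        lost_cycles = EXTRA_LATENCY_NS / cycle_time_ns
        print(f"{cpu_ghz:.1f} GHz CPU: about {lost_cycles:.1f} cycles stalled per read")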

Funny thing about those religious leaders: their predictions always seem to keep postponing themselves out into the future.

-- Carl