Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Alan Bell who wrote (5617), 7/12/1998 3:29:00 PM
From: NightOwl
 
Alan, Guy and all,

Thanks for your comments, although I have to say, Alan, that most of what you said has left me in the dirt - so to speak.

From what you say, I gather that the RMBS interface is the central advantage - that the means of connection is the important thing? If so, I would certainly agree.

On the question of latency I still have more questions than answers. If I understand Alan's comments on this point, I think I agree: L1 and L2 caching is still the method of choice for hiding latency. And then there is always the possibility of designing main memory with a direct link to the CPU - something like a giant L1.
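
For anyone who wants a number to hang on that, here is a rough back-of-the-envelope sketch, in a few lines of Python, of why caching hides latency. All of the figures (10 ns cache hit, 60 ns DRAM access, 95% hit rate) are my own assumptions for illustration, not anything quoted in this thread:

# Average-memory-access-time sketch; every number is an assumed,
# illustrative figure, not a spec quoted in this thread.
cache_hit_ns = 10.0   # assumed L1/L2 hit latency
dram_ns = 60.0        # assumed main-memory (DRAM) access latency
hit_rate = 0.95       # assumed cache hit rate

# Classic formula: hit time plus the miss penalty weighted by the miss rate.
amat_ns = cache_hit_ns + (1.0 - hit_rate) * dram_ns
print("average access time: %.1f ns" % amat_ns)   # ~13 ns vs. 60 ns uncached

With a decent hit rate, the average access time stays close to the cache speed, which is why caches remain the method of choice even as raw DRAM latency barely improves.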

But those options don't seem viable in the intermediate term, assuming CPUs keep getting faster at an ever quicker pace. L1 and L2 caches already seem to be getting out of hand, size-wise, at the upper end of the PC market. And connecting main memory directly to the CPU would seem to raise major issues for the entire motherboard as far as peripheral connections and timing are concerned.

And because, as Alan points out, control of the interface is the major concern, neither INTC nor RMBS may care much how latency is dealt with.

Whichever way the latency solution goes, it seems to me that RMBS will get a major portion of the PC market. Barring some unforeseen performance boost in the SyncLink design (SmartMod shows numbers for that bus that are good, but they aren't DRDRAM killers), I have to assume RMBS' control of the main memory interface will be practically unchallenged by the time CPU speeds reach 600+ MHz. As a result, why should RMBS care whose latency solution is adopted? They can go whichever way the industry chooses, because everyone's solution will have to go through their bus. Unless I am missing something, which is entirely possible.
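
Just to put that 600+ MHz figure in perspective, here is the same kind of back-of-the-envelope arithmetic in Python, again using an assumed 60 ns DRAM latency rather than any quoted spec:

# Back-of-the-envelope: how many CPU cycles one uncached DRAM access costs.
# Both numbers are assumed, illustrative figures.
cpu_mhz = 600
dram_latency_ns = 60.0
cycle_ns = 1000.0 / cpu_mhz               # ~1.67 ns per clock at 600 MHz
stall_cycles = dram_latency_ns / cycle_ns
print("%.0f cycles per uncached access" % stall_cycles)   # ~36 cycles

At several dozen wasted cycles per trip to main memory, the latency problem stops being academic, which is presumably the point at which the industry gets forced to choose.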

I suppose the industry, being what it is, will "choose" only when forced to do so, which I assume is when we users have CPU speeds or robust apps that demand the latency problem be addressed.

Ah! I see I have come full circle. Which is as good a reason as any to go back to lurking. I will be following your developments and may drop in again if no one minds.