Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: C_Johnson who wrote (25851)7/28/1999 2:55:00 PM
From: unclewest  Respond to of 93625
 
>>the Rambus article we released in the June Issue of our Monthly Letter<<

carl,
that was a good idea. thank you.
i think we all prefer to discuss rmbs in the wide open.
please note, there is no copyright or restrictive info anywhere in the report.
unclewest



To: C_Johnson who wrote (25851)7/28/1999 4:01:00 PM
From: Al Serrao  Read Replies (1) | Respond to of 93625
 
Carl, many thanks for sharing your report with us. Do you believe that, with better data from RMBS or INTC sources, your report could have been positive? In other words, could your work be premature at this stage of the game? Thanks again.



To: C_Johnson who wrote (25851)7/28/1999 4:47:00 PM
From: Ian Anderson  Read Replies (1) | Respond to of 93625
 
Carl

I believe your comments on the performance of RDRAM are extremely misleading. There is a detailed analysis of the most common memory operation in PCs, fetching 32 bytes of data to the cache after a cache miss, at

usa.samsungsemi.com

This shows that this operation takes:

PC100   90 ns
PC133   75 ns
Rambus  70 ns

So Rambus is ahead of PC133 by a nose on the first operation.

What they don't go on to show in this paper is what happens when the next cache line is fetched. For PC100 and PC133, you have to send the column address again, so the next line takes the same time as the first. Rambus, on the other hand, does not need to send the address again. In fact, it overlaps the next memory read with the data transfer of the previous one, so the next data is ready to clock out 20 ns after the previous transfer ends, and transferring the data takes 18.75 ns.

So the next 32 bytes take:

PC100   90 ns
PC133   75 ns
Rambus  38.75 ns

Rambus can sustain very close to double the throughput of PC133!
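
To make the arithmetic explicit, here is a minimal Python sketch that just replays the figures quoted above (the latencies are the assumptions from the Samsung analysis and this post, not independent measurements):

# Cache-line (32-byte) fetch timings quoted above, in nanoseconds.
first_fetch_ns = {"PC100": 90.0, "PC133": 75.0, "Rambus": 70.0}

# PC100/PC133 must resend the column address, so a back-to-back fetch
# costs the same as the first.  Rambus overlaps the next read with the
# previous transfer: data is ready 20 ns after the previous transfer
# ends, and moving the 32 bytes takes 18.75 ns.
next_fetch_ns = {"PC100": 90.0, "PC133": 75.0, "Rambus": 20.0 + 18.75}

for mem, t in next_fetch_ns.items():
    sustained_mb_s = 32 / (t * 1e-9) / 1e6   # 32 bytes per fetch
    print(f"{mem:7s} back-to-back: {t:6.2f} ns  sustained: {sustained_mb_s:6.1f} MB/s")

print("Rambus vs PC133:", round(next_fetch_ns["PC133"] / next_fetch_ns["Rambus"], 2), "x")

The ratio works out to about 1.94x, which is where the "very close to double" figure comes from.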

A 100% improvement will be available in Q3 with the Carmel chipset which supports two Rambus channels. You can't do that with PC133.

There is scope for a further 100% improvement if the speed of the RAM core is improved with smaller process geometry.

Please don't confuse the market with your half-baked research.

Ian



To: C_Johnson who wrote (25851)7/28/1999 5:51:00 PM
From: J_W  Read Replies (1) | Respond to of 93625
 
Carl,

Thanks for posting your report on the thread.

Your report is strangely devoid of anything related to Timna. This may be due to time frames: while your report was released in your June issue, it was probably written some time before. In any case, Intel made a press release about Timna in May, so this information was known before the report was released. Ignoring late-breaking news should never be done by any advisory publication.

The Timna concept is a very important one. One of the reasons Intel went with Rambus is the low pincount. (In your report you refer to the pins as "high-speed data channels.") Pincount is very important if the memory controller is to be moved from the chipset to the processor.

The ability to build low-end PCs using minimal components is very desirable. Due to the high pincount, putting an SDRAM memory controller on the processor chip is not a very good solution. In addition, you must still use 8 SDRAM chips in a minimal system. With Timna, a minimal system could consist of a processor chip with one RDRAM chip. Form factors for PCs would shrink considerably.
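
As a rough illustration of the pincount point, here is a back-of-the-envelope comparison. The signal counts are my own ballpark estimates for 1999-era parts, not figures from the report:

# Approximate signal counts (ballpark estimates) for a 64-bit SDRAM
# interface versus a single Direct Rambus channel.  The narrow channel
# is what makes it practical to move the memory controller onto the
# processor die, as with Timna.
sdram_pins = {
    "data (64-bit bus)": 64,
    "address":           14,
    "control/clocks":    20,   # RAS/CAS/WE, chip selects, DQM, CKE, clocks
}
rdram_pins = {
    "data (2 x 9 bits)": 18,
    "row/column":         8,
    "control/clocks":     8,   # differential clocks plus serial init pins
}
print("SDRAM interface, approx. signals:", sum(sdram_pins.values()))   # ~98
print("Rambus channel,  approx. signals:", sum(rdram_pins.values()))   # ~34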

This shows Sherry Garber's assumption that Rambus will only be used for "high-end system and server sales" to be in error, not to mention the 9% market-penetration figure for 2003.

I would appreciate your comments on this.

Regards,

Jim



To: C_Johnson who wrote (25851)7/29/1999 12:38:00 AM
From: Alan Bell  Read Replies (1) | Respond to of 93625
 
As several other posters have responded, the granularity argument is a critical reason why RDRAM will become essential for low-end systems. In the following EEtimes article, two semiconductor companies express the importance of RDRAM for exactly this reason.

techweb.com

An excerpt:


Like Fujitsu, Toshiba is also looking to put 512-Mbit DRAMs on its road map. That density is particularly well-suited for Direct Rambus DRAMs because it will provide the right minimum granularity requirements, Toshiba's Kuyama said.

'Too big'

"One CPU, a chip set and one Rambus DRAM can make up one system with 64 Mbytes of memory," he said. "But in the case of SDRAM, it's going to be 256 Mbytes minimum for a x16 size, and that's too big."





To: C_Johnson who wrote (25851)7/29/1999 12:52:00 AM
From: Alan Bell  Respond to of 93625
 
Carl,

Again, thanks for posting your article.

Another important positive point that your article fails to mention is Rambus's substantial patent portfolio. For a number of years, they have been innovating improvements to RAM technology as they designed RDRAMs, almost unchallenged. They have patented many new ideas along the way.

Even without examining the details, it is hard to imagine a new RAM interface technology that won't be hit by this intellectual property. This is particularly true in the narrow, high-speed bus area.

So if a competitor comes along with a new technology, they could easily find themselves paying a typical license fee of 2%. Rambus could get their money either way!

-- Alan