Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Tenchusatsu who wrote (48964)8/4/2000 3:44:13 PM
From: jim kelley  Respond to of 93625
 
Ten

It would be interesting to know if Intel ever completed the specification for DDR-200 in servers.

Also, it is clear from the presentation notes that Intel plans to use RDRAM in the 1 to 15 GB storage range beginning in Q2-01.

It is interesting to note that we have been pounded with the "DDR is nigh" FUD message for the last six months, and it looks like they cannot deliver it in quantity on the desktop until Q2-01, if then.

I'm looking forward to the P4 launch!



To: Tenchusatsu who wrote (48964)8/4/2000 3:48:01 PM
From: Ali Chen  Respond to of 93625
 
<By the way, the foils are indeed from Spring IDF 2000. The date on the first slide is probably a typo>

The foils are indeed from Spring 2000, but the first slide date is a revelation. It means that Intel is touting the one-year-old crap to people, with the same outdated concepts and data in support.

DDR200 is utter nonsense. As you said yourself:

"..we are talking about the difference between core latency of the DRAM and average latency that the processor sees. The latter is what ultimately matters. Core latency does have an effect on average latency, but it is only one factor."

If Intel suggests going with DDR200, they are talking about an FSB at 100 MHz, so the performance will suck. Clean experiments published by Tom Pabst have shown that PC133 beats or equals DRDRAM, which proves that FSB133 is much more important than FSB100. Just like you said :)
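To put rough numbers on that FSB point (my own back-of-envelope arithmetic, assuming the usual 64-bit P6 front-side bus, not anything from Intel's foils):

fsb100 = 100e6 * 8 / 1e6   # 64-bit (8-byte) bus at 100 MHz -> 800 MB/s
fsb133 = 133e6 * 8 / 1e6   # 64-bit (8-byte) bus at 133 MHz -> about 1064 MB/s, roughly PC133's peak
print(fsb100, fsb133)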

So, Bilow is correct - the presentation is some old sloppy crap. Sorry to unfold it for you :)



To: Tenchusatsu who wrote (48964)8/4/2000 5:39:17 PM
From: charred water  Read Replies (2) | Respond to of 93625
 
Tenchusatsu, re: core latency

In this case, average latency is highly dependent on core latency, where PC133-based chipsets do very well.

The chipset may be more optimized, since it is an 'nth'-generation chipset for SDRAM, compared to a first-generation chipset for RDRAM in the 820.

However, the main latency discrepancy is probably that the chipset is benchmarked with the top bin split of PC133, that is, 2-2-2, which has 15 ns tRCD and 15 ns tCAC. The PC133 that is apparently shipped with all the OEM boxes is 3-3-3, which has roughly 50% more latency.

The RDRAMs shipped with systems have the equivalent of 22.5 ns tRCD and 24 ns tCAC. Nothing prevents faster cores from being used with RDRAMs, but it appears that the bulk of production for both SDRAMs and RDRAMs does not yet meet these faster core speeds.
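A quick back-of-envelope check of those timing figures (my own arithmetic, assuming exactly 133 MHz, not taken from any chipset document):

clock_ns = 1000.0 / 133.0   # one PC133 clock is about 7.5 ns
for name, cycles in [("PC133 2-2-2", 2), ("PC133 3-3-3", 3)]:
    print(name, "tRCD = tCAC =", round(cycles * clock_ns, 1), "ns")
# 2-2-2 works out to about 15 ns and 3-3-3 to about 22.6 ns (roughly 50% more),
# versus the 22.5 ns tRCD / 24 ns tCAC quoted above for shipping RDRAM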

A more apples-to-apples comparison of the 815 and the 820 would use PC133 3-3-3 SDRAM, representing the systems that are actually being shipped.

It is true that knowledgeable PC users can obtain the PC133 2-2-2 parts for a small premium. It is unlikely that the majority of consumers would do this.



To: Tenchusatsu who wrote (48964)8/4/2000 5:46:08 PM
From: blake_paterson  Read Replies (1) | Respond to of 93625
 
Any comments on the following points, anybody?

messages.yahoo.com

"Well, one nice thing about the communications apps is that their stuff is (A) very proprietary [they probably only have one or two fabs making their PCBs], (B) very controlled [they only have one design for the PCB], and (C) they can solder down parts, and need fewer of them. At least, I think they're generally looking at 32-64M of RAM on most of those. It is amazing how much difference this makes in terms of ease of design and stuff. That's partly why we have reliable 366Mhz DDR on video cards [soldered down, reference design], and not on motherboards. High-volume manufacturing from 20 different motherboard houses, with varying levels of quality, but each doing darn near anything they can to save a few pennies per motherboard, is REALLY REALLY hard when you have tight signal tolerances and you have guaranteed discontinuities in the signal paths [connectors!].

The biggest advantage I see in the RAMBUS technology is the fact that the maximum bandwidth is available with a single chip. To get 1.6GB/sec from DDR you would need a 32-bit wide, 200MHz [400MT] DDR chip. Although they make such things, it does have significantly more pins than the RAMBUS chip does.

This is one of the reasons you would want it in a PC as well, because a standard chipset doesn't have much [logic] to it, and if you were to put a 250-pin simple chipset device on a 0.18u process you would be entirely pad-limited -- a waste of manufacturing resources. That's partly why a lot of manufacturers are integrating more and more stuff onto their chipsets [like 3d accelerators, LAN, etc]. I wouldn't be surprised if we see a lot more UMA-type devices [ala XBOX] with extremely high-performance memory subsystems and CPU controllers. Lots of advantages."
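For what it's worth, here is the arithmetic behind the 1.6GB/sec figure in that quote (my own sketch, assuming a 16-bit PC800 RDRAM channel):

rdram_bw = 2 * 800e6 / 1e9   # 16-bit (2-byte) channel at 800 MT/s -> 1.6 GB/s from one chip
ddr_bw   = 4 * 400e6 / 1e9   # 32-bit (4-byte) DDR at 200 MHz (400 MT/s) -> 1.6 GB/s
print(rdram_bw, ddr_bw)      # 1.6 1.6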

BP



To: Tenchusatsu who wrote (48964)8/5/2000 6:12:31 AM
From: Bilow  Read Replies (1) | Respond to of 93625
 
Hi Tenchusatsu; I went back through the document, and it provides no reasons why DDR shouldn't hit the server market next year. The biggest digs they have are that some of the DDR specs are still being changed. All that amounts to is the memory makers and chipset people honing the specs so that they get the maximum yields and bin splits. Intel makes much of these changes, but they are not that big of a deal. If that were the worst thing designers have to put up with, my designing life would be total heaven. In fact, didn't Intel modify the RIMM specifications a few months ago and issue a big press release about it? The time to make these kinds of changes is before production starts, and that is what they are doing.

Hey, Intel is supporting DDR in servers, what more needs to be said about that?

But about latency... I still don't like Intel's tendency to confuse the latency issue by hiding it in a measurement that is basically one of bandwidth, but they seem to have succeeded in changing the discussion from one of "latency and bandwidth" to one of "average latency". In some sense this is good, since this is what the system really sees. But they've been using it to beat up PC100/PC133 with the RDRAM club. RDRAM needs to be compared to DDR, not SDRAM.

Of course the "average latency" of PC133 bites when it is being run close to its bandwidth limit. This statement applies to every memory technology ever. If they had run their graph up to where RDRAM's bandwidth limit is, it would have gone through the roof, too. DDR does considerably better, since its bandwidth limit is up around RDRAM's. But the basic fact is that Intel is going with DDR for servers.
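A toy model of that point (my own illustration, not Intel's methodology): treat the memory system as a simple queue, and average latency blows up as offered load approaches the bandwidth limit, whatever the technology.

def avg_latency(idle_latency_ns, utilization):
    # crude M/M/1-style approximation: latency grows as 1 / (1 - utilization)
    return idle_latency_ns / (1.0 - utilization)

for u in (0.2, 0.5, 0.8, 0.95):
    print(f"{u:.0%} of peak bandwidth -> {avg_latency(40, u):.0f} ns average")

The wall has the same shape for PC133, DDR, and RDRAM; only its position (the bandwidth limit) moves.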

I believe that the current situation is one where memory bandwidth is not a critical bottleneck for single-processor system performance. I have no doubt that this is why Intel is willing to accept being forced to use SDRAM with the P4. Memory bandwidth not being the major system bottleneck may now be the official Intel line. Did you catch this quote from Gelsinger:
"The high cache memory [included] with Pentium 4 ameliorates the [data rate] difference with SDRAMs. It makes a slow memory look fast." #reply-14168269

In other words, because of cache, DRAM bandwidth is not a system performance bottleneck, and therefore the very high "average latency" that SDRAM gives at high bandwidth does not obtain in real systems. (Of course anyone can write a synthetic benchmark and prove most anything.) That the FSB of the P3 matches the bandwidth of PC133 SDRAM is not a big coincidence, nor is it an indication that both bandwidth limitations are significant bottlenecks.
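Gelsinger's point can be sketched with the standard average-memory-access-time formula (the numbers below are purely illustrative guesses of mine, not measurements):

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    # average memory access time as the processor sees it
    return hit_time_ns + miss_rate * miss_penalty_ns

print(amat(3, 0.02, 60))   # big cache in front of slow SDRAM  -> about 4.2 ns
print(amat(3, 0.02, 45))   # same cache in front of faster DRAM -> about 3.9 ns

With a low miss rate, a large difference in DRAM latency barely moves the average the processor actually sees.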

With the P3 and a 133 MHz x 8-byte FSB (and a single-processor system), it is obvious that SDRAM provides sufficient bandwidth, but the P4 is another story. Maybe Intel's decision to support SDRAM in the P4 is an admission that neither the DRAM bandwidth nor the FSB bandwidth was a significant (i.e., 50%) system performance bottleneck.

One thing to note is that the situation is completely reversed in the Nvidia GeForce chipsets, which are totally memory-bandwidth bound (and use DDR). This is proved by the overclockers, who get no performance gains by overclocking the processor, but nearly 1-to-1 performance gains by overclocking the memory. With x86 processors, the tendency is the opposite: overclocking the memory provides almost no performance gain, while overclocking the processor is huge.

My guess is that if we could clock current processors at around 2 to 3 GHz, they would be bottlenecked roughly equally (50% each) by memory bandwidth and CPU clock. That is, at around that speed, overclocking either the memory or the processor by a small percentage should give a system performance improvement of about half that. But that is a long way away, except in multiprocessor systems. DDR should take us to around 5 GHz or so.
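Here is a crude, Amdahl-style way to quantify that 50/50 claim (my own sketch, with made-up fractions): if a fraction of execution time is bound by memory bandwidth and the rest by the CPU clock, speeding one side up by a small percentage buys you roughly that fraction of the percentage overall.

def overall_gain_pct(fraction_bound, overclock_pct):
    # fraction_bound: share of run time limited by the part you overclock (assumed value)
    return fraction_bound * overclock_pct

print(overall_gain_pct(0.05, 10))   # today's desktop x86: +10% memory clock -> ~0.5% overall
print(overall_gain_pct(0.50, 10))   # the hypothetical 2-3 GHz balance point: +10% on either side -> ~5%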

Comments?

-- Carl