Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Estephen who wrote (44083), 6/11/2000 4:51:00 PM
From: jim kelley
 
RE: DDR is a dead end technology for the desktop.

I guess Carl must have missed a few issues in his analysis of DDR MOBO desktop designs. Just think how bad this gets if you perform the same analysis for DDR-II!

DDR is a dead end technology for the desktop. We have not yet seriously looked at it for mobile applications.



To: Estephen who wrote (44083), 6/11/2000 5:54:00 PM
From: Estephen
 
WHY DDR IS EXPENSIVE 3RD-RATE TECHNOLOGY... AMD will never use it...

Processor Front-side Busses

dramreview.com

The processor front-side bus (FSB) has been losing the MHz race to the CPU, and badly.
The fastest Pentium III and Athlon CPUs are now running almost ten times faster than their
FSB interface to main memory. To avoid diminishing returns with increasing CPU speeds,
this ratio must be reduced.

Until recently, there wasn't much sense in increasing the frequency of the FSB, because the
memory system couldn't keep up anyway. Athlon has a 200 MHz FSB, but it can't do much
connected to 100 MHz PC100 memory. Only now, with the commercial availability of
RDRAM, and soon DDR, does a faster FSB make sense.

The math seems simple enough. A 400 MHz FSB, 64 bits wide, will require 3.2 GB/s of main
memory bandwidth. That's two Rambus channels or a 128 bit DDR bus. But is either of
these a desktop product? What about the reality of fitting that many signals into a small
form factor chassis, a 4 layer motherboard, and cheap IC packages?
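
As a quick check on that math, here's a minimal Python sketch. The 3.2 GB/s FSB figure and the two memory options come from the paragraph above; the DDR clock assumes PC1600-class parts (100 MHz clock, two transfers per clock), which is an assumption on my part:

```python
# Peak bandwidth = clock rate x transfers per clock x bytes per transfer.
def peak_gb_per_s(clock_mhz, bus_bits, transfers_per_clock=1):
    return clock_mhz * 1e6 * transfers_per_clock * (bus_bits / 8) / 1e9

fsb   = peak_gb_per_s(400, 64)         # 400 MHz, 64-bit FSB         -> 3.2 GB/s
ddr   = peak_gb_per_s(100, 128, 2)     # 128-bit DDR, 100 MHz clock  -> 3.2 GB/s
rdram = 2 * peak_gb_per_s(400, 16, 2)  # two 16-bit Rambus channels at 400 MHz,
                                       # double data rate            -> 3.2 GB/s
print(fsb, ddr, rdram)                 # all three work out to 3.2 GB/s
```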

The memory system parameter that has the single biggest influence on motherboard
economics is the pincount of the North Bridge. This affects the die size and package cost of
the North Bridge, as well as the routeability and layer count of the motherboard. To compute
the number of pins needed on a DDR North Bridge, we first have to understand the cause of
one of the fundamental challenges of DDR imposed by its SDRAM legacy -- the 2-cycle
addressing problem.

2-cycle addressing problem

In a matrix topology like that used in SDRAM and DDR memory systems, large numbers of
DRAMs are driven in parallel in order to obtain high bandwidths. Although this has some
benefits, one of the major engineering challenges of this architecture is driving the very
large loads placed on the address and control lines. Current SDRAM systems deal with this
by using 2-cycle addressing, where the address and control lines are driven for two full
clock cycles, allowing time for the signals to propagate and settle at a stable value.

You can find a timing diagram explaining this at www.hardwarecentral.com. More detail is
available in a timing analysis that Micron presented at the April '99 VIA technology forum.

In the Micron analysis, 32 loads per wire are assumed on the address bus. This is normally
two DIMMs with 16 devices each. Both VIA and Intel provide a separate copy of the
address lines on their SDRAM North Bridges for every two DIMMs, in order to ensure that
the loading does not exceed 32 devices.

In SDRAM systems, 2-cycle addressing works because the bandwidth of the address bus
and the data bus are matched. It takes four clocks for the controller to transfer a full address
-- 2 cycles for the row address and 2 cycles for the column address -- to the DRAM array,
and it takes four clocks for the DRAM array to transfer 32 bytes back to the controller.

This becomes a problem for DDR systems, which double the bandwidth of the data bus
while the address bus bandwidth remains unchanged. With DDR it only takes 2 clocks to
transfer the data, but it still takes four clocks to transfer the address. This leaves the memory
system address-bandwidth bound, so the sustainable bandwidth of a DDR system is
no higher than that of an SDRAM system.
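
A back-of-the-envelope model of that limit, using the clock counts from the last two paragraphs. It ignores bank interleaving, precharge, and refresh, so treat it only as a sketch of the steady-state ceiling:

```python
ADDR_CLOCKS = 4   # 2-cycle addressing: 2 clocks row + 2 clocks column

def sustained_gb_per_s(clock_mhz, data_clocks_per_32_bytes):
    # Each 32-byte transaction costs whichever is larger: the clocks the
    # address bus needs or the clocks the data bus needs.
    clocks = max(ADDR_CLOCKS, data_clocks_per_32_bytes)
    return 32 * clock_mhz * 1e6 / clocks / 1e9

print(sustained_gb_per_s(100, 4))  # SDRAM: 4 data clocks, bus fully used -> 0.8 GB/s
print(sustained_gb_per_s(100, 2))  # DDR: 2 data clocks but address-bound -> 0.8 GB/s
```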

The solution is to implement single cycle addressing in the DDR system. To do this, the
loading on the address lines must be decreased. This is exactly what buffered DIMMs do -
they decrease the loading on the address lines by providing a register between the North
Bridge and the DRAMs. This is an acceptable solution for servers, but for high performance
desktop systems it is unattractive because of the increased latency due to the buffer chips,
the added $15 in cost, and the 50% larger modules.

For desktop systems the preferred solution is to provide an individual copy of the address
lines to each DIMM, while simultaneously limiting the total loading to 16 devices per
module. This appears to be the most promising approach, although there are no systems
available yet to demonstrate that this allows worst-case timing to be met.

North Bridge pincount analysis

How many pins on a North Bridge are needed to support a 3.2 GB/s interface? Let's take the
easy case of RDRAM first. A single Rambus channel is supported by a controller macro in
the North Bridge which contains all the address, control, data, power and ground signals
needed. According to LSI Logic, this is 76 pins. A 3.2 GB/s memory system will require two
Rambus channels, or 152 pins.

To figure the total pins needed to implement a 3.2 GB/s DDR interface, let's start with a
64 bit SDRAM interface. At their April '99 technology forum, VIA presented a routing
schematic for a PC133 system that breaks the signals into two groups: data and address.
Using this as a starting point, we add 8 more signals to the data path so that ECC can be
supported, and then estimate another 21 power and ground pins at a 4:1 ratio.

The address group is doubled for DDR over PC133 to 40, in order to provide an individual
set of address and control lines to each DIMM. We are assuming that only 2 DIMMs will be
supported in order to enable single cycle addressing while still keeping the total number of
signals under control. Another 14 signals are dedicated to power and ground for the address
group, this time at a 3:1 ratio due to the higher currents needed to switch the heavily loaded
address bus.

Finally, there are some clock and clock control signals that are sourced by the North
Bridge, bringing the total number of pins for a 64 bit DDR bus that supports ECC to 166. A
3.2 GB/s memory system will require all of these signals to be duplicated, resulting in 332
total pins.
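
Tallying both interfaces in Python, using the group counts quoted above. The clock-group figure of 19 isn't stated explicitly in the article; it's just the remainder needed to reach the 166-pin total:

```python
# 64-bit DDR interface with ECC, per the group counts in the text.
data     = 64 + 8    # data bits plus ECC
data_pg  = 21        # data power/ground at roughly 4:1
address  = 40        # address/control group, one copy per DIMM (2 DIMMs)
addr_pg  = 14        # address power/ground at roughly 3:1 (heavier loading)
clocks   = 166 - (data + data_pg + address + addr_pg)     # -> 19

ddr_64bit  = data + data_pg + address + addr_pg + clocks  # 166 pins
ddr_128bit = 2 * ddr_64bit                                # 3.2 GB/s needs two: 332

rdram_dual = 2 * 76  # two Rambus channel macros at 76 pins each: 152

print(ddr_128bit, rdram_dual)  # 332 vs 152 North Bridge memory pins
```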

Memory system summary

Both RDRAM and DDR memory systems will require 6 layer boards to route signals and fit
the modules into the smallest possible form factor. Loading constraints limit each system to
four modules, with a maximum of 16 devices per module. Using 128 Mb devices, this is a
maximum memory capacity of 1 GB per system.
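
The capacity limit follows directly from those loading constraints:

```python
modules      = 4     # loading limit per system
devices      = 16    # maximum devices per module
device_mbits = 128   # 128 Mb DRAMs

capacity_mb = modules * devices * device_mbits // 8  # megabits -> megabytes
print(capacity_mb)   # 1024 MB, i.e. 1 GB per system
```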

The two types of memory systems are remarkably similar, differing only in the pincount of
the North Bridge.

AMD presented statistics at HotChips '99 detailing the pincount and die size for various
North Bridge memory configurations, including SDRAM, DDR, and RDRAM. Because North
Bridge components are generally pad-limited, the number of I/Os on the chip directly affects
the die size. AMD's data showed that a 64 bit DDR North Bridge has a die size 25% larger
than an equivalent single channel RDRAM device. Extrapolating from that data to include
the additional pins required to implement a 3.2 GB/s interface would indicate that a 128 bit
DDR North Bridge would have a die size approximately 60% larger than an equivalent dual
channel RDRAM device.
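
To see how such an extrapolation works: on a pad-limited die the edge length is set by the number of pads per side, so die area grows roughly with the square of total pincount. The sketch below only illustrates that scaling; the ~690 non-memory pins is a value I've back-solved so the model reproduces the quoted 25% for the single-channel case, not a number from the article, and the model then predicts roughly 47% for the dual-channel case, somewhat below the article's 60%:

```python
# Pad-limited scaling: die area ~ (total pins)^2.
# OTHER_PINS (CPU bus, AGP, PCI, power) is an assumed calibration value,
# chosen so the single-channel comparison matches the quoted 25%.
OTHER_PINS = 690

def area(memory_pins):
    return (memory_pins + OTHER_PINS) ** 2

for label, ddr, rdram in [("64-bit DDR vs 1-ch RDRAM: ", 166, 76),
                          ("128-bit DDR vs 2-ch RDRAM:", 332, 152)]:
    print(label, f"{(area(ddr) / area(rdram) - 1) * 100:.0f}% larger")
# -> about 25% and 47%; AMD's own data evidently scaled somewhat more steeply.
```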

Conclusion

Clearly, there is no SDRAM solution for the 3.2 GB/s of bandwidth needed for the next
generation of high speed front side busses coming from AMD and Intel. Willamette will be
paired up with Tehama, a two channel Rambus North Bridge. To remain competitive, AMD is
going to need an equivalent solution, either DDR or RDRAM.