To: Investartist who wrote (30242) 9/23/1999 9:44:00 AM From: Dave B
Investartist, it seems to me that the Intel article is something we need to have directly on the thread:

-------------------------------------

Workstation applications put unique demands on system memory. By their very nature, workstation applications must quickly move very large amounts of data in and out of the processor. Data types such as 3D graphics, video, and animation can be hundreds of megabytes, if not gigabytes, in size. Getting all that information out of system memory and into the graphics subsystem poses a real challenge. It is hardly surprising, therefore, that the performance of workstation software tends to be more memory bound than that of productivity applications.

Working with industry vendors, Intel has adopted a solution that will enable high-performance memory access without resorting to exotic and expensive memory technologies. Direct Rambus DRAM, also known as Direct RDRAM, is a fast, serial memory technology developed by Rambus. This technology, which will appear in upcoming Intel® Architecture-based workstations, raises memory bandwidth from 800MBps for SDRAM to 3.2GBps for dual-channel RDRAM.

The RDRAM Difference

RDRAM differs from today's prevalent synchronous DRAM (SDRAM) in that it uses a thin-and-fast serial connection to move data between the system and memory. Measuring 16 bits wide and ticking away at 800MHz, a single channel of RDRAM memory can provide 1.6GBps of throughput to main system memory. Today's PC100 SDRAM, by contrast, produces 800MBps over a wide, 64-bit connection running at 100MHz.

The advantages go beyond bus widths and clock ticks. The efficient RDRAM interface means that the effective sustained bandwidth (as opposed to the mathematical optimum, or "peak bandwidth") is proportionally higher than that of SDRAM. "The protocol has been optimized so that sustained bandwidth can be up to 90 percent of the peak bandwidth," says Kuljit Bains, platform architect for the Intel Workstation Products Group. "If you take SDRAM designs and run the same application, your sustained bandwidth from the memory is about 65 percent."

Not only does RDRAM provide headroom for intensive streaming media transactions, it also reduces latency, shortening the time it takes to complete memory transactions. What's more, the channeled architecture of the RDRAM interface means that throughput increases as more channels are added. Intel®-based, workstation-class motherboards will employ two channels of RDRAM to deliver 3.2GBps of bandwidth.

By contrast, signaling and timing issues limit clock speeds on the 64-bit SDRAM interface to about 133MHz. Above that level, the varying lengths and electrical loads of SDRAM bus pins make it difficult for system chipsets and memory to reliably distinguish signals. Widening the SDRAM bus is not an attractive option either, as a 128-bit interface would double existing pin counts on the memory bus to over 200 pins, requiring more power as a result.

Managing Transitions

Direct RDRAM will be introduced on workstation motherboards featuring the Intel® Pentium® III Xeon™ processor in Q3 1999. These motherboards will run at 133MHz and support up to 4GB of RDRAM. The memory itself is packaged much like the dual inline memory modules (DIMMs) used with SDRAM, in a format called RDRAM inline memory modules (or RIMMs). The RIMMs and RIMM sockets will operate within the mechanical and thermal envelope of today's SDRAM designs, easing motherboard and system design.
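[A quick back-of-the-envelope check on the bandwidth figures quoted above. This is my own arithmetic in Python, not part of the Intel article; the 90 and 65 percent efficiency numbers are the ones Bains gives, and the widths and clocks are as described.]

# Peak bandwidth = bus width (in bytes) x effective clock rate, in MBps.
# Figures taken from the article; efficiency factors are the
# sustained-vs-peak percentages quoted by Intel's Kuljit Bains.

def peak_bandwidth_mbps(bus_width_bits, effective_clock_mhz):
    """Peak transfer rate in MBps for a simple width-times-clock bus."""
    return (bus_width_bits / 8) * effective_clock_mhz

# Single-channel Direct RDRAM: 16 bits wide at an effective 800MHz.
rdram_peak = peak_bandwidth_mbps(16, 800)    # 1600 MBps = 1.6GBps
dual_rdram_peak = 2 * rdram_peak             # 3200 MBps = 3.2GBps

# PC100 SDRAM: 64 bits wide at 100MHz.
sdram_peak = peak_bandwidth_mbps(64, 100)    # 800 MBps

# Sustained bandwidth, using the efficiencies cited in the article.
rdram_sustained = rdram_peak * 0.90          # ~1440 MBps per channel
sdram_sustained = sdram_peak * 0.65          # ~520 MBps

print(rdram_peak, dual_rdram_peak, sdram_peak)
print(round(rdram_sustained), round(sdram_sustained))

[So a single sustained RDRAM channel delivers nearly three times what sustained PC100 SDRAM does, before you even add the second channel.]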
To ease the transition to RDRAM, Intel will provide compatibility with SDRAM. For those more concerned about standardizing on an existing memory platform, SDRAM compatibility lets you buy into the most advanced platform now and make the move to RDRAM later. One thing is for sure: advances in the workstation platform make the transition to RDRAM a compelling one. Consider the various interfaces that draw upon system memory and the bandwidth demands they make:

I/O Subsystem      Maximum Bandwidth
Front-side bus     1GBps
AGP 4X             1GBps
64/66 PCI          512MBps
32/33 PCI          133MBps
Total              2.64GBps

Table 1: The aggregate bandwidth of the I/O subsystems demands a memory interface with bandwidth much higher than the 800MBps available with today's SDRAM technology.

Of course, it's unlikely that every I/O subsystem will be running at peak capacity at the same time. But the need to provide headroom, particularly between the processor front-side bus and the graphics subsystem, is undeniable. As faster front-side bus designs emerge, it becomes clear that SDRAM cannot meet these performance demands.
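[And a quick tally of Table 1. Again, this is my own arithmetic rather than anything from the article; I'm taking the per-interface figures as listed and treating 1GBps as 1000MBps, which matches the 2.64GBps total Intel quotes.]

# Aggregate peak demand from the I/O subsystems in Table 1, in MBps.
io_bandwidth_mbps = {
    "Front-side bus": 1000,
    "AGP 4X": 1000,
    "64/66 PCI": 512,
    "32/33 PCI": 133,
}

total = sum(io_bandwidth_mbps.values())   # 2645 MBps, in line with the 2.64GBps in Table 1
sdram_peak = 800                          # PC100 SDRAM peak, MBps
dual_rdram_peak = 3200                    # dual-channel Direct RDRAM peak, MBps

print(total, total / sdram_peak, total / dual_rdram_peak)
# Aggregate demand is over 3x what SDRAM can deliver even at peak,
# but fits within the dual-channel RDRAM budget.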