Technology Stocks : Texas Instruments - Good buy now or should we wait?

To: johnny boy who wrote (3070)2/14/1998 10:13:00 PM
From: TREND1  Read Replies (2) of 6180
 
Will TXN keep up in the DRAM race? I have my doubts!
Read at least the two bold parts of this article.
I do not see TXN mentioned in this article!
Larry Dudash
SDRAM detour defined for PC's Rambus route
David Lammers and Anthony Cataldo
San Francisco - The road to bringing 800-MHz Direct Rambus memory
technology into the PC mainstream should become clearer at the Intel
Developers Forum, set to begin tomorrow in San Jose, Calif. Intel
Corp. is expected to describe a plan to put synchronous DRAMs on a
100- or 133-MHz Rambus module, effectively making SDRAMs mimic the
Rambus architecture. At the same time, Intel is considering adding a
66-MHz SDRAM specification to its soon-to-be-announced 440BX chip
set, which initially earmarked only 100-MHz SDRAMs.
Intel is being pushed to adopt contingency plans as plummeting SDRAM
prices make it increasingly unlikely that Direct RDRAMs, which carry a
larger die size, more expensive packaging and testing, and Rambus
royalty fees, will gain a quick foothold in the mainstream desktop
market (see Feb. 2, page 4). And with sub-$1,000 PCs gaining in
popularity, Intel must guard against rivals armed with lower-cost
memory solutions and chip sets.
Originally, Intel planned to move the PC industry from 66-MHz to
100-MHz SDRAMs in early 1998, followed by a quick shift to 800-MHz
Direct RDRAMs in 1999. But even as Intel engineers acknowledge that
some delay may occur in the Rambus program, all the reasons that
originally led the company to support the Rambus approach, including
the smaller pin count and much higher typical bandwidth, remain in
force.
The hybrid SDRAM/Rambus interim solution involves the use of the
fastest available SDRAMs on a Rambus in-line memory module (RIMM).
Intel would add a transceiver IC on the RIMM to translate the data
output of the SDRAM to a Rambus ASIC cell, or RAC, on the control
logic, several sources said here at the International Solid-State
Circuits Conference last week. SDRAMs normally talk to a 64-bit data
bus, while the Rambus approach uses a 16-bit bus, or 18 bits when
error-correcting code (ECC) is used.
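To get a rough sense of the mismatch the transceiver has to bridge, the
peak rates of the two bus styles can be worked out from the widths and
clock rates quoted above. The sketch below is back-of-envelope
arithmetic only; it does not model the actual RIMM transceiver, and the
per-pin rates are the article's figures.

# Peak-bandwidth arithmetic for the two bus styles described above.
# The widths and rates (64-bit SDRAM bus at 100 MHz; 16-bit Rambus
# channel at 800 Mbits/s per pin) are the article's figures; the RIMM
# transceiver itself is not modeled here.

def peak_bandwidth_mbyte_s(bus_width_bits, transfer_rate_mhz):
    """Peak bandwidth in Mbytes/s for a parallel bus."""
    return bus_width_bits * transfer_rate_mhz / 8

sdram_100 = peak_bandwidth_mbyte_s(64, 100)   # PC/100 SDRAM: 800 Mbytes/s
rdram_800 = peak_bandwidth_mbyte_s(16, 800)   # Direct Rambus: 1,600 Mbytes/s

print(f"PC/100 SDRAM bus : {sdram_100:.0f} Mbytes/s over 64 data pins")
print(f"Direct Rambus    : {rdram_800:.0f} Mbytes/s over 16 data pins")

The transceiver therefore has to buffer wide, slower SDRAM bursts and
re-serialize them onto a channel running eight times the per-pin rate.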
Critics of the approach said the extra logic would add cost and
introduce a delay of as much as 10 ns between the memories and the
processor.
At the same time, Intel is expected to build hooks into its
next-generation memory controller, the 440BX, that would allow PC OEMs
to use different speed grades of both SDRAMs and Direct RDRAMs.
Intel originally planned to support only its own version of the
100-MHz SDRAM specification, known as PC/100, for its forthcoming BX
chip set tailored for the 100-MHz system bus. But now the company is
considering adding the ability to program the memory bus to run on
66-MHz SDRAMs as defined by Jedec, sources said.
Intel apparently has a strong incentive to include the 66-MHz
specification. Sources in the DRAM manufacturing business said the
microprocessor giant has qualified only a handful of DRAM companies
for the PC/100 specification, and that it fears the short supply
will drive DRAM prices up considerably.
One option would be to delay
the introduction of the new chip set, now set for April. Intel has
done as much before, but any delay could spark a backlash from OEMs
eager to upgrade their systems to the 100-MHz bus.
Including the lower-frequency specification would be seen as a
compromise that is consistent with past chip-set introductions.
"Historically, chip sets for new architectures have been
backwards-compatible," said one source.
For the short term, the attention is on costs. For the Rambus
program, costs are incurred by the use of micro ball-grid-array
packaging, and by the need for expensive testers that can handle the
on-chip Rambus interface logic. Then there's the die-size penalty.
The Rambus license and royalty fees also must be factored in.
On the plus side, the narrow interface used in the Rambus approach
requires fewer pins than the 64-bit bus in the SDRAM domain. Saving
pins on the controller means the spares can be used for graphics and
other future needs. Intel has been vocal in its intent to stick by
the Rambus architecture, and memory manufacturers are counting on
Intel's market clout to help them wring out costs and establish
themselves quickly.
Steven Przybylski, principal consultant at the Verdande Group (San
Jose), said the Direct Rambus memory uses a relatively large number
of banks (16) and has a wide internal data path (144 bits).
"Individually, each would be less of a problem," he said. "Taken
together, the combination presents a challenge" in terms of reducing
the Direct Rambus die size.
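To see why those two numbers compound (an inference from the interface
widths quoted elsewhere in this article, not part of Przybylski's
analysis): a 144-bit internal fetch corresponds to eight transfers of
the 18-bit external interface, and each of the 16 banks needs access to
that full-width internal path.

# Rough arithmetic inferred from the widths quoted in the article
# (18-bit external interface, 144-bit internal data path, 16 banks).
internal_path_bits = 144
external_width_bits = 18      # 16 data bits plus 2 ECC bits
banks = 16

transfers_per_access = internal_path_bits // external_width_bits   # = 8
print(f"One internal access feeds {transfers_per_access} external transfers")
print(f"All {banks} banks must connect to the {internal_path_bits}-bit path")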
Whatever the delays, Intel's decision to go with Rambus is a sound
one, said Peter Song, senior analyst at MicroDesign Resources
(Sunnyvale, Calif.). "The Direct Rambus approach supports both
control and data transfers, while the SDRAMs and DDR
[double-data-rate] SDRAMs have only a data bus," Song said. "Rambus
is next-generation memory technology, and to reduce the costs takes
time."
Dave Mooring, vice president in charge of Rambus' personal computer
division, acknowledged that the fast decline in SDRAM prices has
increased "the delta" between SDRAM and RDRAM prices. But that does
not mean the Rambus program is behind schedule, he maintained.
Mooring said seven of the Rambus memory licensees have a die-size
penalty of 10 percent or less, and that reports of 20 or 30 percent
penalties applied to only a few memory vendors.
Since engineering samples of the Direct Rambus silicon are expected
this summer, hard data won't be available for months. But for now,
Rambus and Intel, as well as the Rambus stock price, took cheer from a
demonstration that the Rambus interface technology delivers on its
promise.
Demos at ISSCC
At the International Solid-State Circuits Conference, Intel and
Rambus engineers showed a test board with a 2.6-Gbyte/second
chip-to-chip interface, using Rambus interface logic. When three
Rambus ASIC cells were put on a controller IC atop a test board
implementing the Rambus channel, the test board confirmed a
data-transfer rate of 800 Mbits/s. Because error correction was
used, the demonstration involved 1-Gbit/s signaling over 26 wires,
distributed as 18 data wires (2 bytes, with error correction) and
eight control wires.
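The per-channel arithmetic behind those figures is straightforward
(a sketch using only the pin counts and per-wire rate quoted above; how
the 2.6-Gbyte/s board-level figure was aggregated across the three RACs
is not spelled out, so it is not reconstructed here):

# Per-channel arithmetic for the Direct Rambus figures quoted above.
data_wires = 16               # 18 wires when the ECC bits are carried
per_wire_mbit_s = 800         # per-pin data rate demonstrated at ISSCC

channel_mbyte_s = data_wires * per_wire_mbit_s / 8
print(f"One Direct Rambus channel: {channel_mbyte_s:.0f} Mbytes/s "
      f"({channel_mbyte_s / 1000:.1f} Gbytes/s)")

The 1.6-Gbyte/s result matches the per-channel figure Intel's Mike
Allen cites below.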
The three RACs on the controller were "an Intel-specific test chip
set," the companies said, but the technology could work across a
wide variety of digital ICs, including consumer-electronics devices,
DSPs and so on. The channel between ASIC and DRAM, as described at
ISSCC, "uses microwave design methodologies for maximum interconnect
bandwidth. The channel consists of the megacell and potentially many
DRAMs. . . . The electrical behavior of such a network is much like
that of a microwave filter, exhibiting characteristic passbands and
stopbands. The PCB design rules and DRAM packages minimize cost
while maximizing overall channel bandwidth."
The result, according to the ISSCC paper, "is a transmission system
with controlled impedance and propagation (attenuation and phase)
required to transport signals at 800 Mbits/s."
Mike Allen, principal engineer at Intel's PCI components (chip set)
division in Folsom, Calif., said the prototype Direct RAC chip
demonstrated that multiple Rambus channels can be implemented on an
ASIC.
"The attraction over the longer horizon is that we get more
bandwidth per pin," Allen said. "Compared to 100-MHz SDRAMs, we need
only half the number of pins, and the Rambus approach delivers eight
times more total bandwidth."
The current 440LX chip set from Intel, which supports today's
66-MHz SDRAMs, requires 492 pins. Allen said the Rambus approach
requires about 75 pins to implement a 1.6-Gbyte/s channel, and "we
are within the pin budget to add another channel."
Also, Allen said the use of FR4 materials for the demonstration
board, and the use of cost-effective 5-mil lines and spacings, will
translate into system-level savings.
"With SDRAMs running at 100 MHz, we get about 100 Mbits per signal
pin, or 800 Mbytes of bandwidth," Allen said. "We think that with
SDRAMs, effective bandwidth is limited to 40 percent, compared with
about 90 percent for Direct Rambus DRAMs. So moving to Rambus is a
fairly good next step."
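Taking Allen's numbers at face value, the effective-bandwidth
comparison works out as below. This is a back-of-envelope sketch; the
40 and 90 percent efficiency figures are Intel's estimates as quoted,
not measurements reported in the article.

# Effective-bandwidth sketch using only the figures Allen quotes above.
sdram_peak_mbyte_s, sdram_efficiency = 800, 0.40    # 100-MHz SDRAM bus
rdram_peak_mbyte_s, rdram_efficiency = 1600, 0.90   # Direct Rambus channel

sdram_effective = sdram_peak_mbyte_s * sdram_efficiency   # ~320 Mbytes/s
rdram_effective = rdram_peak_mbyte_s * rdram_efficiency   # ~1,440 Mbytes/s

print(f"100-MHz SDRAM, effective : {sdram_effective:.0f} Mbytes/s")
print(f"Direct Rambus, effective : {rdram_effective:.0f} Mbytes/s")
print(f"Ratio                    : {rdram_effective / sdram_effective:.1f}x")

On those assumptions, a single Direct Rambus channel delivers roughly
4.5 times the sustained bandwidth of a 100-MHz SDRAM bus.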
SL-DRAM gains backers
For those who resent the Rambus license fees, the hope is that
low-cost SDRAMs and DDR SDRAMs can meet desktop needs until the time
when the competing Synclink approach is better established. Micron
Technology (Boise, Idaho) is expected to have a prototype SL-DRAM
ready by midyear.
And Siemens is expected to manufacture a design now under way at
Mosaid, a company based in Kanata, Ontario, with financial support
from the SL-DRAM consortium. This open-standards
organization, which recently incorporated as SLDRAM Inc., has as
members DRAM and system companies, including Apple Computer and
Hewlett-Packard. At least two system vendors are actively working to
develop systems using SL-DRAMs.
At ISSCC, a paper from three of the SL-DRAM consortium members
demonstrated synchronization and timing techniques that could
deliver 1.2-Gbit/s transfers. Engineers from Mitsubishi Electric,
Hyundai Electronics and IBM Microelectronics collaborated on the
paper, which described the "vernier," or synchronization, technology
on the logic interface circuit. The paper told of a 300-MHz device
with 600-Mbit/s pin data rates, somewhat less aggressive than the
Intel-Rambus target of 400-MHz operation and 800-Mbit/s data
transfers.
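Both pin rates follow the same arithmetic if data is transferred on
both clock edges, which is how these double-data-rate style interfaces
are generally described; treat the both-edges assumption as an
inference consistent with the quoted numbers rather than a detail from
the paper.

# Pin-rate arithmetic, assuming data is transferred on both clock edges
# (an inference consistent with the figures quoted above).
def pin_rate_mbit_s(clock_mhz, edges_per_cycle=2):
    return clock_mhz * edges_per_cycle

print(f"SL-DRAM paper : {pin_rate_mbit_s(300):.0f} Mbits/s per pin from a 300-MHz clock")
print(f"Intel-Rambus  : {pin_rate_mbit_s(400):.0f} Mbits/s per pin from a 400-MHz clock")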
SL-DRAM proponents argue that while their approach doesn't promise to
deliver as much bandwidth as Direct RDRAM, it is less costly and stays
in line with the industry convention of adopting new memory technology
in incremental steps.
Copyright (c) 1998 CMP Media Inc.