Technology Stocks : Rambus (RMBS) - Eagle or Penguin


To: Gary Wisdom who wrote (4692), 6/21/1998 6:45:00 PM
From: MileHigh
 
I think new news....

June 22, 1998, TechWeb News

--------------------------------------------------------------------------------
Fujitsu takes new track with high-speed FCRAM
By Andrew MacLellan

Silicon Valley -- Taking what it calls a new direction in DRAM design, Fujitsu Ltd. today will unveil a core that should yield a device in 2000 that will run at twice the speed of the fastest DRAM chips entering next year's market.

The Fast Cycle RAM (FCRAM) design should deliver a bandwidth of up to 3.2 Gbytes/s, the company said. Developed by the Japanese chip maker with its U.S. subsidiary, Fujitsu Microelectronics Inc., the new approach is the latest in a series of efforts by memory suppliers to re-engineer their DRAM chips to speed internal performance.

"This is not a simple development," said Masao Taguchi, director of memory design at the DRAM division of Fujitsu Ltd. "This may represent a great change in DRAM history."

The FCRAM chip design differs from well-publicized efforts, such as Direct Rambus DRAM, DDR DRAM, and SLDRAM, that aim to speed up data flowing from the memory interface to the central processing unit.

Rather than targeting the interface, FCRAM boosts internal performance by using a pipelined operation with nonmultiplexed addressing and additional on-chip precharge circuitry.

In a typical DRAM, the memory array is laid out in a grid and stores data in rows and columns. To retrieve information from this array, a row address strobe (RAS) locates the desired row, then a column address strobe (CAS) is launched to pinpoint which subarray block has stored the data.

The advantage of this approach is that the RAS can remain open during multiple CAS cycles, which reduces the overall memory cycle times for as long as data is drawn from the same row. But once a new row is required, the DRAM misses a cycle when it issues a new RAS.

By using a scheme that addresses the row and column simultaneously, Fujitsu said it has chopped random-access cycle times to 20 ns, down sharply from the 70-ns access times currently achieved by traditional SDRAM devices.
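
As a rough illustration of the open-row behavior described above, here is a toy Python timing model. Only the 70 ns and 20 ns figures come from the article; the access trace, the assumption that a row hit costs the same 20 ns as an FCRAM access, and all the names below are purely illustrative.

# Illustrative only: a toy latency model for the RAS/CAS access pattern
# described above. The 70 ns (conventional SDRAM) and 20 ns (FCRAM) random
# cycle times come from the article; the internal breakdown and the access
# trace are assumptions for the sake of the example.

ROW_HIT_NS = 20        # assumed cost of a CAS-only access to an already-open row
ROW_MISS_NS = 70       # article's figure for a full RAS + CAS random access
FCRAM_RANDOM_NS = 20   # article's figure for FCRAM's non-multiplexed access

def sdram_time(accesses):
    """Total time for a list of (row, col) accesses with an open-row policy."""
    total, open_row = 0, None
    for row, _col in accesses:
        total += ROW_HIT_NS if row == open_row else ROW_MISS_NS
        open_row = row
    return total

def fcram_time(accesses):
    """FCRAM presents row and column together, so every access costs the same."""
    return len(accesses) * FCRAM_RANDOM_NS

if __name__ == "__main__":
    # A trace that keeps changing rows, the worst case for an open-row SDRAM.
    trace = [(0, 3), (5, 1), (5, 2), (9, 7), (2, 0), (2, 4)]
    print("SDRAM:", sdram_time(trace), "ns")
    print("FCRAM:", fcram_time(trace), "ns")

The gap between the two totals grows with the fraction of accesses that land in a new row, which is why the article frames this as a random-access improvement rather than a peak-bandwidth one.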

The design also reduces the number of sense amps that need to be selected to distribute the data, drastically cutting power dissipation within the core. FCRAM does this by homing in on a specific subarray block so that only selected areas of the core need to be activated at any one time, Taguchi said. This means that Fujitsu can ratchet up the clock frequency of its DRAM with relatively little effect on power consumption.
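
A back-of-the-envelope sketch of why activating only one subarray block saves power. The sense-amp counts and the linear energy model below are assumptions for illustration, not Fujitsu figures.

# Illustrative only: fewer sense amplifiers switching per access means less
# core power. The counts and the linear model are assumed, not from Fujitsu.

COLUMNS_PER_ROW = 8192      # assumed sense amps fired by a conventional full-row activate
SUBARRAY_COLUMNS = 512      # assumed sense amps in one FCRAM subarray block
ENERGY_PER_SENSE_AMP = 1.0  # arbitrary unit

def activation_energy(sense_amps_fired):
    return sense_amps_fired * ENERGY_PER_SENSE_AMP

conventional = activation_energy(COLUMNS_PER_ROW)
fcram = activation_energy(SUBARRAY_COLUMNS)
print(f"Core activation energy: {fcram / conventional:.1%} of a full-row activate")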

Fujitsu is now aiming FCRAM at a 2000 market launch in multimedia and high-end computing platforms. But it likely will take several years before the core design reaches the production levels needed to make a run at the high-volume markets, the company said.

This is typical of new DRAM architectures, especially those with only one manufacturing source. They invariably carry a price premium and tend to go into niche markets rather than mainstream applications, according to Steven A. Przybylski, principal analyst at the Verdande Group, San Jose.

"In general, the market has been very unsympathetic toward efforts that simultaneously improve the core latency and raise cost," Przybylski said. "The market tends to prefer the minimum cost regardless of latency, except perhaps in niche areas."

Other memory suppliers have also tried to reduce the latency within a DRAM core, but they have had only low-volume results. Mitsubishi Electric Corp. and MoSys Inc., for example, have each added cache to their DRAM designs, using static columns to move data more quickly from the core. NEC Corp. also is developing its Virtual Channel Memory core with multiple 1-Kbit SRAM caches, enabling system components such as the processor or graphics controller to independently access the DRAM array via a series of dedicated logic channels.

Fujitsu's FCRAM employs no cache, opting instead to go with additional automatic-reset circuitry to permit pipelined operation. The three-stage pipeline method enables the core to cascade command input and decoding, sensing, and data-output operations by hiding the automatic reset behind the next cycle.
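
A rough sketch of the pipelining argument: with three overlapped stages and the automatic reset hidden behind the next cycle, the effective random cycle time is set by the slowest stage rather than by the sum of all stages. The per-stage timings below are assumptions chosen only so the result lands near the article's 20 ns figure.

# Illustrative only: a back-of-the-envelope model of the three-stage pipeline
# described above. Stage timings are assumed, not Fujitsu data.

STAGES_NS = {
    "command input / decode": 15,
    "sensing": 20,
    "data output": 15,
}
AUTO_RESET_NS = 15  # assumed reset/precharge, hidden behind the next cycle

unpipelined_cycle = sum(STAGES_NS.values()) + AUTO_RESET_NS
pipelined_cycle = max(STAGES_NS.values())  # reset overlaps, so it adds nothing

print("Serial (no pipelining):", unpipelined_cycle, "ns per random access")
print("Three-stage pipeline:  ", pipelined_cycle, "ns per random access")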

The problem is that, while bandwidth is boosted, the added circuitry also increases the size of the DRAM die by 30% to 40% over a comparable standard DRAM. Fujitsu is currently working to reduce this space penalty, but Taguchi admitted that 30% may not be acceptable.
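
Some rough arithmetic on what a 30% to 40% larger die means for cost, assuming a 200 mm wafer, a 100 mm^2 baseline die, and ignoring yield and edge effects; every number here is an assumption for illustration.

# Illustrative only: rough arithmetic on the 30-40% die-size penalty quoted
# above. Wafer size, baseline die area, and perfect yield are assumptions.

import math

WAFER_AREA_MM2 = math.pi * (100 ** 2)  # assumed 200 mm wafer, ignoring edge loss
BASE_DIE_MM2 = 100                     # assumed standard-DRAM die area

for penalty in (0.30, 0.40):
    base_dies = WAFER_AREA_MM2 / BASE_DIE_MM2
    fcram_dies = WAFER_AREA_MM2 / (BASE_DIE_MM2 * (1 + penalty))
    print(f"{penalty:.0%} larger die -> about {1 - fcram_dies / base_dies:.0%} "
          "fewer candidate dies per wafer, before yield effects")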

FCRAM parts could be designed into high-end workstations and servers, but they will be aimed initially at such multimedia applications as digital TV, where the design's ability to retrieve image data rapidly will set it apart from other re-architected cores, Taguchi said.

Other new DRAM designs are using larger and larger caches to help avoid misses in some applications, but this approach is a drawback in multimedia, Taguchi pointed out. "The cache must be rewritten so frequently that it is not as effective," he said.

Copyright © 1998 CMP Media Inc.




To: Gary Wisdom who wrote (4692), 6/21/1998 8:34:00 PM
From: RetiredNow
 
Bottom of the business cycle...that means that the networking industry is at the bottom of its business cycle as well.