Intel Investors - Tuzigoot and Huchuca are not former Communist Bloc countries.
These were the code names of two new i960 processors that Intel has developed for the embedded processor market - specifically for fast I/O on PCI peripherals such as RAID controllers and Fibre Channel controllers.
Looks like Intel will be keeping the older 0.35-micron fabs busy!
Paul
{==========================================}
techweb.com
August 10, 1998, Issue: 1020
Section: Embedded Systems

i960 core upgrade keys Intelligent I/O
By Bernard Cole
Intel Corp. has announced its second-generation processor for Intelligent I/O, ending months of speculation that ranged from StrongARM to a 64-bit version of the i960 to Intel's use of Alpha.
Choosing evolution, Intel called on its existing 32-bit i960 core once again, surrounding it with totally revamped hardware: fatter internal buses, more and deeper queues, hardware acceleration of key I2O functions, and support for 66-MHz synchronous DRAMs. The new processor will be customized to specific I/O applications with Application Accelerator Units (AAUs) that effectively increase the I/O processor's bandwidth four-fold. The first incarnation of this idea will be a RAID-optimized i960, whose AAU includes a hardwired XOR engine on the I/O processor's internal bus.
"We plan to continue this strategy in future implementations," said Alan Steinberg, director of strategic marketing, Connected PC Division, Intel Corp. (Chandler, Ariz.), "optimizing the overall architecture wherever possible and including accelerators for specific I/O applications."
Some major server vendors who were involved in the architectural development have been performance-testing two chips based on the new architecture since April. The chips, Tuzigoot and Huchuca, since renamed the i960RN and RM, should ship in volume later this year, coinciding with announcements from major server OEMs of systems based on the devices. OEMs are reporting up to 80 percent improvement over systems using the earlier i960RD.
The difference between the two processors is that the RN has 64-bit, 33-MHz primary and secondary external PCI bus interfaces while the RM has 32-bit, 33-MHz PCI buses. The two chips share performance enhancements, including:
1. A 64-bit, 66-MHz internal bus (vs. 32-bit, 33-MHz) capable of data throughputs as high as 528 Mbytes/second;
2. A second-generation PCI-to-PCI bridge supporting multiple read-and-write queues;
3. Primary and secondary Address Translation Units (ATUs) enhanced with additional and deeper queues;
4. DMA controllers with deep asymmetric queues;
5. A Performance Monitor Unit that enables dynamic performance management and tuning;
6. A memory controller with support for 66-MHz SDRAM across a 66-MHz, 64-bit memory interface with Error Correction Code (ECC) capabilities; and
7. A clock-tripled 100-MHz Intel 80960JN processor core.
"The new architecture provides a two- to four-fold increase in bandwidth on all bus interfaces," said Steinberg. It is designed to eliminate I/O bottlenecks that emerged as OEMs designed systems based on the first-generation i960RP. Many of these bottlenecks came up when systems combined network and storage functionality and RAID server storage caching using SCSI, Fibre Channel, Ultra-Wide SCSI and Low-Voltage Differential Signal (LVDS) SCSI technologies. In addition to the shift to a 64-bit, 33-MHz external PCI interface, the new chips incorporate both a primary bus and an electrically isolated, secondary PCI bus. The dual PCI bus will solve a number of problems for server OEMs, Steinberg said, including reducing bus congestion; extending PCI expansion slots beyond the 10 electrical loads per bus (five PCI cards) allowed by the PCI specification; and enabling peer-to-peer data transfers without host-processor intervention.
The internal 64-bit, 66-MHz bus connects to the processor core through a bus interface unit and supports read, write and prefetch buffering to improve throughput. Write-merging, write-buffering and instruction-prefetching logic is optimized to enhance performance on the internal bus.
The PCI-to-PCI bridge in the new design has deeper queues and supports 64-bit dual-address-cycle (DAC) PCI operation on both the primary and secondary PCI buses (either PCI bus may be independently programmed to operate in 32-bit mode). What this means, said Lisa Hambrick, market development manager for I2O Products, Connected PC Division at Intel, is that data traffic between the I/O processor and the I/O control chip does not affect the ability of the host to access other devices on the primary PCI bus, increasing the effective bandwidth of the primary bus. This is significant because typical server applications may include several I/O controllers on the secondary bus.
The bridge also features five queues for upstream transactions, optimized for intelligent I/O data flows, and a write queue that has been expanded to 256 bytes. "This will allow posting of up to eight transactions, which are completed by the bridge in the order each is posted," she said. "In addition, the bridge supports streaming, allowing data to enter the bridge from one PCI bus while data is simultaneously written to the other bus." Another noteworthy enhancement is dual 128-byte read queues that allow the processor to support two simultaneous upstream read requests.
The new data-flow architecture also includes a pair of 64-bit-wide address translation units (ATUs) that bridge between each PCI bus and the I/O processor's internal bus, providing high-throughput data paths to the processor.
Copyright © 1998 CMP Media Inc.