Hi all; Re network processors and embedded DRAM, in this week's EE Times. Reading the tea leaves in the network processor arena... the wizened old lady whispers: Embedded! I can see an embedded DRAM in your future!
I posted some time ago that network processing is going in the direction of embedded DRAM, rather than RDRAM (or DDR, for that matter). NightOwl found this article and posted parts from it at #reply-14250288, but I didn't realize what a great article it was till I got my print copy. Must be something about the glossy paper.
If you are interested in DRAM and the communications industry, or the future of general purpose memory, this is a great read. Some succulent extracts for your enjoyment:
One key to Entridia's design is its memory architecture, which features an on-die cache using a 128-bit-wide bus. Several other network processors also have used wide memory buses to increase on-chip data transfers.

... Indeed, the key to Silicon Access Networks' approach is not just the on-die cache but the use of memory blocks with exceptionally wide custom DRAM, which allows for faster communication within the chip, O'Connor said. Besides 1.2 Mbits of SRAM, with a width exceeding 2,000 bits, the iFlow packs 52 Mbits of DRAM, in three different configurations. Two Mbits feature blocks 256 bits wide running at 133 MHz; 25 Mbits run at the same speed but are 100 bits wide; and the final 25 Mbits are slower, running at 66 MHz, but are 3,200 bits wide. O'Connor said that the total aggregate bandwidth for the memory is 252 Gbits/s. [i.e. 31 GB/sec, or equivalent to almost 20 PC800 RDRAM channels.]

... Peter Glaskowsky, senior analyst for multimedia at MicroDesign Resources, said embedded memory has some inherent advantages because on-die buses can be created that are much wider, and therefore much faster, than the buses that link separate chips. Using custom DRAM can also allow designers to create bus widths that closely match the size of packets, allowing the systems to process a packet with every clock cycle.

... In the networking segment, where the packets flow off-chip as fast as they come on and where little processing is required, embedded memory is likely to become a more common technology, he predicted. [I agree.]

... Shelat said the design can support as much as 58 terabits/s of aggregate memory bandwidth [i.e. 7,250 GB/sec] and can support OC-192 network speeds. Implementing up to four of the chips together can support networks running at OC-768.

... "Lookup is a very, very real issue," Ramankutty said. "This is where the bottleneck can occur. The key to solving this is low latency."

techweb.com
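If you want to check the iFlow arithmetic yourself, here's a quick back-of-envelope sketch. It assumes one full-width transfer per clock for each DRAM block and ignores the SRAM (whose clock isn't given in the article); it lands a touch above the quoted 252 Gbits/s aggregate, close enough that the article's figure probably uses slightly different clocks or excludes a block.

```python
# Back-of-envelope check on the iFlow DRAM bandwidth figures quoted
# above, assuming one full-width transfer per clock per block.

blocks = [
    (256,  133e6),   # 2 Mbits:  256 bits wide @ 133 MHz
    (100,  133e6),   # 25 Mbits: 100 bits wide @ 133 MHz
    (3200,  66e6),   # 25 Mbits: 3,200 bits wide @ 66 MHz
]

total_bits = sum(width * clock for width, clock in blocks)  # bits/sec
total_gbytes = total_bits / 8 / 1e9                         # GB/sec

# One PC800 RDRAM channel peaks at 1.6 GB/sec (2 bytes x 800 MHz).
pc800_channels = total_gbytes / 1.6

print(f"{total_bits / 1e9:.0f} Gbits/s = {total_gbytes:.1f} GB/s "
      f"= {pc800_channels:.1f} PC800 channels")
```

The "almost 20 PC800 channels" comparison above falls straight out of the last line.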
The big difference between graphics memory designs and networking memory designs is the absence of the "Carl constant" that forces graphics designs to a ratio of bandwidth to memory size of about 60 to 120 Hz. Networking memory can use a lot more bandwidth. The reason is that network memories aren't designed around human limitations (refresh and frame rates), while graphics memories are.
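To put that contrast in numbers: bandwidth divided by memory size has units of Hz, i.e. how many times per second the whole memory can be touched. The graphics card below uses hypothetical round figures purely for illustration; the networking figures are the iFlow numbers from the extract.

```python
# The "Carl constant" in numbers: bandwidth / memory size, in Hz.
# Graphics numbers are hypothetical round figures; networking numbers
# are the iFlow's (52 Mbits of on-die DRAM, ~31 GB/sec aggregate).

def bw_to_size_ratio(bandwidth_bytes_per_s, size_bytes):
    """Bandwidth-to-memory-size ratio in Hz: how many times per
    second the entire memory could be read at full bandwidth."""
    return bandwidth_bytes_per_s / size_bytes

# Hypothetical graphics card: 32 MB of memory, 3.2 GB/sec bandwidth.
gfx_hz = bw_to_size_ratio(3.2e9, 32 * 2**20)   # lands in the 60-120 Hz band

# iFlow: 52 Mbits (6.5 MB) of DRAM, ~31 GB/sec aggregate.
net_hz = bw_to_size_ratio(31e9, 52e6 / 8)      # thousands of Hz

print(f"graphics: {gfx_hz:.0f} Hz, networking: {net_hz:.0f} Hz")
```

The networking ratio comes out dozens of times higher than the graphics one, which is the whole point: nothing human-visible caps it.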
-- Carl