To: grok who wrote (42) 9/23/1999 8:49:00 AM From: Bilow
Hi KZNerd et al; Re RDRAM power consumption...

It is obvious to me that RDRAM has a heating problem: that is a lot of peak power to put through a tiny chip. This problem has already been faced with CPUs, but everybody is used to hooking fans up to those, not to memory. It would be possible to reduce the average RDRAM power by leaving parts in NAP mode, but that doesn't help the worst-case problem, which is what engineers have to design to. Only the heat "spreader" addresses that.

Reminds me of an ancient engineering story. Somebody who worked on the Star (a supercomputer back in the 70s) told me that they had a similar kind of overheating problem in their core memory. (Core took around an amp for about a microsecond to switch the state of a tiny ferromagnet. Like DRAM, reads were done with sense amps and were destructive.) The cores sat in an oil bath. As long as memory accesses were random, the heat was spread around by the oil. But if someone wrote a chunk of assembler that whaled on a particular address, it could overheat that column of core and destroy it. Then they would have to cut all those tiny wires, replace the bad core in each plane, and reconnect the wires. So the factory solution was to detect when someone was pounding too hard on memory and, in that event, reduce the system clock speed.

I wonder if anyone has thought of doing something similar with RDRAM. The controller would automatically get an idea of the chip's temperature, since it is supposed to measure this regularly. Incidentally, when an RDRAM is temperature/current calibrated, the part is unavailable for reading (and most memory accesses are reads) for Ttcquiet = 140ns. These calibration cycles only happen once every 100ms, but they are another pain in the butt for the memory controller designer.

Sorry for wandering on. Time to decide whether it is time to go to bed or time to get up.

-- Carl
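P.S. For the curious, here is a toy sketch of the throttling idea above. Everything here is hypothetical — the class name, the threshold, and the 85 C limit are made up for illustration, and real controllers talk to DRAM through hardware state machines, not Python. The only inputs taken from the post are that the controller periodically reads the chip's temperature and could slow things down when one spot is being whaled on:

```python
# Hypothetical sketch (not any real RDRAM controller interface): throttle
# when one row of a hot chip is being pounded on, analogous to the
# Star's core-memory fix described above.
from collections import Counter

class ThrottlingController:
    """Toy model: count accesses per row within a sampling window; if any
    row is hit more than `threshold` times while the chip reads back as
    hot, throttle the memory clock for the next window."""

    def __init__(self, threshold=10_000, hot_c=85):
        self.threshold = threshold   # assumed hits-per-window limit
        self.hot_c = hot_c           # assumed over-temperature limit, deg C
        self.hits = Counter()
        self.throttled = False

    def access(self, row):
        self.hits[row] += 1

    def end_window(self, chip_temp_c):
        # chip_temp_c would come from the periodic temperature
        # calibration the controller already performs.
        worst = max(self.hits.values(), default=0)
        self.throttled = chip_temp_c >= self.hot_c and worst > self.threshold
        self.hits.clear()
        return self.throttled

ctrl = ThrottlingController(threshold=3)
for _ in range(5):
    ctrl.access(0x1A0)                        # hammer one row
print(ctrl.end_window(chip_temp_c=90))        # hot and hammered -> True
print(ctrl.end_window(chip_temp_c=90))        # fresh window, no hits -> False
```

A cool chip never throttles in this model no matter how lopsided the access pattern is, which matches the intuition that the problem is heat, not the addresses themselves.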