To: Tony Viola who wrote (136079) 5/25/2001 5:26:49 PM
From: tcmay

More on supercomputers

Tony Viola wrote:

"Great summary Tim, about supercomputers. Still, I think that the majority of them in history that actually went into production from CDC, Cray Research, NEC, IBM, Fujitsu, Cray Computer, etc., were proprietary, designed from the ground up around custom gate arrays."

I was talking about trends in the past 10 years, though, not about what the majority of machines were across all of history. Some of the companies you list ceased to exist as viable supercomputer companies more than 10 years ago (CDC, for example). And companies like Fujitsu, Amdahl, CDC, etc. continued to produce machines which were code-compatible with their existing lines long after the trends I cited became dominant. Markets are sometimes more important than methods.

Quoting my earlier comment and then your reply:

"-- the very hard to cool processors of the IBM mainframes...IBM had to develop the sophisticated 'thermoconduction modules' with plungers and whatnot to cool the CPUs of their bipolar machines"

"Yes, water cooled machines were it in mainframes from the 70s to about 1990. As you point out, CMOS just wasn't ready for big machines for quite a few years of its existence, because it was too slow (actually, very weak drive capability) and had terrible skew problems across multiple drivers."

Actually, IBM's TCMs (no relation to me) were quite a bit more than just a matter of water cooling. The "plungers" I referred to were the little gizmos that had to press against the back of each bipolar chip. This was much-touted technology when the "Sierra" series of mainframes was unveiled in the early 80s; the 3081 was one of them. The QTAT (Quick Turnaround Time) line in East Fishkill was designed specifically for debugging and updating these lines of bipolar chips.

CMOS was important, but even the NMOS micros could have been used in such parallel arrays (pace the 8080A arrays cited; early x86 processors were used in other arrays).

"At Trilogy, Gene Amdahl tried something just too far out: wafer scale integration, he called it. Actually tried to make chips the size of wafers, maybe 5" at the time. Of course (we can say now) the odds of getting an 'all good' 'chip' were about as impossible as they would be today. Actually, it wasn't as impossible as it sounds, as they had lots of redundancy on the wafers, and planned to use discretionary wiring to hook up the good parts, but still, no dice (no pun intended). Well, the discretionary wiring on a piece of silicon 5" in size probably was nigh onto impossible also."

We need to set the record straight so as not to give anyone the impression that Trilogy or Peltzer or Amdahl coined either the term "wafer scale integration" or "discretionary wiring." These terms were much in the air in the mid-60s, 20 years before Trilogy. I remember reading some of the WSI and discretionary wiring papers when I was first at Intel. Before the microprocessor, that was the Holy Grail of how lots of silicon would get harnessed for computer use.

I agree that wiring a wafer was then, and still is, non-economical. (It could be done today, but it's usually cheaper to dice the wafer and wire the good devices on PCBs or other substrates.) Eli Harari even called his chip company "Wafer Scale Integration," or WSI, before Trilogy was formed.
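As an aside, here is a minimal back-of-the-envelope sketch (in Python) of that yield argument. The defect density is a guess on my part, roughly plausible for a mid-80s logic process, not an actual Trilogy number, and the simple Poisson yield model Y = exp(-area * D) and the 100-block partition are purely illustrative:

    import math

    # Simple Poisson yield model: Y = exp(-area * defect_density).
    D = 1.0                                  # defects/cm^2 -- assumed, illustrative
    wafer_area = math.pi * (12.5 / 2) ** 2   # 5" (~12.5 cm) wafer, ~123 cm^2

    # Odds of an "all good" wafer-sized chip: effectively zero.
    print(f'all-good 5" wafer yield: {math.exp(-wafer_area * D):.3e}')  # ~5e-54

    # With redundancy: treat the wafer as, say, 100 blocks and use
    # discretionary wiring to hook up only the good ones.
    n_blocks = 100
    y_block = math.exp(-(wafer_area / n_blocks) * D)   # ~29% per block
    print(f"per-block yield: {y_block:.2%}, "
          f"expected good blocks: {n_blocks * y_block:.0f}")  # ~29 good blocks

So redundancy turns "never" into "a few dozen good blocks per wafer," which is why the idea wasn't as crazy as it sounds; but you still have to probe, map, and custom-wire every individual wafer, which is the economics that killed it.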
"Good history diversion, Tim."

Thanks. I seem to like writing these little history articles much more than bashing current chips. There are a _lot_ of bits of history out there which are surprisingly relevant and timely.

--Tim May