Technology Stocks : Intel Corporation (INTC)


To: carl a. mehr who wrote (164386), 4/21/2002 1:01:03 PM
From: tcmay
 
"Great response from both you and Latham. I guess the big challenge is for the software boys to 'build' many parallel paths and I believe that becomes task specific.

"Hope to see you at the upcoming meeting on May22...humble carl "

Well, every one of the fastest computers on Jack Dongarra's list of fastest machines is now a parallel machine. This has been so for many years.

A seminal paper was "Attack of the Killer Micros," circa the mid-to-late '80s. It predicted, quite correctly, that uniprocessor machines, even those with vectorized paths for some code, were topping out at about 1-nanosecond cycle times, owing to speed-of-light delays and the like.

(The canonical ultrafast uniprocessor would have to be very small, would have to dissipate a lot of heat in that small volume, and would have to have many wires going in and out. Ergo, it was dubbed "the hairy, smoking golf ball." The last serious attempt to build such a machine was Seymour Cray's GaAs machine with ultra-precision wire-bonding. The project stalled, funding was cut, and then he was killed in a car wreck. Interestingly, the last project he was working on when he died was a parallel cluster of Intel processors.)

Intel's Personal Supercomputer, the Sun clusters (using technology based on Cray Research and Floating Point Systems products), the IBM machines, and so forth are all based on various topologies linking hundreds or thousands of processors together.

The idea is not new, either. One of the first seminars I went to as a young engineer was a presentation by engineers from "IMSAI" on their proposed Hypercube machine: a collection of N Intel 8080A processors linked in 4 or 5 dimensions. This was in 1975. (IMSAI later went on to gain fame with the IMSAI 8080, the first really usable personal computer, an improvement on the Altair.)
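The hypercube wiring described above has a neat addressing property: number the N = 2^d processors in binary, and each node's direct neighbors are the nodes whose addresses differ by exactly one bit. A minimal sketch (the function name is mine, not IMSAI's):

```python
# Sketch of hypercube addressing: in a d-dimensional hypercube, node i is
# wired directly to the d nodes reached by flipping each of its d address
# bits (XOR with a power of two).
def hypercube_neighbors(node: int, dims: int) -> list[int]:
    """Return the IDs of the nodes directly linked to `node` in a `dims`-cube."""
    return [node ^ (1 << d) for d in range(dims)]

# A 4-dimensional cube links 16 processors; each node touches exactly 4 others.
print(hypercube_neighbors(0, 4))  # -> [1, 2, 4, 8]
print(hypercube_neighbors(5, 4))  # -> [4, 7, 1, 13]
```

This is why hypercubes were attractive: any node can reach any other in at most d hops, with only d links per node.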

Parallelizing code is now relatively straightforward, especially for large scientific calculations, which are already highly parallel (matrix calculations, fluid dynamics, code-breaking, protein-folding, quantum chromodynamics calculations, etc.).

Such clusters are common on college campuses, where they are usually based on the "Beowulf" design/protocol. (Some friends of mine won the RSA challenge by using a Beowulf cluster of Intel or AMD chips at Berkeley.)

More tightly-bound clusters, like the hypercube architectures, are also prodigious consumers of high-end CPUs.

It's not as far-out, in other words, as it may sound at first.

On the May 22 meeting, I've heard nothing. Paul is the usual ringleader, and he's MIA. I'm not sure if I'll make the drive over in any case...the annual meetings rarely carry much information.

But maybe if there's a lunch afterwards...

--Tim May



To: carl a. mehr who wrote (164386), 5/13/2002 12:04:12 PM
From: denni
 
>>Hope to see you at the upcoming meeting on May22...humble carl

dream on!

be there or be square.

denni