Technology Stocks : How high will Microsoft fly?

To: w2j2 who wrote (27824)8/3/1999 9:16:00 PM
From: Bilow
 
Hi Walter Jacquemin; Not really OT. If reconfigurable computers become the norm, it would probably mean that companies other than MSFT would take the lead in compiler design &c. Re that Sci. Amer. article on Oxygen computing at: sciam.com

I think that, by and large, the above is not the future of computers. <<<Deleted insult to academia went here...>>>

That said, I have some very smart friends who disagree with me on this one. Of course, they have never been paid to design supercomputers. Nowadays, we all use reconfigurable logic. Xilinx's 1M-gate FPGAs are in use all over the place. A buddy and I are working on projects at three different companies using them. They are great, particularly when you are making small numbers of something that is supposed to have a fast time to market. But as soon as volume production kicks in, the FPGAs get replaced by custom silicon, provided shipment volumes are sufficient and the design is stable. But will future generations of computers head in the direction of that sort of software-alterable hardware?

I don't think so. The reason has to do with raw efficiency. Software-alterable hardware takes about 16x the die area, power consumption, propagation delay, heat generation, etc., of dedicated hardware. That proportionality constant has remained steady through eight generations of Field Programmable Gate Arrays. It is not changing now, and will not change in the future.
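To make the overhead concrete, here is a toy Python sketch that applies the roughly 16x penalty to some hypothetical dedicated-logic baseline figures. Only the 16x factor comes from the argument above; the baseline numbers are invented for illustration.

```python
# Rough FPGA-vs-dedicated-hardware penalty from the argument above.
FPGA_OVERHEAD = 16

def fpga_estimate(dedicated):
    """Scale each dedicated-hardware metric by the FPGA overhead factor."""
    return {metric: value * FPGA_OVERHEAD for metric, value in dedicated.items()}

# Hypothetical dedicated-logic baseline for some fixed function
# (numbers invented purely for illustration).
asic = {"die_area_mm2": 2.0, "power_w": 0.5, "delay_ns": 1.0}
fpga = fpga_estimate(asic)

print(fpga)  # every metric is 16x worse across the board
```

The point of the sketch is that the penalty compounds: you pay it simultaneously in area, power, and delay, which is why it is so hard to overcome in volume production.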

Computers that are reconfigurable have existed for 30 years, if you consider microcoding to be a form of reconfigurability. Some microcoded machines allow the system-level programmer to write special sections of microcode. So if, for instance, you need to do a lot of high-level arithmetic, you could add a microcoded logarithm function. Being microcoded, it executes much faster than the equivalent sequence of high-level instructions.
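A minimal sketch of the idea, with Python functions standing in for microcode routines: the machine's instruction set is a dispatch table, and a "system-level programmer" extends it with a logarithm micro-op. All names here are invented for illustration, not taken from any real microarchitecture.

```python
import math

# The stock "microcode store": a table mapping opcodes to routines.
microcode = {
    "ADD": lambda a, b: a + b,
    "MUL": lambda a, b: a * b,
}

def execute(op, *args):
    """Dispatch one instruction through the microcode table."""
    return microcode[op](*args)

# A system-level programmer adds a custom micro-op: one table entry
# instead of a long sequence of ordinary instructions approximating log().
microcode["LOG"] = lambda x: math.log(x)

print(execute("ADD", 2, 3))  # 5
print(execute("LOG", 1))     # 0.0
```

The real win, as described above, is that the microcoded routine runs below the level of the visible instruction stream, rather than as many slow architected instructions.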

Of course, all that is a real hazard to the operating system, and not very many people availed themselves of that opportunity, even though the machines that allowed it were quite common. (I think the IBM 360 series was remicrocodable, for instance.)

That said, there are relatively obscure problems that can be usefully handled by reconfigurable hardware. But writing a compiler for it is a nasty problem.

But the vast majority of the things normal people want to do with everyday computers are amazingly predictable. Computer designers tune their instruction sets to optimize the speed of execution for those particular uses. So the vast majority of the instructions executed by a computer are of a very predictable type, and need not be software-wired.
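That tuning can be sketched with the standard weighted-average calculation: given an instruction-frequency mix and per-instruction cycle counts, average cycles per instruction (CPI) is the frequency-weighted sum, so speeding up the common case pays off most. The mix and cycle numbers below are hypothetical.

```python
# Hypothetical instruction mix (frequencies sum to 1.0) and cycle costs.
mix    = {"load": 0.30, "alu": 0.45, "branch": 0.15, "store": 0.10}
cycles = {"load": 2,    "alu": 1,    "branch": 3,    "store": 2}

def average_cpi(mix, cycles):
    """Frequency-weighted average cycles per instruction."""
    return sum(freq * cycles[instr] for instr, freq in mix.items())

print(round(average_cpi(mix, cycles), 2))  # 1.7
```

Shaving one cycle off the ALU case (45% of the mix) would help far more than shaving one off branches (15%), which is exactly why designers optimize for the predictable majority.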

The next big challenge is in the use of multiprocessors. I don't think that reconfigurable hardware is the answer to getting more silicon working on a problem. The reason is that 16x inefficiency. It is just too big a factor to overcome. Instead, multithreaded code is the way to go.
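As a toy illustration of "more silicon on the problem" via software rather than reconfigurable logic, here is a sketch that splits a sum across worker threads with Python's `concurrent.futures`. Real parallel speedup depends on the workload and the runtime; the point is only the partitioning pattern.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    """Sum the integers in [lo, hi)."""
    return sum(range(lo, hi))

def threaded_sum(n, workers=4):
    """Split sum(range(n)) into per-worker ranges and combine the results."""
    step = n // workers
    bounds = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: partial_sum(*b), bounds))

print(threaded_sum(1000))  # same answer as sum(range(1000))
```

Each worker runs on ordinary hard-wired execution units; the parallelism lives in the software, which is the direction the paragraph above argues for.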

I expect to live long enough to see microcodable general purpose computers (like the Pentium) replaced with hard wired machines of much higher efficiency. This will come about when the instruction sets and designs stabilize, maybe 10 years out.

Few people will agree with me on the above statement. That's okay; people who look to the past always run off the side of the road as the future moves forward. First of all, eventually CPU requirements will stabilize. When that happens, hard-wired control logic for the CPUs will be much more efficient, though harder to design. But the company that manages to get it right will have the cheapest, most efficient, fastest and best processor.

Of course people currently in the industry will argue with this. I am reminded of the hardware V.P. of the obscure supercomputer maker, SCS, who ran their technology into the ground. He insisted that VLSI could not be used to build a supercomputer, due to the inevitability of minor design errors making the VLSI chips worthless. He would stand up in front of 20 very smart hardware engineers and defend his position. Of course SCS ended up making supercomputers that got destroyed in the marketplace (in terms of dollars per floating point operation per second) when other companies managed to figure out how to design VLSI chips that worked. The same competition will apply in the future. In order to force a design revolution to fully hardwired (i.e. completely unreconfigurable and unmicrocodable) computers, only one team at one company needs to be able to design to that paradigm. Efficiency is everything in engineering, and hard-wired designs win that competition.

-- Carl (Sorry for going on and on, but computers are cool.)