Politics : Formerly About Advanced Micro Devices


To: Gopher Broke who wrote (59302) 5/22/1999 7:07:00 PM
From: Scumbria
 
Gopher,

Compiler people have been investigating for years how to intermix different threads within a single instruction stream. The advantage of doing this is that there are guaranteed to be no hazards between threads, allowing much greater utilization of the CPU hardware. Wide superscalar designs have been very inefficient because practically every other instruction has a hazard with a previous instruction.

VLIW architectures give the compiler more direct access to the CPU hardware, which allows it to do very fine-grained multithreading for highly repetitive and predictable tasks, such as those performed by servers.

I'm not referring to the granularity of threading that software people are used to, but a much more subtle level. For example, suppose you were going to execute two identical "for loops" pointing to different data structures. It would be possible to interleave them and double your CPU utilization.
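A minimal sketch of the idea, in C (the function names and sum-over-array workload are my own illustration, not anything from the post): each loop below carries a dependency through its own accumulator, so run back-to-back the adder stalls on the chain; interleaved, the two chains are independent and a wide-issue CPU can overlap them.

```c
/* Two identical loops over different data structures, run separately.
   Each iteration's add depends on the previous one (a loop-carried
   hazard), so execution units sit partly idle. */
long sum_separate(const int *a, const int *b, int n) {
    long sa = 0, sb = 0;
    for (int i = 0; i < n; i++) sa += a[i];
    for (int i = 0; i < n; i++) sb += b[i];
    return sa + sb;
}

/* The same two "threads" interleaved into one instruction stream.
   The two accumulations never touch each other's data, so there are
   no hazards between them and both adds can issue in the same cycle. */
long sum_interleaved(const int *a, const int *b, int n) {
    long sa = 0, sb = 0;
    for (int i = 0; i < n; i++) {
        sa += a[i];   /* "thread" 1 */
        sb += b[i];   /* "thread" 2: independent of the line above */
    }
    return sa + sb;
}
```

Both versions compute the same result; the interleaved one simply exposes twice the independent work per iteration, which is exactly what a compiler doing this fine-grained interleaving would be buying.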

Scumbria