Technology Stocks : Intel Corporation (INTC)


To: bhagavathi who wrote (99630), 2/23/2000 10:36:00 AM
From: rudedog
 
bhagavathi -
Comprehensive treatment of this subject is probably way too technical and boring for most of the participants on this thread, and probably OT as well - let me take a quick swag at a high level pass and see if that is what you are looking for.

The basic "unit" of independent processing, in Unix, VMS, NT and most other modern operating systems, is the "process" or task. A process is a complete context which has a "virtual machine" including I/O, memory resources, and stack space. This is the basis of pre-emptive multi-tasking, and it presents a simple programming model - the programmer does not have to give any consideration to what else might be running, to resource sharing, or to access to I/O, since that is all handled by the OS. However, a process is also a pretty big chunk of stuff, and when the OS needs to switch to another task, all of the state of the currently running process must be stored so that it can be restarted at some future point. This can result in substantial memory traffic and even disk I/O, clears out the instruction and data caches, and almost certainly flushes the L2 cache as well.
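A quick sketch of that isolation property (Python purely for illustration - the names here are mine, not any OS's API): a child process gets its own copy of the parent's memory, so nothing it does to "shared" variables is visible back in the parent.

```python
import multiprocessing

counter = 0  # each process ends up with its own private copy of this

def child():
    # Runs in a separate process, with a separate address space.
    global counter
    counter += 99  # modifies the child's copy only

if __name__ == "__main__":
    p = multiprocessing.Process(target=child)
    p.start()
    p.join()
    # The parent's copy is untouched - the two processes do not
    # share memory, which is exactly why switching between them
    # requires saving and restoring so much context.
    print(counter)  # 0
```

The same experiment with a thread instead of a process would show the opposite result, which is the whole point of the distinction.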

Symmetric Multiprocessing (SMP) and the increasing speed gap between the processor itself and the memory and I/O subsystems created the need for a lighter-weight unit of execution - systems were spending more time doing context switching than actual productive work. The first cut at this was the "thread" - threads are children of processes, and have access only to resources owned by the parent process, but a thread can be suspended and resumed with much less effort, since most of the context is held at the process level.
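Because threads live inside the parent's address space, they all see the same memory directly - that is both the efficiency win and the reason they need synchronization. A minimal sketch (again Python, just for illustration):

```python
import threading

total = 0
lock = threading.Lock()

def worker():
    # Every thread reads and writes the SAME "total" - no copying,
    # no separate address space, so switching between them is cheap.
    global total
    for _ in range(10000):
        with lock:       # ...but shared state now needs locking
            total += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)  # 40000 - all four threads updated the same memory
```

Drop the lock and the count can come up short, which is the flip side of giving up the process's "I own everything" programming model.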

Thread capability also made possible a number of dramatic improvements in applications like database managers. Sybase was the first to make broad use of this capability - its server consisted of essentially two processes, a SQL processor and an I/O processor. Each SQL query created a thread which existed only until the query was processed. Data was assumed to be "in memory" - if it was not, the query thread was suspended, a corresponding thread in the I/O processor satisfied the request, and the query thread was then restarted.
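The shape of that design can be sketched in a few lines - this is a hypothetical toy, with my own names, not Sybase's actual architecture: per-query threads that "suspend" (block on an event) until a dedicated I/O thread has satisfied their page request.

```python
import queue
import threading

io_requests = queue.Queue()

def io_processor():
    # Dedicated I/O thread: services page-fetch requests one by one.
    while True:
        page_id, ticket = io_requests.get()
        if page_id is None:
            break  # shutdown signal
        ticket["data"] = f"contents of page {page_id}"  # pretend disk read
        ticket["ready"].set()  # wake the suspended query thread

def run_query(page_id, results):
    # Per-query thread: lives only as long as the query. It suspends
    # itself until the I/O processor brings the page "into memory".
    ticket = {"ready": threading.Event()}
    io_requests.put((page_id, ticket))
    ticket["ready"].wait()
    results.append(ticket["data"])

io_thread = threading.Thread(target=io_processor)
io_thread.start()

results = []
queries = [threading.Thread(target=run_query, args=(n, results)) for n in (1, 2, 3)]
for q in queries:
    q.start()
for q in queries:
    q.join()
io_requests.put((None, None))  # shut the I/O thread down
io_thread.join()
print(sorted(results))
```

The win is that a blocked query costs almost nothing while it waits - no full process context is sitting around being swapped in and out.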

This model was so much more efficient that Sybase initially had a huge performance advantage over process-based managers like Oracle, and all of the major database products quickly shifted to a thread model.

But even threads were too heavyweight for highly replicated applications such as mail systems, and an even smaller processing unit - the "fiber" - came into use. A fiber can be arbitrarily defined to contain as small a context as needed, sometimes as little as a few registers. Fibers can be useful in programming around speculative execution as well as for simple but broad cooperative multitasking.
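Python generators make a rough stand-in for fibers: each one holds only a tiny saved context, and "switching" is an ordinary function call with no kernel involvement. A sketch of a round-robin cooperative scheduler (all names here are illustrative):

```python
from collections import deque

def fiber(name, steps):
    # Each generator is a tiny user-level context - far smaller than
    # a thread, let alone a process.
    for i in range(steps):
        yield f"{name}:{i}"  # yield = voluntarily hand the CPU back

def run(fibers):
    # Cooperative round-robin scheduler: no preemption, no kernel -
    # just resuming whichever saved context is next in line.
    ready = deque(fibers)
    trace = []
    while ready:
        f = ready.popleft()
        try:
            trace.append(next(f))
            ready.append(f)  # still alive: requeue it
        except StopIteration:
            pass             # fiber finished, drop it
    return trace

trace = run([fiber("a", 2), fiber("b", 2)])
print(trace)  # ['a:0', 'b:0', 'a:1', 'b:1']
```

Because each context is so small, thousands of these can interleave where an equal number of threads would swamp the scheduler.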

There are components to support all three constructs in Solaris, NT and Linux - the differences are in the implementations, and in the amount of overhead associated with each unit. These overheads are also architecture- and processor-dependent. A processor architecture which spends silicon to track and map speculative execution reduces the need for fiber programming, for example. Also, a design which assumes that a certain number of fiber contexts will execute out of either the instruction/data caches or L2 may work great with a 2 MB cache but thrash on a 512 KB cache...

Hope this helps.