NASA used to use the OS-9 real-time operating system on its control computers for launch command control and engineering system control. It was a fault-tolerant, real-time, multitasking OS that would run on small and large computers alike. It would even run on the CoCo, the Tandy Colour Computer, which used the 6809E chip. That was because the 6809E was built with process-control features that OS-9 used for task switching. Those features were not well implemented in the 8086 at the time. The 8086 was superior at string processing and I/O, which was one consideration that led IBM to choose it for the business environment, but it was much poorer at system control and task monitoring. For this reason the Mac and Amiga OSes could boast much faster multitasking at the systems level.
The real consideration in any military system is fault tolerance and recovery based on real-time monitored situations. For this reason the Canadian military used Tandem computers: they duplicated systems and code processes and emphasized fallback and non-stop operation. NT is based on co-operative code that tries to optimize a non-real-time environment. Its whole philosophy is to handle processes from an abstraction based on the user's interaction with code and the code's handling of data; the overall handling of the task load seems to have been left out of the equation to a degree. It would appear, too, that the chief faults where systems lock up are memory conflicts/overwrites, bounds overruns and interrupt conflicts. As long as code can demand interrupts, leave registers in disarray and write to memory at will, there will be flubs. Many badly behaved pieces of code in Windows used to grab all the interrupts and never relinquish them.
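Here is a toy C sketch of the kind of flub I mean (the struct and field names are made up, nothing Windows-specific): an unchecked copy overruns its field and silently tramples the state sitting next to it.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example: a fixed 8-byte name field sitting next to a
     * piece of control state, with no bounds checking on the write. */
    struct packet {
        char name[8];
        int  valve_open;   /* adjacent state the overrun tramples */
    };

    int main(void)
    {
        struct packet p;
        p.valve_open = 0;

        /* 8-byte field, 12-byte input, no bounds check: the copy runs
         * past name[] and overwrites valve_open. */
        strcpy(p.name, "PUMP-ROOM-2");

        printf("valve_open = %d\n", p.valve_open);   /* no longer 0 */
        return 0;
    }

Nothing in the language or the OS stops that write, which is exactly why the lockups keep coming.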
The fundamental flaws with the PC are a badly laid out memory architecture, a paucity of interrupts and bad interrupt behaviour. A multitasking system does not want to service interrupts except when some time-dependent task demands it, i.e. I/O buffers. There should be a memory monitor that can recover from overwrites caused by buffer overflows and faulty inputs. In other words, code should have memory bounds built in, and every variable should be checkable against reasonable input/output conditions. What a military system needs is a silicon disk cache, controllable from the OS, where memory demands can be swapped out and where sudden growth in memory writes (which should all be relocatable) can be investigated or alarmed. That way the ship filling up with 20 teralitres of water per second can be checked on, or 100,000 aircraft appearing on the horizon at once can be investigated. We need some fuzzy logic in the OS at some point, at least to sound alarms when the data gets out of hand.
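As a sketch of what I mean by checkable variables (the limits and names here are invented, not from any real system), every reading could carry a plausible range and anything outside it would sound an alarm instead of being trusted:

    #include <stdio.h>

    /* Hypothetical sketch: each variable carries plausible bounds, and a
     * write outside those bounds raises an alarm instead of being trusted. */
    typedef struct {
        const char *name;
        double      min, max;   /* plausible range for this reading */
        double      value;
    } checked_var;

    /* Store the value if it is plausible, otherwise sound the alarm. */
    int checked_set(checked_var *v, double reading)
    {
        if (reading < v->min || reading > v->max) {
            fprintf(stderr, "ALARM: %s = %g is outside [%g, %g]; "
                            "investigate before acting on it\n",
                    v->name, reading, v->min, v->max);
            return -1;
        }
        v->value = reading;
        return 0;
    }

    int main(void)
    {
        checked_var flooding = { "flood rate (litres/s)", 0.0, 50000.0, 0.0 };

        checked_set(&flooding, 120.0);    /* plausible: accepted quietly */
        checked_set(&flooding, 2.0e13);   /* 20 teralitres/s: alarm and hold */
        return 0;
    }

That is the crude, non-fuzzy version; the point is that the checking lives with the data, not in the goodwill of whoever wrote the code that fills it.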
Eventually, and neither too soon nor too late, a better bus architecture suited to I/O, networking and multitasking should be built. The PC is not it. The PCI bus was a start, but it fell short and should not have been married to the ISA bus. Some marriage of the best parts of the 68000-series controllers and the Intel string and disk I/O, with a bus or buses that can achieve true simultaneity, would be desirable. It would not be too hard to build a bus for I/O, a bus for memory, a bus for display and a bus for storage, with separate processors controlling each function. At some point a program has to use each different bus, but data could be loaded onto a central marshalling bus where access could be from separate registers on all the buses at the same time. A flat memory model should be used, with no limitation on the protected memory where system programs could reside. Overflow of buffers could be handled by a buffer control chip which would swap them out to memory asynchronously.
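As a rough software model of that buffer control chip (the sizes and names are invented for illustration, and a chip would do this in silicon, not threads): the producer fills a small ring and a separate drain empties it to backing memory asynchronously, so the producer never stalls for long.

    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define RING 8

    static char  ring[RING][32];
    static int   head, tail, done;
    static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

    /* The "buffer control chip": drains the ring to backing memory. */
    static void *drain(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (head == tail && !done)
                pthread_cond_wait(&nonempty, &lock);
            if (head == tail && done) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            char out[32];
            memcpy(out, ring[tail], sizeof out);
            tail = (tail + 1) % RING;
            pthread_mutex_unlock(&lock);

            printf("swapped out: %s\n", out);  /* stand-in for backing memory */
        }
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, drain, NULL);

        for (int i = 0; i < 20; i++) {
            pthread_mutex_lock(&lock);
            /* In a real design a full ring would itself raise an alarm;
             * here we just wait for space. */
            while ((head + 1) % RING == tail) {
                pthread_mutex_unlock(&lock);
                usleep(1000);
                pthread_mutex_lock(&lock);
            }
            snprintf(ring[head], sizeof ring[head], "block %d", i);
            head = (head + 1) % RING;
            pthread_cond_signal(&nonempty);
            pthread_mutex_unlock(&lock);
        }

        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_broadcast(&nonempty);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        return 0;
    }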
Or just give me 100 interrupts and 20 comm ports
EC<:-} |