To: John F. Dowd who wrote (4808) 12/27/1997 From: Kashish King
"Since the objects are distributed, the functionality of the system seems heavily dependent on the network working 100% of the time. What are the redundancy requirements for such a system?"

When you are running an application, you are running it on your local machine. That application should expect that whatever services it needs may not be available from time to time, so in that sense the two can operate independently. Moreover, there is no reason to load components or applications unless and until the version you have changes. There is no reason why Java applications cannot be permanently stored on a small, rugged local hard disk, and that's the cheapest way to cache commonly used components anyway. In other words: centralized installation and management without any performance penalty.

Keep in mind, you have a flow of objects and information from around the network, but that software design does not preclude caching.

Also note that there's nothing stopping anybody from compiling Java into native code for a particular hardware platform. It's what I call WAT, for Way-Ahead-of-Time compilation. WAT is what we've been doing for the last 20 years or more with C and other compiled languages. You still write once and you still run anywhere; it's just that you've more or less cached the whole enchilada way ahead of time.

Having said all that, the reliability of your network connection is second only to the availability of power and, like power, any viable network will have built-in, cost-effective, easy-to-manage fail-over systems. It's surprising how much attention is being paid to this by Cisco, Novell, CA, Microsoft, Sun, IBM and the many hundreds of smaller companies involved in network computing -- which is what we really mean when we say NC: network computing.
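The point about an application expecting its services to be unavailable from time to time can be sketched in a few lines of Java. This is only an illustration under assumed names (QuoteClient, RemoteQuote are hypothetical, not any real product's API): the local application tries the network service, and when the call fails it falls back to the last value it cached, so it keeps running independently of the network.

```java
import java.util.Optional;

public class QuoteClient {
    // Hypothetical remote service; fetch may throw when the network is down.
    interface RemoteQuote { double fetch(String symbol); }

    private final RemoteQuote remote;
    private double lastKnown = Double.NaN; // locally cached last value

    QuoteClient(RemoteQuote remote) { this.remote = remote; }

    // Try the network service; on failure, serve the stale cached value
    // so the local application degrades gracefully instead of stopping.
    Optional<Double> quote(String symbol) {
        try {
            lastKnown = remote.fetch(symbol);
        } catch (RuntimeException networkDown) {
            // service unavailable: fall through to cached data, if any
        }
        return Double.isNaN(lastKnown) ? Optional.empty()
                                       : Optional.of(lastKnown);
    }
}
```

The design choice is simply that the caller never sees a network failure directly; it sees either fresh data, stale data, or an empty result it can handle locally.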