Technology Stocks : All About Sun Microsystems


To: Robert who wrote (10498), 7/17/1998 1:27:00 PM
From: Michael L. Voorhees
 
Robert: JINI seems to fit the bill in that code can be shipped transparently over the InterNet to machines as opposed to MPI. I stated "future" as I assume that MPI could very well be part of future APIs. As far as storage requirements, they are not the main problem, I have a "granularized" version of GMRES which I have developed that can actually segregate storage (if necessary). My main need is MFLOPS so MPI fits the bill but JINI (in the future) would fit it better with the proper API as I would not have to get MPI software and applications running on all machines. This I believe to be a promising use of JINI, i.e. transparent implementation of MPI bandwidth without MPI implementation explicitly being implemented on InterNet networked machines. The potential for scientific consultants is immense, i.e. they can technically compete with the computing horsepower that are in existence at even the largest institutions. In fact, if properly implemented JINI scientific computing could make most of these facilities obsolete as sharing or computing resources would be seamlessly integrated to even the smallest computing devices (including the storage of matrices which are usually generated by the application and are not user input and could thus be stored by larger machines on the network).