Politics : View from the Center and Left


To: Mary Cluney who wrote (126899) 12/10/2009 8:27:30 PM
From: cosmicforce
 
Even if the remaining hurdles were purely technical, it is the major accountability, ethical, and philosophical issues surrounding turning over final kill orders to war machinery that trouble me most. Many engineers would be well served by a course in ethics. Fortunately, some of us have studied this topic.

The rover missions are far less autonomous than some people may suppose. Humans plan every course for the adaptive robot, which it can execute with some level of emergency-control intelligence previously developed and tested back on Earth. Options are provided for failure modes, but they are limited. The rovers are never "winging it": if the rover becomes confused, it tries to get safe and then calls home. The rest is just running through a playbook.
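The "playbook" model described above can be sketched in a few lines. This is purely illustrative pseudocode in Python, not actual flight software; the names (`Mode`, `run_plan`, the failure labels) are all invented for the example, but the logic mirrors the idea: execute a pre-planned command sequence, handle only anticipated failures, and for anything else get safe and phone home.

```python
# Illustrative sketch of the rover "playbook" control model (hypothetical
# names, not real flight software): execute an uploaded plan step by step;
# anticipated failures get a canned response, anything unexpected means
# "get safe and call home".
from enum import Enum, auto

class Mode(Enum):
    EXECUTING = auto()
    SAFE = auto()  # halted, waiting for instructions from Earth

def run_plan(plan, known_failures):
    """Run a pre-planned command list; any surprise drops us into safe mode."""
    log = []
    for step in plan:
        outcome = step()  # each command reports its result
        if outcome == "ok":
            log.append("done")
        elif outcome in known_failures:
            # Anticipated failure mode: apply the pre-tested response.
            log.append(known_failures[outcome])
        else:
            # Confusion: stop everything and wait for Earth.
            log.append("safe mode, calling Earth")
            return Mode.SAFE, log
    return Mode.EXECUTING, log
```

For example, a plan whose third command returns an outcome not in the playbook would leave the rover in `Mode.SAFE` with the last log entry reading "safe mode, calling Earth" — no improvisation, just a halt and a call home.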

Balancing the issues of international tension, OTOH, is at least a million times more complex than exploring a planet fully autonomously with no feedback from Earth (i.e., "Just go there, drive around, send us what is interesting. End of orders."). IMO, amoral stabilization of world tension is a million million (10^12) times harder than what we are doing with rovers. Ethical stabilization is a further million times more complex.

I appreciate the science-fiction angle of where your thoughts lie, but as an amateur with some knowledge of AI and ethics, I continue to believe this is not a rich area for human technological exploration. JMHO.