There is considerable doubt. Now, it could happen. But it probably won't. Solving the problem you are talking about is what they call "strong AI", and that is a very long-term goal at the moment.
Some at the DOD who work in this field anticipate significantly autonomous robot "soldiers" by 2025. The longest estimate I've seen is 2035.
However, AI isn't necessary for autonomy. I'll agree that AI in the sense of an animal such as a cat could be very far away. But expert systems are a different matter. The question, of course, is whether the capabilities of expert systems will be sufficient to allow a robot to open fire on a target of its own accord.
And the reality is we've been doing that for years. The AEGIS system that took out the Iranian jetliner in '88 had at least four different "fail-safe" modes allowing sailors to override its "judgment". Yet when the system ID'ed the jetliner as an Iranian fighter, nobody was willing to shut it down in the face of the hard data the system provided.
Effectively, the AEGIS system had "autonomy" in '88. Sure, someone could have pushed a button and stopped it, but they didn't. There are plenty of other examples as well (Patriot batteries shooting down friendly aircraft during the 2003 invasion of Iraq come to mind).
Strong AI isn't required for this functionality; expert systems probably are.
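To make that concrete, here is a minimal sketch of an expert-system-style engagement decision with a human override. Every rule, field, and threshold is invented for illustration and has nothing to do with the actual AEGIS logic; the point is only that once the operator stops second-guessing the rule base, the fail-safes are decoration:

from dataclasses import dataclass

@dataclass
class Track:
    """Hypothetical radar track of an unidentified aircraft."""
    altitude_ft: float
    descending: bool
    military_iff: bool      # transponder read as a military code
    closing_on_ship: bool

def classify(track: Track) -> str:
    """Toy rule base mapping track features to a threat label."""
    if track.military_iff and track.descending and track.closing_on_ship:
        return "hostile fighter"
    if track.altitude_ft > 10_000 and not track.descending:
        return "probable airliner"
    return "unknown"

def engagement_decision(track: Track, operator_override: bool) -> str:
    """The engagement proceeds unless a human actively intervenes."""
    if classify(track) == "hostile fighter" and not operator_override:
        return "FIRE"
    return "HOLD"

if __name__ == "__main__":
    # A civilian flight whose (misread) data happens to satisfy the "hostile" rules.
    misread = Track(altitude_ft=9_000, descending=True,
                    military_iff=True, closing_on_ship=True)
    print(engagement_decision(misread, operator_override=False))  # FIRE
    print(engagement_decision(misread, operator_override=True))   # HOLD

Nothing in there is intelligent, yet if the override never gets used, the system has made the call.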
Actually, bentway's mention of fusion is very apt.
Again, there are examples on both sides. Fusion is a much different kind of problem.
I would say a better example might be the idea of computers winning at chess. First, we were told that computers couldn't beat a person at chess. They did. Then we were told they couldn't beat an expert at chess. They did. Then we were told they couldn't beat a Grand Master, and Deep Blue did just that in '97.
Nothing can come even close to passing a Turing test.
Actually, there are those who believe passing a Turing test is on the near horizon. But the reality is that, with the prize being a mere $100K, nobody really cares anymore; the time is far better spent on something more productive. Kurzweil has predicted a machine will pass by 2029.
However, the Turing test is far more difficult than the challenge of basic but significant battlefield autonomy. That does not mean robots walking around gunning people down; it may mean the ability to follow a predetermined route, act on certain conditions, and require human intervention before firing on a purported enemy.
There is no simple yes or no here, which is why I referred to "significant" autonomy and why I don't believe strong AI is a prerequisite for it.
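Just to pin down what I mean by "significant" autonomy, here is a rough sketch; the waypoints, the sensor check, and the operator prompt are all invented for illustration and don't reflect any real UGV software. The robot navigates and reacts on its own, but a human stays in the loop for the one decision that matters:

import time

# Hypothetical patrol route; none of this reflects a real system.
ROUTE = [(0, 0), (100, 0), (100, 50)]

def detect_contact(position):
    """Placeholder sensor check; a real system would fuse radar/EO data."""
    return position == (100, 0)   # pretend something is spotted at the second waypoint

def request_human_authorization(contact_position):
    """Human-in-the-loop gate: the robot never fires without a 'yes' from an operator."""
    answer = input(f"Contact at {contact_position}. Engage? [y/N] ")
    return answer.strip().lower() == "y"

def patrol(route):
    for waypoint in route:
        print(f"Driving to {waypoint}")       # follow a predetermined route
        time.sleep(0.1)                        # stand-in for actual navigation
        if detect_contact(waypoint):           # act on certain conditions
            if request_human_authorization(waypoint):
                print("Engaging target.")
            else:
                print("Holding fire, continuing patrol.")
    print("Patrol complete.")

if __name__ == "__main__":
    patrol(ROUTE)

Everything except the trigger pull is autonomous, and none of it needs anything close to strong AI.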
The use of unmanned ground vehicles will grow substantially over the next six years, to the point that roughly half the vehicles in the military inventory are unmanned. At least, that is the Future Combat Systems (FCS) plan at this time. These will involve many different kinds of robots, and there will be surface and undersea robots in that time frame as well.
I don't really get the comparison with fusion. Military analysts who deal with robotics believe we're on the cusp of a Singularity in this field, and it makes enough sense that you can't help but think it is likely the case.