To: Wharf Rat who wrote (884526)  9/2/2015 2:59:34 PM
From: one_less
 
Forget killer robots, we should be worrying about robotic SPIES: US military's top AI expert says protecting privacy is our biggest concern

- Comment made by Gill Pratt, program manager for the Darpa Robotics Challenge
- 'How do we protect the information that the robot picks up?' he asks
- He claims there is, today, too much trust in the software used in devices
- But he doesn't believe we should ban development of robotic weapons

By Ellie Zolfagharifard For Dailymail.com

Published: 18:04 EST, 1 September 2015 | Updated: 18:04 EST, 1 September 2015

One of the US military’s top scientists claims it isn’t killer robots we need to worry about, but an uprising of robotic spies.

Gill Pratt, the program manager for the Darpa Robotics Challenge, recently told Defense One that banning autonomous weapons was wrong.

Our focus should instead be on protecting intelligence, he said.

‘The danger is not in the legs. It’s in the camera and the microphone,’ said Pratt. ‘How do we protect the information that the robot picks up?’

Gill Pratt (right), the program manager for the Darpa Robotics Challenge, recently told Defense One that banning autonomous weapons was wrong. Our focus should instead be on protecting intelligence, he said. Pictured on the left is one of the robots that recently competed in the Darpa Robotics Challenge

In the future, Pratt envisions robots doing everything from helping the elderly at home and carrying our backpacks on a hike to aiding in disaster recovery operations.

‘I’d love to have a machine help me when I grow old,’ he said in an in-depth interview with Defense One. ‘But I don’t want all the information, all that the robot is watching, to be made public.

‘How do we protect against that? I don’t know.

‘These are serious questions, but they aren’t specific to the robotics field. They’re specific to IT.’

He claims that, today, too much trust is placed in the software used in devices such as mobile phones.

His point was proven last year when experts found that the gyroscopes in mobile phones can be turned into crude microphones that pick up phone conversations with the aid of specialist software.
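That research treated the gyroscope’s own data stream as a very low-rate audio signal. The short Python sketch below is an illustration of the idea only, not the researchers’ code: it assumes numpy and scipy are available, uses synthetic samples in place of real gyroscope output, and simply shows that a speech-band vibration leaves a visible trace in readings most software treats as harmless motion data.

import numpy as np
from scipy.signal import spectrogram

FS = 200.0                         # assumed gyroscope sampling rate exposed to apps (Hz)
t = np.arange(0, 2.0, 1.0 / FS)    # two seconds of samples

# Pretend the phone sits next to a speaker: a faint 85 Hz 'voice' tone (a low speech
# fundamental) rides on top of ordinary motion noise in the z-axis rotation readings.
gyro_z = 0.02 * np.sin(2 * np.pi * 85.0 * t) + 0.005 * np.random.randn(t.size)

# A spectrogram of the sensor stream exposes the acoustic component.
freqs, times, power = spectrogram(gyro_z, fs=FS, nperseg=128, noverlap=96)
dominant = freqs[np.argmax(power.mean(axis=1))]
print('Strongest sustained frequency in the gyro stream: %.1f Hz' % dominant)

Recovering intelligible speech from such a trace is far harder than this, of course; the point is only that audio information leaks into a sensor that apps can typically read without any microphone permission, which is exactly the kind of misplaced trust Pratt is describing.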

‘I don’t worry about the robot on the loose doing physical damage.

‘The valuable stuff is the data. That issue is huge and transcends whether it’s a robot, a cellphone, or a laptop.’

‘The danger is not in the legs. It’s in the camera and the microphone,’ said Pratt. ‘How do we protect the information that the robot picks up?’ Pictured is a scene from Terminator Genisys

Earlier in the summer, Elon Musk and Stephen Hawking signed a letter urging governments to ban the development of autonomous weapons.

The letter warned that 'autonomous weapons will become the Kalashnikovs of tomorrow'.

The experts point out that, unlike nuclear weapons, AI weapons require no costly or hard-to-obtain raw materials.

This means they will become ubiquitous and cheap for all significant military powers to mass-produce.

'If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,' the letter states.

'Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group,' the letter states.

'We therefore believe that a military AI arms race would not be beneficial for humanity.'

But Pratt believes now is the wrong time to be making this decision. He said we first need to understand what is possible before deciding to ban such weapons.

‘In the case of lethal autonomy, we need to learn a whole lot more, and there’s a whole lot of good that they can do, too, in stopping lethal errors from happening,’ he added.

Earlier this year, an open letter signed by more than 1,000 robotics experts, including Tesla founder Elon Musk (right) and physicist Stephen Hawking (left), called for an outright ban on 'offensive autonomous weapons beyond meaningful human control' in an effort to prevent a global AI arms race



Read more: http://www.dailymail.co.uk/sciencetech/article-3218997/Forget-killer-robots-worrying-robotic-SPIES-military-s-AI-expert-says-protecting-privacy-biggest-concern.html#ixzz3kbpRS94E