AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years. The stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms … The endpoint of this technological trajectory is obvious. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting…

Autonomous Weapons: an Open Letter from AI & Robotics Researchers – The Future of Life Institute

By Malek Murison

Merging the military with Artificial Intelligence is a dangerous and slippery slope. That is the warning from tech experts in the field of AI. Think Terminator meets I, Robot: indestructible killer robots with the ability to choose their own targets and act autonomously could well be possible within a matter of years. It’s pretty scary stuff, and those who have already put their signatures on the petition include physicist and author Stephen Hawking, academic Noam Chomsky and tech entrepreneur Elon Musk – but why are they so against the idea of autonomous weapons?…

Killer Robots

On the face of it, allowing intelligent robots to do our fighting for us makes a lot of sense. If your army is made up of machines, you aren’t going to lose a lot of lives, and you can avoid putting your soldiers at risk on the front line. The problem comes when an arms race starts, and nations with fewer financial and scientific resources are left at the mercy of a new generation of super soldiers, capable of firing faster and more accurately, and with a fearlessness that no human can contend with.

Robotic sentry guns are already in use on the border between North and South Korea – but they aren’t autonomous.

It’s important to remember that this debate is set against the backdrop of rapid progress in Artificial Intelligence, and the kinds of issues that arise from advances in this field. Both Elon Musk and Stephen Hawking have recently spoken out about the threat of Artificial Intelligence becoming self-aware, warning that full AI could be our “biggest existential threat” as a species and that it could “spell the end of the human race”. Bold statements, but it’s safe to say that these guys know what they’re talking about.

It won’t be long until AI gives us the opportunity to create highly efficient, incredibly dangerous military machines – in many ways the perfect soldiers, programmed simply to obey orders. The thought of these kinds of weapons getting into the hands of a dictator or terrorists doesn’t bear thinking about, but even more frightening is the idea that at some point down this road the machines we create will develop minds of their own. Then who’ll be giving the orders?…

Want to find out more? Visit the Campaign To Stop Killer Robots or sign the petition yourself online at the Future Of Life Institute.

Header Image Copyright @Pacificor

Malek Murison is a freelance tech journalist working closely with clients in the drone industry.
