
Even Ex-Google Employees Are Validating Your Fears About An AI Killer Robot Apocalypse

*throws my roomba into the ocean*

You don’t need to be an all-out technophobe to be more than a little wary of artificial intelligence. It’s no longer the realm of science fiction – the threat of warmongering killer robots was enough to make at least one Google engineer quit her job for ethical reasons.

There’s only one thing worse than a smart robot, and that’s a smart robot with weapons. Uncanny valley bots like Sophia might be nightmarish, but they’re not dangerous (allegedly). Most of the recognisable AIs are little more than a marketing tactic to sell tech that’s far more useful in an everyday sense.

Defence forces, however, have a very different motivation for employing AI technology, and it’s definitely on the darker side of ‘morally grey’. Google was collaborating with the Pentagon to use artificial intelligence to analyse drone footage, but dropped the contract after workers protested.

Despite that project fizzling out, there are still plenty of countries that use autonomous weapons in their militaries, including the USA, Russia and Israel. No, we’re not talking about Terminator-style cyborgs stomping around. These are mainly weaponised vehicles – unmanned tanks and submarines that are still perfectly capable of blowing stuff up.

War is, in general, pretty bad. Hot take, I know. But the real problem is that machines are not perfect, and killer robots are no exception. Letting them figure out who to fire at, without a human there to override it, is a surefire way to commit massive violations of the Geneva Convention.

That’s not because the robots are going to start feeling a tad genocidal, but because glitches happen even within the most advanced systems.

Scientists can tell me all they like that a killer robot uprising is impossible, but I will continue to avoid smart home devices like the plague. Besides, there isn’t really a consensus on the risk – even Elon Musk and Stephen Hawking have expressed suspicion of superintelligent AI.

If we can just find a way to stop robots from inheriting all of humanity’s nastiest qualities, we should be fine. Easy, right?