
Whenever I discuss or follow issues surrounding artificial intelligence, it’s not long before the topic of killer robots comes up. That’s to be expected, and not just because most of us have seen “Terminator” one too many times. Still, it’s no laughing matter.
At this stage in our technological development, killer robots and autonomous weapons aren’t restricted to science fiction. They already exist in many respects. We just call them drones or unmanned combat aerial vehicles. These are real things that operate in real war zones. They have killed people, including innocent civilians.
They may not look like your typical T-101, but make no mistake. They’re every bit as deadly. They don’t need to hide within the body of Arnold Schwarzenegger. They can just fly thousands of feet over your head and kill from a distance. That’s a scary thought for anyone in a war zone, but you can take some comfort in the fact that these machines still have human operators.
That could eventually change. As I’ve noted before, artificial intelligence is advancing to a point where it can match or exceed the skill of an ordinary human in specialized tasks. Even if we’re decades away from an AI with the general cognitive capacity of an ordinary human, we already have AI systems that specialize in particular tasks, like chess.
It wouldn’t be that great a stretch to develop an advanced artificial intelligence that could specialize in flying combat drones without any human input. In principle, such a system wouldn’t be nearly as prone to error or hesitation once human pilots were taken out of the equation.
However, that also raises serious ethical and strategic questions. If humans are out of the loop in operating these autonomous weapons systems, how does that change the way we conduct warfare? What does it mean for both sides of an armed conflict?
Ideally, an advanced AI will be better at making decisions that limit civilian casualties. That is likely the ultimate goal of researching these systems. The problem is we’re still a long way from that goal, so much so that at least one government is already trying to establish a precedent.
Fittingly, it’s the government of a country that isn’t in a war zone or anywhere near one. New Zealand, a country not known for frequent warmongering, recently pushed for a worldwide ban on autonomous weapons systems. It’s a bold and ambitious effort, but one I believe we should take seriously.
Stuff: Government to push for international ban of autonomous weapons, or killer robots
The Government will push for an international ban on fully autonomous weapons, or killer robots, that use artificial intelligence to target and kill people without any human decision-making.
New Zealand has for decades advocated for disarmament in international forums, after declaring the country a nuclear-free zone in the 1980s. Autonomous weapons are seen as a new frontier in the arms race between major military powers.
Disarmament Minister Phil Twyford on Tuesday said the Government had decided to take a “tough and uncompromising” stance on autonomous weapons, and seek a ban of fully autonomous weapons on the international stage.
Regardless of how you feel about New Zealand’s policies or intentions, this is one rare instance where politics might have to inject itself into science. Like it or not, the pace of artificial intelligence is accelerating. Few agree on how much time it’ll take to reach human level AI, but nobody denies that it’s an advancing field.
At some point in the very near future, we’ll have AI systems specifically tailored to fly combat drones with the same proficiency as a skilled Air Force pilot. That does not require human level AI. It just requires AI that can handle the various tasks associated with operating these systems.
When that time comes, it will be tempting to take flawed humans out of the loop. That means militaries with autonomous weapons systems won’t have to be nearly as hesitant about deploying them.
We can safely assume this because there’s a big difference between pushing a button that fires a gun and ordering another human being to fire that same gun. Even if that other human is trained and obedient, they’re still human. They can still make mistakes. They can still hesitate.
For once, that’s not a bad thing. Sometimes, we need humans to hesitate in the midst of war. Were it not for hesitation, the Cuban Missile Crisis could’ve ended very differently.
If waging war is too easy for those with access to these systems, then more war will follow. More war means more death, destruction, and violence. Too much of that and lasting peace becomes much more difficult. The whole of 20th century history is proof enough of that.
Like nuclear weapons, autonomous weapons systems are a Pandora’s Box that cannot be closed once opened. We’ve already opened it partially thanks to drone warfare. For that reason, I think New Zealand has the right idea in terms of mitigating the potential danger.
Even if autonomous weapons systems become so advanced that they operate better than any human, we still need at least one human behind the decision-making process. We still need a flesh-and-blood person to pull the trigger rather than entrusting that decision to a machine.
We, as a species, simply cannot and should not walk this path with our technology. It’s not just about limiting the use of dangerous weapons. Once we start teaching these advanced AIs to kill, we won’t be able to unteach them. If they eventually become too good at it, that does not bode well for humanity as a whole, regardless of which side they’re on in any war.