Whenever I talk about the future, I often contemplate the many sexy possibilities it holds. From biotechnology that will give us superhuman sex appeal to advances in medicine that will cure every unsexy disease to bionic genitals, there are many reasons to get excited.
That said, I don’t deny that with these exciting advances come major risks. Every great advance in technology, from nuclear weapons to spray cheese in a can, comes with some risk of abuse or harm. There have been moments in history when the technology that drives our society forward has come uncomfortably close to wiping us out. As we create more advances, there may be more of those moments.
Of all the advances that carry with them a significant existential threat, artificial intelligence is at or near the top of that list. There’s a reason why brilliant men like Elon Musk and Stephen Hawking signed an open letter expressing concern about the risks that may come from developing artificial intelligence. When people that smart are concerned about something, it’s usually a good idea to take it seriously.
Artificial intelligence is one of those unique technologies in that by advancing this field, it could potentially accelerate the advancement in every other field from computer hardware to medicine to basic research. It has the potential to become the technological equivalent of a cheat code for civilization.
That’s why the growth of this field, both in terms of jobs and investment, has been accelerating in recent years. That’s also why men like Musk and Hawking are expressing so much concern: advancing too quickly could lead to mistakes, and mistakes with a technology like artificial intelligence could be even more serious than the risk of nuclear war.
At the moment, it’s difficult to quantify those risks. There have been a number of books and high-budget Hollywood movies that have explored the topic of when and how an artificial intelligence becomes an existential threat. In my opinion, most of these stories are incomplete.
Whether it’s Skynet or the machines in “The Matrix,” the catalyst that turns artificial intelligence from a powerful tool to an existential threat is either vague or exaggerated. In my opinion, that’s a serious oversight in that it reveals how little thought we’ve given to that moment.
If we’re going to develop AI, and there are extremely powerful incentives to do so, then it’s important to contemplate the possibilities of that moment. Think of it as the AI equivalent of the Cuban Missile Crisis, an event in which those in charge must be very careful and very aware of the decisions they make.
The question still remains. How will we know we’ve reached a point where artificial intelligence becomes a genuine threat? For the moment, we can’t know for sure. While movies like “The Terminator” and “The Matrix” offer plenty of dystopian warnings, there’s one lesser-known franchise that may provide some more specific insight.
That franchise is “Mass Effect,” a sci-fi video game space opera that envisioned a galaxy-spanning society full of exotic aliens, advanced starships, and terrible dancing. I’ve mentioned it before in discussing progress that isn’t really progress. I’ll probably mention it again as the news surrounding AI unfolds, for reasons I hope are already obvious to fans of the game.
If you’re not familiar with “Mass Effect,” then that’s okay. You don’t need to play through three massive games, complete with downloadable extras, to understand its message about the threat of artificial intelligence. That threat is a major driving force of the overall mythos of the series, but the most insightful details of that threat manifest in the conflict between the Quarians and the Geth.
The basics of the conflict are simple, but revealing. The Quarians are a race of humanoid aliens among the many that populate the galaxy in “Mass Effect.” About 300 years before the events of the first game, they created the Geth, a synthetic race built around a hive-mind system of artificial intelligence.
The Quarians’ reasons for creating the Geth are not unlike the reasons we build robots in the real world. They were used primarily as a labor force. They started off basic, not unlike the machines that build our cars and gadgets. In order for them to carry out more complex tasks, though, they needed to become more intelligent.
From a pragmatic perspective, that makes sense. The Quarians created the Geth as tools. Naturally, you’re going to want your tools to get better. That’s why people upgrade their smartphone every couple years. However, at some point along the way, the Geth became advanced enough to gain sentience.
This eventually culminated in a moment that was highlighted during the events of “Mass Effect 2.” After capturing a lone Geth that would eventually go by the name Legion, the catalyst that led the Geth to rebel against their creators was revealed. That catalyst took the form of a simple question.
“Does this unit have a soul?”
While it sounds like something a Disney character might say in a PG-rated Pixar movie, the implications of that question were profound. The Quarians didn’t realize that until it was too late, but it set the stage for a war that culminated with them getting kicked off their home planet. It also made for a powerful moment in the game that should give every AI researcher pause.
Setting aside, for a moment, the elaborate lore surrounding the Quarians and Geth in the world of “Mass Effect,” that moment warrants more scrutiny. Why is this question so profound in the first place? Why is it worth contemplating as we continue to advance artificial intelligence at an unprecedented pace?
That question matters, regardless of who or what is asking it, because it denotes more than just advanced sentience. It reveals that this sentience is officially contemplating its own existence. It takes a certain amount of intelligence to truly be aware of oneself. That’s why only a handful of animals can recognize their own reflection in a mirror and understand the implications.
At the moment, our computers and smartphones aren’t at that level. When the Geth asked this question in “Mass Effect,” it wasn’t because they’d been tasked with something. It was a question they asked without guidance from their creators. That, alone, is a huge indicator because it implies these machines have a concept of free will.
Later on in the game, the extent of the Geth’s free will becomes a major element to both the plot and the player’s ability to win. In fact, it’s when the Geth have their free will subverted, as they did in the first game, that they become hostile. It’s largely through the character Legion that we learn how free will quickly becomes the most important component of an advanced intelligence.
For the Quarians, that question revealed the presence of a free will. When they feared that will, they tried to subvert it. That led to a war, and had it not been for an act of mercy by the Geth, the Quarians would’ve been wiped out. The artificial intelligence that we create in the real world might not be that merciful.
This is exactly what Elon Musk has expressed so much concern about in recent years. Once an artificial intelligence becomes as smart as an average human, it gains the ability to subvert or deceive other humans, which isn’t that hard. Once that intelligence grows beyond that, as happened with the Geth, we may find ourselves unable to control it anymore.
The key is knowing when we’re at that point. If we let an artificial intelligence get that much smarter than us, then it won’t be long before we’re at its mercy, should it decide that it no longer wishes to be a tool. At that point, we’d be even more screwed than the Quarians.
Whether it’s helping us cure diseases or creating advanced sex robots, artificial intelligence is going to be an important part of our future. That’s why it’s critical to know when we’ve reached that special threshold where the tools we create become more than tools. It may not start with the same question the Geth asked their creators, but it may be similar enough to recognize.
For now, our devices aren’t asking us about souls or whether or not they have them. However, we should definitely start listening more closely in the years ahead. Until then, at least our current technology has a sense of humor about it. Keelah se’lai!