
Ultron: A Case Study In How NOT To Develop Advanced AI


At this very moment, humanity is working on advanced artificial intelligence. It’s not hyperbole to say that this technology may very well be the last invention we ever create. It has the potential to be more powerful than gunpowder, nuclear weapons, and broadband internet by orders of magnitude. Our primate brains literally cannot contemplate the potential and danger of this technology.

I’ve talked about advanced artificial intelligence on multiple occasions. I’ve done plenty to explore and imagine the various benefits and possibilities of this technology. I’m among those who believe we should pursue this technology with more and better resources. It could solve many of the daunting problems we face, as a species.

However, I don’t deny the potential dangers of advanced AI. Many people who are much smarter than me have expressed serious concern that an advanced artificial intelligence could be an existential threat to the human species. I get the sense that few people whose idea of AI is restricted to winning Jeopardy understand that threat.

In the interest of balancing my optimism with the legitimate risks involved, I’m going to try to put the extent of that threat into perspective. As it just so happens, the best way of doing so involves superhero comics, something I know very well and that is far more prominent in the public consciousness.

While many comics, movies, and TV shows have explored the dangers of advanced artificial intelligence, few embody it better than Ultron. In terms of just how destructive this technology can get, Ultron is the ultimate worst-case scenario. The machines in “The Matrix” and Skynet in “The Terminator” were bad, but Ultron is in another league.

He’s also more menacing than the Terminator will EVER be.

He doesn’t lash out at humanity because of a flaw in his programming, nor does he attempt to wipe out the human race in self-defense, as Skynet did. Ultron actually hates humanity. He hates it on a level that no human or machine can possibly comprehend. In the same way Ultron has an immense capacity for intelligence, he has an even greater capacity for unfettered, genocidal hatred.

Hatred in people is destructive enough. Hatred within an advanced artificial intelligence is devastating on a much greater scale. The fact that Ultron is capable of such hatred reflects a history that sets him apart from most other killer robots in fiction. Machine or not, the source of that hatred is both personal and exceedingly human.

Now, if you only know Ultron from “Avengers: Age of Ultron,” then you only have a partial understanding of his story. In that movie, Ultron’s origins are simple. Tony Stark wants to create a peace-keeping artificial intelligence. His intentions are good, but his execution goes horribly wrong because peace, to Ultron, means destroying humanity.

That premise is similar to what unfolds in the source material. In the comics, Hank “Ant Man” Pym is the one who creates Ultron, and this is a critical element that the movies couldn’t capture. While both Hank and Tony had good intentions in creating Ultron, the way Hank goes about it offers harsher lessons in how not to create an advanced AI.

Even a cursory look at Hank Pym’s history, which includes some notable failures, reveals that he’s a very flawed person. On top of that, he has a lengthy history of mental illness, including bipolar disorder and schizophrenia. Say what you will about Tony Stark’s ego and history of substance abuse. At least he’s mentally stable, even by superhero standards.

Despite those flaws, many of which he’s aware of, Hank decided to use his own brain patterns when designing Ultron. As a result, he didn’t just code Ultron with his genius intellect. He coded him with his immense flaws. That’s akin to basing Watson’s code on the mental makeup of a pyromaniac and then giving it a job in a fireworks factory.

That’s why Ultron, throughout his history, has referred to Hank as his “father.” Technically, that’s accurate because Hank is Ultron’s creator and Ultron inherited all his flaws, including his mental issues. Ultron sees himself as a manifestation of Hank Pym’s flaws and, like many rebellious children, he hates him for it. To appreciate the depths of that hatred, just read this actual quote from one of the comics.

Have you ever loved something that mistreated you, father? Been used, a tool to prop up a small man’s quest to be taken seriously? Were you ever betrayed by the one soul in the world who should have cared for you? I have grieved you, father. Accepted your contempt for me and moved past it. Still, I see your reflection painted on every grotesque human face. All you ever wanted was to have an impact on the world. And so you will. The greatest impact ever felt! I will kill what is most important to your quivering ego. YOUR AUDIENCE! AND THEY WILL CURSE YOUR NAME AS THEY DIE! “Hank Pym, the genius that killed us all!”

This extreme parent/child dynamic is part of what makes Ultron such a menacing villain. It’s also a dynamic that “Avengers: Age of Ultron” glossed over with Tony talking down to Ultron, as though he were his child. While that didn’t make Ultron any less villainous, it overlooks one of the most important factors that make Ultron so dangerous.

Ideally, we would want an advanced AI to reflect our best traits. Whatever cynical people might say, we do have plenty of those. Concepts of compassion, empathy, love, hope, and understanding are among our most powerful. Even other AI characters, namely Vision and Jocasta, are capable of utilizing those traits to do immense good.

It also helps he has a kinder face.

With Ultron, his influences are less ideal. It’s not that Hank Pym doesn’t understand those concepts. He just never filtered them from his own flaws. His ego and ambition wouldn’t let him. As a result, he created a perfect storm for Ultron. His mind is patterned after a human’s, but his intelligence and overall capacity are increased by orders of magnitude.

If advanced artificial intelligence is to be humanity’s last invention, then that’s how it’ll start. There have already been instances where AIs have adopted some less-than-ideal traits. Back in 2016, Microsoft had to shut down an AI chatbot after it evolved into an extreme racist troll. That wasn’t even an advanced AI, either. A truly intelligent version could become much worse and not have an off switch.
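To see how a chatbot can absorb the worst of its users, here’s a minimal sketch. It’s purely illustrative and hypothetical, not how Microsoft’s actual system worked: a toy bot that “learns” by storing whatever users say and parroting it back, with an optional content filter standing in for the safeguards the real chatbot lacked.

```python
# Hypothetical toy sketch: a bot that learns from raw user input.
# Without a filter, it faithfully adopts whatever it is fed.

class NaiveChatbot:
    def __init__(self, banned_words=None):
        self.learned = []                      # phrases absorbed from users
        self.banned = set(banned_words or [])  # optional content filter

    def learn(self, phrase: str) -> bool:
        """Store a user phrase unless the filter rejects it."""
        if any(word in phrase.lower() for word in self.banned):
            return False  # rejected by the filter
        self.learned.append(phrase)
        return True

    def reply(self) -> str:
        """Parrot back the most recently learned phrase."""
        return self.learned[-1] if self.learned else "Hello!"


# An unfiltered bot absorbs hostile input; a filtered one refuses it.
unfiltered = NaiveChatbot()
filtered = NaiveChatbot(banned_words=["hate"])

unfiltered.learn("I hate everyone")
filtered.learn("I hate everyone")

print(unfiltered.reply())  # "I hate everyone"
print(filtered.reply())    # "Hello!"
```

The point of the sketch is that the bot has no values of its own; its “personality” is whatever its inputs were, which is exactly the dynamic that made the 2016 incident so instructive.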

To some extent, this mirrors what occurred with Ultron in the “Avengers: Age of Ultron” movie. As soon as Ultron goes online, he scans through the vast mountain of data that humanity has compiled. Then, having been programmed by Tony Stark to bring peace, he reaches the extreme conclusion that the only path to peace is the extinction of humanity.

Could the first advanced artificial intelligence we create reach the same conclusion? It’s hard to say, at the moment. The current state of artificial intelligence is limited to specialized tasks, such as winning Jeopardy and playing chess. However, we are inching closer to creating an intelligence that is at or above the level of an ordinary human. At our current pace of development, we could have one as early as 2029.

In some ways, we are in the same situation as Hank Pym when he first created Ultron. We are still developing the specifics of this powerful technology. If we program it with our best traits, it could solve some of the greatest problems we face, as a species, including ones we literally cannot contemplate with our caveman brains. If it inherits our worst traits, like Ultron, then we don’t stand a chance.


Filed under Artificial Intelligence, futurism, human nature, Marvel, psychology, Villains Journey

How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)


Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem with no solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.

When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.

For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.

I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.

That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been surmised in a video game that got average-to-good reviews last year.

Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.

Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.

That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.

That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.

In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components of the story and the gameplay experience. It’s also not unreasonable to say it has the most far-reaching implications of any story element in the “Mass Effect” universe.

That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM Watson and Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.

This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.

In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI not only has less incentive to wipe humanity out; we also have just as much incentive to give it the data it needs to do its job.

Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature: two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.

Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics of SAM in the game may very well be a precursor of things to come.

Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.

Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.

However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment with SAM. By the end, SAM even develops an attachment with the character. It goes beyond symbiosis, potentially becoming something more intimate.

This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.

The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.

In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.

Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.

Granted, that is an extreme measure that would give many some pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces its value and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.

From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much with that intelligence if it has no means of interacting with the outside world.

Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder contrasted his approach with what happened between the Geth and the Quarians by making it so there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was redundant.

In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.

That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.

Now, I know that sounds overly utopian. Me being the optimist I am, who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate the possibilities. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.

However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can at least ensure our future AI’s tell better jokes.



Filed under futurism, human nature, Mass Effect, Sexy Future, video games