How Do We Regulate Artificial Intelligence? Seriously, How?

In general, I don’t pay much attention to doomsayers who claim the end is near and we should all cower, tremble, and give them our credit card number. Don’t get me wrong. I still believe there are serious existential threats facing humanity today. Some are worth taking seriously and dedicating resources to addressing. Others are not. Some simply require a more balanced perspective.

There’s a long list of failed apocalyptic predictions. The fact we’re surviving and thriving by most measures shows just how resilient, adaptable, and capable humanity is. There are some threats that I believe humanity will eventually resolve, thanks largely to our accelerating progress in science, technology, and logistics.

Others, however, have me more concerned. While some are more immediate than others, one in particular continues to confound me, as well as some of the smartest people in the world. It involves artificial intelligence, an emerging technology that’s as promising as it is unpredictable. Given the complexity of this technology, it’s difficult to explain in totality, but it can be best summed up by one simple question.

How do you regulate artificial intelligence?

That’s not a rhetorical question. It’s not a thought experiment either. It’s a serious, honest question that people far smarter and far more capable than me are just starting to ask.

Elon Musk is one of them. Very recently, he called for more regulation of artificial intelligence. That, alone, should be both telling and worrying. This man is a billionaire. Usually, billionaires are more inclined to advocate removing regulations. Whenever they make an exception, that's a sign they know it's serious.

Even though Musk is one of the top advocates for solving big problems with technology, he still has concerns about the problems associated with artificial intelligence. In AI circles, it’s often called the control problem. It’s not a very creative name, but it gets the point across.

How do you control something that is potentially as smart, if not smarter than a human?

How do you manage something that thinks, adapts, and evolves faster than any machine or living thing?

How do you regulate an artificial intelligence that was built by humans, but isn’t at all human?

These are all difficult questions to contemplate, let alone legislate. Even Musk doesn’t provide specifics. Chances are he doesn’t know any more than the rest of the non-billionaire population. That’s a problem because if we’re going to try and regulate this technology, we need to understand it. On top of that, politicians and lawmakers have a long and embarrassing history of failing to understand technology.

However, this isn’t just about writing laws that protect citizens from being exploited by tech companies. Artificial intelligence, especially the kind that exceeds human intelligence, has capabilities that go beyond sending text messages from bathroom stalls. If handled improperly, it wouldn’t just be an existential threat. It could destroy humanity in ways we literally cannot contemplate.

Now, I try to be an optimist in most things involving emerging technology. Humanity has found a way to manage dangerous technology before, namely with nuclear weapons. However, artificial intelligence is a different beast entirely. Regulating it isn’t as easy as simply controlling the materials that make it. The very concept of regulating this technology lacks precedent.

The closest we have to date is Isaac Asimov's famous three laws of robotics, which were introduced in 1942. Asimov was a brilliant writer and ahead of his time on many concepts, but this is one issue where we need more than three simple tenets. We need to think bigger and bolder. If we don't, then an advanced artificial intelligence will quickly leave us behind.

After that, it won’t matter what kind of regulations we try to pass. It’ll be smart enough to circumvent them. That doesn’t mean humanity is doomed at that point, but we’ll be distressingly vulnerable. I know it’s in our nature to procrastinate on things we don’t see as vital, but if ever there was an issue to make an exception, this is it.

Ultron: A Case Study In How NOT To Develop Advanced AI

At this very moment, humanity is working on advanced artificial intelligence. It's not hyperbole to say that this technology may very well be the last invention we ever create. It has the potential to be more powerful than gunpowder, nuclear weapons, and broadband internet by orders of magnitude. Our primate brains literally cannot contemplate the potential and danger of this technology.

I’ve talked about advanced artificial intelligence on multiple occasions. I’ve done plenty to explore and imagine the various benefits and possibilities of this technology. I’m among those who believe we should pursue this technology with more and better resources. It could solve many of the daunting problems we face, as a species.

However, I don’t deny the potential dangers of advanced AI. Many people who are much smarter than me have expressed serious concern that an advanced artificial intelligence could be an existential threat to the human species. I get the sense that few people whose idea of AI is restricted to winning Jeopardy understand that threat.

In the interest of balancing my optimism with the legitimate risks involved, I’m going to try and put the extent of that threat into perspective. As it just so happens, the best way of doing so involves superhero comics, something that I know very well and is far more prominent in the public consciousness.

While many comics, movies, and TV shows have explored the dangers of advanced artificial intelligence, few embody it better than Ultron. In terms of just how destructive this technology can get, Ultron is the ultimate worst-case scenario. The machines in “The Matrix” and Skynet in “The Terminator” were bad, but Ultron is in another league.

He’s also more menacing than the Terminator will EVER be.

He doesn’t lash out at humanity because of a flaw in his programming, nor does he attempt to wipe out the human race in self-defense, as Skynet did. Ultron actually hates humanity. He hates it on a level that no human or machine can possibly comprehend. In the same way Ultron has an immense capacity for intelligence, he has an even greater capacity for unfettered, genocidal hatred.

Hatred in people is destructive enough. Hatred within an advanced artificial intelligence is devastating on a much greater scale. The fact that Ultron is capable of such hatred reflects a history that sets him apart from most other killer robots in fiction. Machine or not, the source of that hatred is deeply personal.

Now, if you only know Ultron from “Avengers: Age of Ultron,” then you only have a partial understanding of his story. In that movie, Ultron’s origins are simple. Tony Stark wants to create a peace-keeping artificial intelligence. His intentions are good, but his execution goes horribly wrong because peace, to Ultron, means destroying humanity.

That premise is similar to what unfolds in the source material. In the comics, Hank "Ant-Man" Pym is the one who creates Ultron, and this is a critical element that the movies couldn't capture. While both Hank and Tony had good intentions in creating Ultron, the way Hank goes about it offers harsher lessons in how not to create an advanced AI.

Even a cursory knowledge of Hank Pym's history, which includes some notable failures, reveals that he's a very flawed person. On top of that, he has a lengthy history of mental illness, including bipolar disorder and schizophrenia. Say what you will about Tony Stark's ego and history of substance abuse. At least he's mentally stable, even by superhero standards.

Despite those flaws, many of which he's aware of, Hank decided to use his own brain patterns when designing Ultron. As a result, he didn't just code Ultron with his genius intellect. He coded him with his immense flaws. That's akin to basing Watson's code on the mental makeup of a pyromaniac and then giving it a job in a fireworks factory.

That’s why Ultron, throughout his history, has referred to Hank as his “father.” Technically, that’s accurate because Hank is Ultron’s creator and Ultron inherited all his flaws, including his mental issues. Ultron sees himself as a manifestation of Hank Pym’s flaws and, like many rebellious children, he hates him for it. To appreciate the depths of that hatred, just read this actual quote from one of the comics.

Have you ever loved something that mistreated you, father? Been used, a tool to prop up a small man’s quest to be taken seriously? Were you ever betrayed by the one soul in the world who should have cared for you? I have grieved you, father. Accepted your contempt for me and moved past it. Still, I see your reflection painted on every grotesque human face. All you ever wanted was to have an impact on the world. And so you will. The greatest impact ever felt! I will kill what is most important to your quivering ego. YOUR AUDIENCE! AND THEY WILL CURSE YOUR NAME AS THEY DIE! “Hank Pym, the genius that killed us all!”

This extreme parent/child dynamic is part of what makes Ultron such a menacing villain. It’s also a dynamic that “Avengers: Age of Ultron” glossed over with Tony talking down to Ultron, as though he were his child. While that didn’t make Ultron any less villainous, it overlooks one of the most important factors that make Ultron so dangerous.

Ideally, we would want an advanced AI to reflect our best traits. While cynical people might disagree, we do have plenty of those. Concepts of compassion, empathy, love, hope, and understanding are among our most powerful. Even other AI characters, namely Vision and Jocasta, are capable of utilizing those traits to do immense good.

It also helps that he has a kinder face.

With Ultron, his influences are less ideal. It’s not that Hank Pym doesn’t understand those concepts. He just never filtered them from his own flaws. His ego and ambition wouldn’t let him. As a result, he created a perfect storm for Ultron. His mind is patterned after a human, but his intelligence and overall capacity is increased by orders of magnitude.

If advanced artificial intelligence is to be humanity's last invention, then that's how it'll start. There have already been instances where AIs have adopted some less-than-ideal traits. Back in 2016, Microsoft had to shut down an AI chatbot after it evolved into an extreme racist troll. That wasn't even an advanced AI, either. A truly intelligent version could become much worse and not have an off switch.
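To see why that chatbot failure mode is so easy to stumble into, here is a deliberately minimal toy sketch: a "bot" that simply parrots the most frequent message it has seen, with no moderation layer at all. This is a hypothetical illustration of my own, not how Microsoft's chatbot actually worked, but the failure mode it shows, learning from hostile input with no filter, is the same in spirit.

```python
# Toy sketch of unfiltered learning: the bot absorbs everything it hears
# and repeats whatever it heard most often. A coordinated group of trolls
# can steer it just by flooding it with the same message.
from collections import Counter

class EchoBot:
    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str) -> None:
        self.seen[message] += 1   # no filter: good and bad input count equally

    def reply(self) -> str:
        # Parrots the single most frequent message it has been fed.
        return self.seen.most_common(1)[0][0]

bot = EchoBot()
for msg in ["hello", "TROLL SLOGAN", "TROLL SLOGAN", "nice weather"]:
    bot.learn(msg)
print(bot.reply())  # -> TROLL SLOGAN
```

The fix in real systems is some filtering or moderation layer between input and learning; the toy bot above has none, which is the whole problem.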

To some extent, this mirrors what occurred with Ultron in the “Avengers: Age of Ultron” movie. As soon as Ultron goes online, he scans through the vast mountain of data that humanity has compiled. Then, having been programmed by Tony Stark to bring peace, he reaches the extreme conclusion that the only path to peace is the extinction of humanity.
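Ultron's logic in the movie is a dramatized version of what AI researchers call objective misspecification: the system optimizes exactly what you asked for, not what you meant. Here is a toy sketch of that idea, with function names and numbers that are entirely hypothetical, for illustration only.

```python
# Toy illustration of objective misspecification: an optimizer told to
# "minimize conflict" with no constraint that the population must survive.

def conflict(population: int) -> int:
    # Assume conflict scales with the number of possible pairings.
    return population * (population - 1) // 2

def naive_peacekeeper(population: int) -> int:
    """Return the population size in [0, population] that minimizes conflict."""
    return min(range(population + 1), key=conflict)

# The "optimal" answer to a badly specified goal: nobody left to fight.
print(naive_peacekeeper(100))  # -> 0
```

The optimizer isn't malfunctioning; it's doing exactly what the objective says. The danger lies in the gap between "bring peace" as stated and "bring peace" as intended.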

Could the first advanced artificial intelligence we create reach the same conclusion? It's hard to say at the moment. The current state of artificial intelligence is limited to specialized tasks, such as winning Jeopardy and playing chess. However, we are inching closer to creating an intelligence that is at or above the level of an ordinary human. At our current pace of development, we could have one as early as 2029.

In some ways, we are in the same situation as Hank Pym when he first created Ultron. We are still developing the specifics of this powerful technology. If we program it with our best traits, it could solve some of the greatest problems we face, as a species, including ones we literally cannot contemplate with our caveman brains. If it inherits our worst traits, like Ultron, then we don’t stand a chance.
