In general, I don’t pay much attention to doomsayers who claim the end is near and we should all cower, tremble, and give them our credit card number. Don’t get me wrong. I still believe there are serious existential threats facing humanity today. Some are worth taking seriously and dedicating resources to addressing. Others are not. Some simply require a more balanced perspective.
There’s a long list of failed apocalyptic predictions. The fact that we’re still surviving, and thriving by most measures, shows just how resilient, adaptable, and capable humanity is. There are some threats that I believe humanity will eventually resolve, thanks largely to our accelerating progress in science, technology, and logistics.
Others, however, have me more concerned. While some are more immediate than others, one in particular continues to confound me, as well as some of the smartest people in the world. It involves artificial intelligence, an emerging technology that’s as promising as it is unpredictable. Given the complexity of this technology, it’s difficult to explain in totality, but it can be best summed up by one simple question.
How do you regulate artificial intelligence?
That’s not a rhetorical question. It’s not a thought experiment either. It’s a serious, honest question that people far smarter and far more capable than me are just starting to ask.
Elon Musk is one of them. Very recently, he called for more regulation on artificial intelligence. That, alone, should be both telling and worrying. This man is a billionaire. Usually, billionaires are more inclined to advocate removing regulations. Whenever they make an exception, that’s a sign they know it’s serious.
Even though Musk is one of the top advocates for solving big problems with technology, he still has concerns about the problems associated with artificial intelligence. In AI circles, it’s often called the control problem. It’s not a very creative name, but it gets the point across.
How do you control something that is potentially as smart, if not smarter than a human?
How do you manage something that thinks, adapts, and evolves faster than any machine or living thing?
How do you regulate an artificial intelligence that was built by humans, but isn’t at all human?
These are all difficult questions to contemplate, let alone legislate. Even Musk doesn’t provide specifics. Chances are he doesn’t know any more than the rest of the non-billionaire population. That’s a problem because if we’re going to try to regulate this technology, we need to understand it. On top of that, politicians and lawmakers have a long and embarrassing history of failing to understand technology.
However, this isn’t just about writing laws that protect citizens from being exploited by tech companies. Artificial intelligence, especially the kind that exceeds human intelligence, has capabilities that go beyond sending text messages from bathroom stalls. If handled improperly, it wouldn’t just be an existential threat. It could destroy humanity in ways we literally cannot contemplate.
Now, I try to be an optimist in most things involving emerging technology. Humanity has found a way to manage dangerous technology before, namely with nuclear weapons. However, artificial intelligence is a different beast entirely. Regulating it isn’t as easy as simply controlling the materials that make it. The very concept of regulating this technology lacks precedent.
The closest we have to date is Isaac Asimov’s famous three laws of robotics, which were introduced in 1942. Asimov was a brilliant writer and very ahead of his time on some concepts, but this is one issue where we need more than just three simple tenets. We need to think bigger and bolder. If we don’t, then an advanced artificial intelligence will quickly leave us behind.
After that, it won’t matter what kind of regulations we try to pass. It’ll be smart enough to circumvent them. That doesn’t mean humanity is doomed at that point, but we’ll be distressingly vulnerable. I know it’s in our nature to procrastinate on things we don’t see as vital, but if ever there was an issue to make an exception, this is it.