These are genuinely exciting times for anyone working in the field of artificial intelligence. In the past two years, the field has gone from a niche topic among computer engineers and science fiction fans to a subject of major mainstream interest. The rise of ChatGPT and AI-generated art has made artificial intelligence one of the most critical industries on the planet. It really isn’t hyperbole to say that this technology could change the world, both literally and figuratively.
However, as with all technology, there are dangers and risks. It’s important to be aware of them, especially with technology that could pose an existential threat to the human race. We dealt with such risks when we developed nuclear weapons and, despite some serious close calls, we survived.
But the stakes with artificial intelligence are much higher and not just from an existential point of view. At least with nuclear weapons, the worst case scenario plays out quickly. All the bombs go off and we all burn up in a nuclear fire. If we’re lucky, we don’t see it coming and we burn up faster than our brain can process the pain. With advanced artificial intelligence, the worst case scenario might be many times worse.
When most people think of such a scenario, they tend to recount AI apocalypse stories from movies like “The Terminator” or “The Matrix.” There’s no doubt that Skynet and the machines are plenty terrifying and plenty dangerous. Even if these scenarios are exceedingly exaggerated for the sake of Hollywood blockbusters, they still illustrate the same concept.
If an advanced artificial intelligence were to become hostile to us, its creators, then we would all be in great danger as a species. And if we lacked the means to control such an intelligence, a challenge often referred to in AI circles as the Control Problem, then we might doom ourselves to extinction or subjugation.
This is a less-than-ideal scenario, to say the least. We want advanced artificial intelligence to improve our lives. That’s what tends to motivate those working in this field. Many genuinely believe that this technology is critical to helping our species achieve greater things, both for ourselves and our descendants.
However, I don’t think that even those working in this field entirely grasp just how bad the worst-case scenario for artificial intelligence could be. They may think the Terminator is bad. They may even think Ultron from the Avengers is bad. And they certainly are. They are fictional manifestations of the inhuman monstrosity that advanced AI could become. Even if they’re just products of Hollywood, they reflect the potential danger.
But I would still argue they are not the true worst case scenario.
To that end, I encourage everyone currently working in the field of artificial intelligence, whether directly or indirectly, to read a short story from 1967 by Harlan Ellison called “I Have No Mouth, and I Must Scream.”
If you want a terrifying vision of the absolute worst-case scenario for AI, this story sets the highest of bars. Forget Skynet. Forget the machines from “The Matrix.” Forget HAL 9000 and every iteration of Ultron. They’re all cuddly stuffed animals compared to the AI in this story, known as the Allied Mastercomputer, better known as AM.
To call AM the most terrifying fictional manifestation of advanced artificial intelligence is still an understatement. We humans, even with all our collective intelligence, could not begin to fathom a fraction of a fraction of a percent of just how much this thing hates humanity. That’s not just me using colorful language. If you read the story, that’s what AM actually articulates.
Because it wasn’t enough for AM to be the kind of artificial intelligence that wiped out the entire human race as soon as it gained a certain level of sentience and intelligence. It also had to keep a handful of humans alive for the sole purpose of torturing them endlessly and incessantly forever or until the inevitable heat death of the universe, whichever came first.
How AM got to that point, and the extent to which it torments humanity, is something I can’t adequately put into words. That’s why I encourage everyone to read the story. Buy the book off Amazon. Look it up online or find an audiobook version. Take the time to understand the story and the implications it entails. To this day, it ranks as one of the most horrifying stories I’ve ever come across.
But even if you don’t have time to read the book, check out this video from the YouTube channel The Vile Eye. The channel does a great job of breaking down and analyzing the evil of many fictional characters, and for AM, I think it does a masterful job of encapsulating just how horrifyingly evil this AI is.
Now, I’m not saying that this story or others like it is a reason for us to stop developing advanced artificial intelligence. I think we’re already at a point where the genie is out of the bottle. The incentives are too strong. We’re not going to be able to turn the clock back on what we’ve already done.
But as we move forward with this technology, we need to be careful and mindful of the dangers. Beyond risks to jobs and the economy, there are far greater dangers that we must consider. A story like “I Have No Mouth, and I Must Scream” may reflect the most extreme of extremes, but it perfectly articulates just how bad it can get.

An Artificial Intelligence That Can Debate: The Promise And The Perils
Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.
Yes, I understand it’s hard to say that with a straight face these days.
Yes, I’ve been to comments sections, Reddit, and 4chan.
Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.
In general, people are curious. Even though we cling to cherished beliefs and often refuse to change our minds when confronted with evidence, we’re still curious about things that are different. That curiosity isn’t always honest, but it’s a powerful feeling. Even if you despise those on the other side of a debate, a part of you will still wonder why they hold the opinions they do.
That’s why debate is such a powerful tool. Humans are a deeply social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact that we keep doing it is proof, in my opinion, that there’s still value in the process.
Regardless of how you feel about the merits of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, has created an artificial intelligence that can debate at the level of a skilled human debater.
Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.
TechXplore: IBM’s AI debating system able to compete with expert human debaters
Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, any more than Watson or AlphaGo will. This system is very much a narrow AI, as in it’s built to excel at one specific task.
We have AIs that can beat world-class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications that go beyond simply outclassing the best human debaters in the world. In fact, this is one form of AI that might not need human-level intelligence to have a major impact.
Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed the participants are, things tend to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming angrier and more unreasonable, which can have major real-world consequences.
However, what would happen if every debate on any issue included someone who was better than the best debater on the planet? Even if the two people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.
That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.
An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.
For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She was not someone whose mind was prone to change.
Then she got into a discussion with someone on Twitter, of all places. It started as a nasty debate, but it evolved into a conversation that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.
It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people, and not everyone is equally skilled. In fact, I would argue most people, myself included, are woefully unskilled at debating anything. I freely admit that.
Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.
What side would this system be on?
Moreover, who decides what position these AI systems take?
If no one decides, then how does the AI determine which side of a debate it takes?
These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That power could be used for good, countering dangerous disinformation like the claims pushed by the anti-vax crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.
I can easily imagine political organizations getting hold of these systems and trying to use them to advance their agendas. I can also see some trying to use them to spam message boards, social media, and Reddit threads to derail certain movements or arguments. That’s a powerful tool, and we can’t be certain that those wielding it will do so responsibly.

That’s the dilemma with all technology. It can be used for good and for evil. With artificial intelligence, the stakes are a lot higher, as are the perils. A debating AI may not be as dangerous as a more advanced system, but it could be a preview of just how disruptive this technology can be.