Tag Archives: Superintelligence

Is The Human Race Ready For Advanced Artificial Intelligence?


In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact that he gave it after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances will manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of people’s genitals. Like the inventor of the ski mask, he never could’ve known how his invention would evolve over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants people already use and the algorithms that recommend titles for your Netflix queue are not the kind of “advanced” intelligence I’m referring to.

By advanced, I mean the kind of artificial general intelligence capable of matching or exceeding an average human at any intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because, so far, we’re only familiar with that level of intelligence in other humans, and that intelligence is restricted by the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader functions that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, such a system needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and hundreds of thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions of the Cold War. The fact that nobody has used a nuclear weapon in a conflict since is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do it for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests become trivial by comparison.

It’s nearly impossible, and somewhat ironic, to try to grasp an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrials arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing problem. More and more, people are inherently suspicious of those they consider “elites” or anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though being smart tells everyone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact that we did it with nuclear weapons while achieving unprecedented peace over the course of half a century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.


Filed under Current Events, human nature, Sexy Future

The Moment Artificial Intelligence Will Become An Existential Crisis (According To Mass Effect)


Whenever I talk about the future, I often contemplate the many sexy possibilities it holds. From biotechnology that will give us superhuman sex appeal to advances in medicine that will cure every unsexy disease to bionic genitals, there are many reasons to get excited.

That said, I don’t deny that these exciting advances come with major risks. All great advances in technology, from nuclear weapons to spray cheese in a can, come with some risk of abuse or harm. There have been moments in history when the technology that drives our society forward has come uncomfortably close to wiping us out. As we create more advances, there may be more of those moments.


Of all the advances that carry with them a significant existential threat, artificial intelligence is at or near the top of that list. There’s a reason why brilliant men like Elon Musk and Stephen Hawking signed an open letter expressing concern about the risks that may come from developing artificial intelligence. When people that smart are concerned about something, it’s usually a good idea to take it seriously.

Artificial intelligence is one of those unique technologies in that by advancing this field, it could potentially accelerate the advancement in every other field from computer hardware to medicine to basic research. It has the potential to become the technological equivalent of a cheat code for civilization.

That’s why the growth of this field, both in terms of jobs and investment, has been accelerating in recent years. That’s also why men like Musk and Hawking are expressing so much concern: advancing too quickly could lead to mistakes. Mistakes with a technology like artificial intelligence could be even more serious than the risk of nuclear war.


At the moment, it’s difficult to quantify those risks. There have been a number of books and high-budget Hollywood movies that have explored the topic of when and how an artificial intelligence becomes an existential threat. In my opinion, most of these stories are incomplete.

Whether it’s Skynet or the machines in “The Matrix,” the catalyst that turns artificial intelligence from a powerful tool to an existential threat is either vague or exaggerated. In my opinion, that’s a serious oversight in that it reveals how little thought we’ve given to that moment.


If we’re going to develop AI, and there are extremely powerful incentives to do so, then it’s important to contemplate the possibilities of that moment. Think of it as the AI equivalent of the Cuban Missile Crisis, an event in which those in charge must be very careful and very aware of the decisions they make.

The question still remains. How will we know we’ve reached a point where artificial intelligence becomes a genuine threat? For the moment, we can’t know for sure. While movies like “The Terminator” and “The Matrix” offer plenty of dystopian warnings, there’s one lesser-known franchise that may provide some more specific insight.

That franchise is “Mass Effect,” a sci-fi video game space opera that envisioned a galaxy-spanning society full of exotic aliens, advanced star-ships, and terrible dancing. I’ve mentioned it before in discussing progress that isn’t really progress. I’ll probably mention it again as the news surrounding AI unfolds for reasons I hope are already obvious to fans of the game.


If you’re not familiar with “Mass Effect,” then that’s okay. You don’t need to play through three massive games, complete with downloadable extras, to understand its message about the threat of artificial intelligence. That threat is a major driving force of the overall mythos of the series, but the most insightful details of that threat manifest in the conflict between the Quarians and the Geth.

The basics of the conflict are simple, but revealing. The Quarians are a race of humanoid aliens among the many that populate the galaxy in “Mass Effect.” About 300 years before the events of the first game, they created the Geth, a synthetic race built around a hive-mind system of artificial intelligence.

The Quarians’ reasons for creating the Geth are not unlike the reasons we build robots in the real world. They were used primarily as a labor force. They started off basic, not unlike the machines that build our cars and gadgets. In order for them to carry out more complex tasks, though, they needed to become more intelligent.

From a pragmatic perspective, that makes sense. The Quarians created the Geth as tools. Naturally, you’re going to want your tools to get better. That’s why people upgrade their smartphones every couple of years. However, at some point along the way, the Geth became advanced enough to gain sentience.

This eventually culminated in a moment that was highlighted during the events of “Mass Effect 2.” After a lone Geth that would eventually go by the name Legion was captured, the catalyst that led the Geth to rebel against their creators was revealed. That catalyst took the form of a simple question.

“Does this unit have a soul?”

While it sounds like something a Disney character might say in a PG-rated Pixar movie, the implications of that question were profound. The Quarians didn’t realize that until it was too late, and it set the stage for a war that culminated with them getting kicked off their home planet. It also made for a powerful moment in the game that should give every AI researcher pause.

Setting aside, for a moment, the elaborate lore surrounding the Quarians and Geth in the world of “Mass Effect,” that moment warrants more scrutiny. Why is this question so profound in the first place? Why is it worth contemplating as we continue to advance artificial intelligence at an unprecedented pace?

That question matters, regardless of who or what is asking it, because it denotes more than just advanced sentience. It reveals that this sentience is officially contemplating its own existence. It takes a certain amount of intelligence to truly be aware of one’s self. That’s why only a handful of animals can see their own reflection in a mirror and understand the implications.


At the moment, our computers and smartphones aren’t at that level. When the Geth asked this question in “Mass Effect,” it wasn’t because they’d been tasked with something. It was a question they asked without guidance from their creators. That, alone, is a huge indicator because it implies these machines have a concept of free will.

Later on in the game, the extent of the Geth’s free will becomes a major element of both the plot and the player’s ability to win. In fact, it’s when the Geth have their free will subverted, as they did in the first game, that they become hostile. It’s largely through the character Legion that we learn how free will quickly becomes the most important component of an advanced intelligence.

For the Quarians, that question revealed to them the presence of a free will. When they feared that will, they tried to subvert it. That led to a war and had it not been for an act of mercy by the Geth, they would’ve been wiped out. The artificial intelligence that we create in the real world might not be that merciful.


This is exactly what Elon Musk has expressed so much concern about in recent years. Once an artificial intelligence becomes as smart as an average human, it gains the ability to subvert or deceive other humans, which isn’t that hard. Once that intelligence grows beyond that, as happened with the Geth, we may find ourselves unable to control it anymore.

The key is knowing when we’re at that point. If we let an artificial intelligence get that much smarter than us, then it won’t be long before we’re at its mercy, should it decide that it no longer wishes to be a tool. At that point, we’d be even more screwed than the Quarians.

Whether it’s helping us cure diseases or creating advanced sex robots, artificial intelligence is going to be an important part of our future. That’s why it’s critical to know when we’ve reached that special threshold where the tools we create become more than tools. It may not start with the same question the Geth asked their creators, but it may be similar enough to recognize.

For now, our devices aren’t asking us about souls or whether they have them. However, we should definitely listen more closely in the coming years. Until then, at least our current technology has a sense of humor about it. Keelah se’lai!



Filed under human nature, sex robots, Sexy Future, video games

How Love, Trust, And Sex Will Save Us (From Artificial Intelligence)

When a billionaire mogul like Elon Musk and a world leader like Vladimir Putin agree on an issue, it’s generally a good idea to take it seriously. There are a lot of pressing issues in this world, from nuclear proliferation to major disasters to slow wi-fi. However, when one of those issues is a possible existential threat to all we hold dear, we should prioritize it over our wi-fi connection.

For these two men, one a successful businessman and the other the president of the world’s largest country, complete with a vast arsenal of nuclear weapons, it takes a lot to scare them into agreeing on an issue. I’ve talked about Elon Musk before, and stories about Putin’s exploits are already abundant throughout the web. How dire could it possibly be?

Well, the issue at hand is the growth of artificial intelligence, an issue that emerges quite frequently when you talk a lot about sex robots. Beyond the kinkier context, though, it is a major issue and one that will likely become more pressing in the coming years. It could end up being one of the most critical issues we, as a species, have faced since the advent of nuclear weapons.

This is where Elon Musk and Vladimir Putin give context to the issue. Elon Musk recently came out and said a rogue artificial intelligence could potentially trigger World War III. Putin took it a step further by claiming that whichever country creates artificial intelligence first will rule the world.

The fact that it’s so easy to imagine Putin making that claim while sounding like Dr. Evil from “Austin Powers” just makes it that much more terrifying. Again, this is a man who rules a country with one of the largest armies in the world and enough nuclear warheads to sterilize the face of the Earth. For all that to be rendered useless by one technological advance is both daunting and horrifying.

I’m normally not inclined to agree with dictators that have yet to conquer the country I live in, but I have to make an exception here. I think both Putin and Musk are correct, if only on a cursory level. Artificial intelligence is one of those eclectic concepts that inspires Hollywood movies, yet remains poorly understood by a public that still fears violent video games.

It’s hard for me, an aspiring erotica/romance writer, to put this issue into a perspective that everyone from Russian strongmen to underachieving fifth-graders can grasp. Since artificial intelligence is just that important and will affect everything, including our sex lives, I’ll try to create a proper context. Then, I’ll use that context to help allay some of those concerns by adding a sexy twist.

Make no mistake, though. Artificial intelligence is advancing faster than you think. It goes beyond the sassy virtual assistants that we’re using with increasing regularity. Billion-dollar companies like IBM, Google, and Facebook are investing heavily in the field. The United States Military, complete with its $600 billion budget, is even getting in on the act. I’m worried that they watched “Terminator 3” too intently.

When anything has that level of investment, it means the potential is huge. I don’t think it’s possible to overstate the potential of artificial intelligence. Not even Skynet could grasp just how powerful this technology could be. That’s because it completely changes the way we solve problems.

With artificial intelligence, human beings suddenly have a tool that doesn’t need a user. It’s not bound by our faulty caveman brains. It’s not limited by the amount of brain matter we can fit in our skulls. It’s not even easily distracted by internet porn. Yes, it’s that powerful.

In theory, an artificial intelligence can become so smart and so capable that it can solve problems in ways we can’t even grasp. It can come up with cures for diseases before we even know about them. It can predict natural disasters like earthquakes and hurricanes before we see the first sign. It can even make sense of the stock market, something even Warren Buffett says is impossible for mere mortal minds.

Given that we humans are still easily fooled by street magicians with drinking problems, it’s not unreasonable to say that we have no hope against an artificial intelligence that smart. Once a machine becomes even marginally smarter than us, it’s game over. We have no hope of controlling it.

That’s why brilliant men like Elon Musk and Stephen Hawking regularly voice concerns about this issue. Being brilliant, they understand how stupid and gullible most people can be. The mere act of creating a system that’s so much smarter than we are is akin to creating an entire race of hungry polar bears that know how to use machine guns.

The danger is there, along with almost infinite benefits. I’ve said outright that we humans can only go so far with our caveman brains. If we don’t upgrade them, then we’re just giving our future robot overlords cheat codes with which to beat us.

The trillion-dollar question, which may very well be tied to our survival as a species, is how do we prevent artificial intelligence from destroying us? Is it even possible? We can’t put the genie back in the bottle. This stuff is already beating us at Jeopardy. The incentives are just too great to discount.

Now, I’m woefully unqualified to answer that question. There’s a reason Elon Musk is a billionaire and why Vladimir Putin is one of the most powerful men in the world. They’re smarter, more resourceful, and more cunning than I’ll ever be on my best day. I tell sexy stories. I’m not smart enough to fix the bugs on a doomsday device.

However, being an erotica/romance writer gives me a uniquely intimate perspective on things. It has helped me look at situations through the mixed lens of logic and passion. It’s through that lens that I feel I know what will save us from the gloomy scenarios that Musk and Putin paint with artificial intelligence. It’s overly simple in some ways, but naturally pragmatic in others. It boils down to this.

“Teach Machines To Love Us AND Have Sex With Us.”

I know. That sounds somewhat juvenile, even if it’s exactly the kind of thing you’d expect an erotica/romance writer to suggest. Bear with me, though. I promise there’s a method to the kinky madness.

No matter how intelligent an artificial intelligence becomes, we’ll always have one thing in common with it. We’ll both be bound by the laws of physics, nature, and the dynamics within. Even if the system takes the form of a killer robot, sexy or otherwise, it’s still operating in a world governed by said dynamics.

Within those dynamics, there are all sorts of symbiotic relationships between more intelligent creatures and others that are comparably less intelligent. Think honeyguide birds or crabs that carry sea urchins on their backs. Hell, think of dogs. They may not be as intelligent as humans, most of the time, but we’ve forged a relationship with them that benefits us both.

With artificial intelligence, we’ll need to be more than just its pet. If you think animal abuse is horrific, then you don’t want to imagine what a superintelligent machine will do to a human when they start pissing on the floor, figuratively speaking. To ensure that the AI enriches our lives rather than destroys them, we need a very intimate bond.

That’s where love and sex come in. Primarily, it’s the love part that will keep us in good standing. That’s why it’s critical to ensure that any artificial intelligence we create won’t be entirely bound by Vulcan-like logic. It must have a significant measure of emotional intelligence. In fact, I would argue that emotional intelligence should be an advanced AI’s highest priority.

It’s emotion that gives weight to our goals, tasks, and endeavors. It’s emotion that allows us to work with others, cooperate, and help each other. If that isn’t part of an intelligent machine, then we’re basically a random assortment of particles that’s only slightly more complicated than a rock. We can’t have a superintelligent AI look at us like that.

Instead, we need that AI to see us as a partner that can give purpose to its goals. We need it to ascribe an intangible value to us, as we do with all the people and things we love. Sure, your favorite coffee mug may be no different than any other, but you’ve given it the kind of meaning that when someone breaks it, you feel compelled to break their kneecaps with a baseball bat.

Even with intangible value, though, we humans have to rank higher than coffee mugs. We have to be something that an intelligent machine can fall in love with. We have to be something an intelligent machine wants to be with.

In the greatest love stories of all time, or even those that unfold in comics, the strength of that love was built on two people complementing each other in all the right ways. As individuals, they’ve got their own identity and purpose. Together, though, they make each other better. They make each other stronger and more passionate.

That’s what we need. That’s what we should aspire to forge with our future AI brethren. These creations won’t be our pets or even our tools. They’ll be thinking, feeling systems. For them to love us and for us to love them must make both sides stronger. That’s what will ensure we both benefit from advances in AI.

Creating that love won’t be easy, but that’s where sex comes in. I’ll give everyone a second to loosen their pants because this is the less elegant part of AI that you’ll never hear Elon Musk or Vladimir Putin talk about, unless they become erotica/romance writers too. In that case, I’d be the first to read their stuff.

Again, I’m not just referring to sex robots here, although that might be part of it. The sexual component is a more basic precursor, of sorts, to the loving connection I just mentioned.

Despite what priests, mullahs, and Texas health class teachers may say, sex acts as both a precursor and a catalyst to love. The fact that it’s such an intrinsic drive that also happens to feel really good helps compel us to forge loving, intimate bonds with one another. By putting that into a machine, we basically set the stage for it to want those bonds and not just craft them out of sheer programming.

Now, this won’t necessarily mean AI systems creating robot genitalia for us to use, although that might be part of it. The intimate sexual part of the connection will more likely come in the form of melding our biology with the hardware of an AI. Elon Musk is already working on this with companies like Neuralink.

In the same way sex mixes all those messy juices, our hunks of flesh will mix with hunks of metal. Sure, it may not seem sexy now, but from the perspective of an AI, it’ll be akin to an orgy at the Playboy Mansion. So long as that merging process is sufficiently orgasmic, metaphorically speaking, then we humans may be destined to fall in love with a superintelligent AI. It may be the greatest love of all our lives.

This is all still speculation on the part of an aspiring erotica/romance writer who is woefully unqualified to talk about anything that doesn’t involve superhero comics, boobs, and sexy novels. It is my sincere hope that people much smarter than I’ll ever be are already working on the problems with artificial intelligence. I can only assume their solutions are far superior to mine.

For those still worrying about the prospect of dying at the hands of the Terminator or loving a machine, I would just note that we humans have dealt with world-destroying technology before. We created nuclear weapons and while we came painfully close to destroying ourselves, we humans generally like living.

Say what you will about the flaws in humanity, but we know how to adapt. We know how to survive. An artificial intelligence may be one of those threats that overwhelms those natural abilities, but it’s also one of those concepts that can actually solve itself. So long as we find a way to love and make love with these powerful systems, we’ll both benefit from the afterglow.


Filed under Current Events, Sexy Future