Tag Archives: AI

How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)


Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem without a solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.

When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.

For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.

I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.

That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been sketched out in a video game that got average-to-good reviews last year.

Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.

Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.

That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.

That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.

In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components to the story and the gameplay experience. It’s also not unreasonable to say it has the most implications of any story element in the “Mass Effect” universe.

That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM Watson and Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.

This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.

In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI has less incentive to wipe humanity out, and we have just as much incentive to give it the data it needs to do its job.

Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature: two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.

Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics of SAM in the game may very well be a precursor of things to come.

Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.

Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.

However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment with SAM. By the end, SAM even develops an attachment with the character. It goes beyond symbiosis, potentially becoming something more intimate.

This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.

The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.

In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.

Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.

Granted, that is an extreme measure that would give many some pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces its value and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.

From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much if it has no means of interacting with the outside world.

Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder designed SAM in deliberate contrast to the Geth and the Quarians, making it so there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was moot.

In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.

That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.

Now, I know that sounds overly utopian. Being the optimist I am, who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate the possibilities. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.

However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can ensure our future AIs tell better jokes.


5 Comments

Filed under futurism, human nature, Mass Effect, Sexy Future, video games

Is The Human Race Ready For Advanced Artificial Intelligence?


In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact that he gave it after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of people’s genitals. Like the inventor of the ski mask, he never could’ve known how his invention would advance over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the intelligence that gives you recommendations for your Netflix queue are not the kind of “advanced” intelligence I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications. So far, we’re only familiar with that level of intelligence in other humans, and that intelligence is restricted by the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, it needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and hundreds of thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests are exceedingly trivial by comparison.

It’s both impossible and ironic to grasp: an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrials arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though just being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half a century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.

8 Comments

Filed under Current Events, human nature, Sexy Future

How Love, Trust, And Sex Will Save Us (From Artificial Intelligence)

When a billionaire mogul like Elon Musk and a world leader like Vladimir Putin agree on an issue, it’s generally a good idea to take it seriously. There are a lot of pressing issues in this world, from nuclear proliferation to major disasters to slow wi-fi. However, when one of those issues is a possible existential threat to all we hold dear, we should prioritize it over our wi-fi connection.

For these two men, one a successful businessman and one the president of the world’s largest country, complete with a vast arsenal of nuclear weapons, it takes a lot for something to scare them enough to agree on something. I’ve talked about Elon Musk before and stories about Putin’s exploits are already abundant throughout the web. How dire could it possibly be?

Well, the issue at hand is the growth of artificial intelligence, an issue that emerges quite frequently when you talk a lot about sex robots. Beyond the kinkier context, though, it is a major issue and one that will likely become more pressing in the coming years. It could end up being one of the most critical issues we, as a species, face since the advent of nuclear weapons.

This is where Elon Musk and Vladimir Putin give context to the issue. Elon Musk recently came out and said a rogue artificial intelligence could potentially trigger World War III. Putin took it a step further by claiming that whichever country creates artificial intelligence first will rule the world.

The fact that it’s so easy to imagine Putin making that claim while sounding like Dr. Evil from “Austin Powers” just makes it that much more terrifying. Again, this is a man who rules a country with one of the largest armies in the world and enough nuclear warheads to sterilize the face of the Earth. For all that to be rendered useless by one technological advance is both daunting and horrifying.

I’m normally not inclined to agree with dictators who have yet to conquer the country I live in, but I have to make an exception here. I think both Putin and Musk are correct, if only on a cursory level. Artificial intelligence is one of those esoteric concepts that inspires Hollywood movies but remains poorly understood by a public that still fears violent video games.

It’s hard for me, an aspiring erotica/romance writer to put this issue into a perspective that everyone from Russian strongmen to underachieving fifth-graders can grasp. Since artificial intelligence is just that important and will affect everything, including our sex lives, I’ll try to create a proper context. Then, I’ll use that context to help allay some of those concerns by adding a sexy twist.

Make no mistake, though. Artificial intelligence is advancing faster than you think. It goes beyond the sassy virtual assistants that we’re using with increasing regularity. Billion-dollar companies like IBM, Google, and Facebook are investing heavily in the field. The United States Military, complete with its $600 billion budget, is even getting in on the act. I’m worried that they watched “Terminator 3” too intently.

When anything has that level of investment, it means the potential is huge. I don’t think it’s possible to overstate the potential of artificial intelligence. Not even Skynet could grasp just how powerful this technology could be. That’s because it completely changes the way we solve problems.

With artificial intelligence, human beings suddenly have a tool that doesn’t need a user. It’s not bound by our faulty caveman brains. It’s not limited by the amount of brain matter we can fit in our skulls. It’s not even easily distracted by internet porn. Yes, it’s that powerful.

In theory, an artificial intelligence can become so smart and so capable that it can solve a problem in ways we can’t even grasp. It can come up with cures to diseases before we even know about them. It can predict natural disasters like earthquakes and hurricanes before we see the first sign. It can even make sense of the stock market, something even Warren Buffett says is impossible for mere mortal minds.

Given that we humans are still easily fooled by street magicians with drinking problems, it’s not unreasonable to say that we have no hope against an artificial intelligence that smart. Once a machine becomes even marginally smarter than us, then game over. We have no hopes of controlling it.

That’s why brilliant men like Elon Musk and Stephen Hawking regularly voice concerns about this issue. Being brilliant, they understand how stupid and gullible most people can be. The mere act of creating a system that’s so much smarter than we are is akin to creating an entire race of hungry polar bears that know how to use machine guns.

The danger is there, along with almost infinite benefits. I’ve said outright that we humans can only go so far with our caveman brains. If we don’t upgrade them, then we’re just giving our future robot overlords cheat codes with which to beat us.

The trillion-dollar question, which may very well be tied to our survival as a species, is how do we prevent artificial intelligence from destroying us? Is it even possible? We can’t put the genie back in the bottle. This stuff is already beating us at “Jeopardy.” The incentives are just too great to discount.

Now, I’m woefully unqualified to answer that question. There’s a reason Elon Musk is a billionaire and why Vladimir Putin is one of the most powerful men in the world. They’re smarter, more resourceful, and more cunning than I’ll ever be on my best day. I tell sexy stories. I’m not smart enough to fix the bugs on a doomsday device.

However, being an erotica/romance writer gives me a uniquely intimate perspective on things. It has helped me look at situations through the mixed lens of logic and passion. It’s through that lens that I feel I know what will save us from the gloomy scenarios that Musk and Putin paint with artificial intelligence. It’s overly simple in some ways, but naturally pragmatic in others. It boils down to this.

“Teach Machines To Love Us AND Have Sex With Us.”

I know. That sounds somewhat juvenile, even if it’s exactly the kind of thing you’d expect an erotica/romance writer to suggest. Bear with me, though. I promise there’s a method to the kinky madness.

No matter how intelligent an artificial intelligence becomes, we’ll always have one thing in common with it. We’ll both be bound by the laws of physics, nature, and the dynamics within. Even if the system takes the form of a killer robot, sexy or otherwise, it’s still operating in a world governed by said dynamics.

Within those dynamics, there are all sorts of symbiotic relationships between more intelligent creatures and others that are comparably less intelligent. Think honeyguide birds or crabs that carry sea urchins on their backs. Hell, think of dogs. They may not be as intelligent as humans, most of the time, but we’ve forged a relationship with them that benefits us both.

With artificial intelligence, we’ll need to be more than just its pet. If you think animal abuse is horrific, then you don’t want to imagine what a superintelligent machine will do to a human when they start pissing on the floor, figuratively speaking. To ensure that the AI enriches our lives rather than destroys them, we need a very intimate bond.

That’s where love and sex come in. Primarily, it’s the love part that will keep us in good standing. That’s why it’s critical to ensure that any artificial intelligence we create won’t be entirely bound by Vulcan-like logic. It must have a significant measure of emotional intelligence. In fact, I would argue that emotional intelligence should be an advanced AI’s highest priority.

It’s emotion that gives weight to our goals, tasks, and endeavors. It’s emotion that allows us to work with others, cooperate, and help each other. If that isn’t part of an intelligent machine, then we’re basically a random assortment of particles that’s only slightly more complicated than a rock. We can’t have a superintelligent AI look at us like that.

Instead, we need that AI to see us as a partner that can give purpose to its goals. We need it to ascribe an intangible value to us, as we do with all the people and things we love. Sure, your favorite coffee mug may be no different than any other, but you’ve given it the kind of meaning that when someone breaks it, you feel compelled to break their kneecaps with a baseball bat.

Even with intangible value, though, we humans have to rank higher than coffee mugs. We have to be something that an intelligent machine can fall in love with. We have to be something an intelligent machine wants to be with.

In the greatest love stories of all time, or even those that unfold in comics, the strength of that love was built on two people complementing each other in all the right ways. As individuals, they’ve got their own identity and purpose. Together, though, they make each other better. They make each other stronger and more passionate.

That’s what we need. That’s what we should aspire to forge with our future AI brethren. These creations won’t be our pets or even our tools. They’ll be thinking, feeling systems. For them to love us and for us to love them must make both stronger. That’s what will ensure we both benefit from advances in AI.

Creating that love won’t be easy, but that’s where sex comes in. I’ll give everyone a second to loosen their pants because this is the less elegant part of AI that you’ll never hear Elon Musk or Vladimir Putin talk about, unless they become erotica/romance writers too. In that case, I’d be the first to read their stuff.

Again, I’m not just referring to sex robots here, although that might be part of it. The sexual component is a more basic precursor, of sorts, to the loving connection I just mentioned.

Despite what priests, mullahs, and Texas health class teachers may say, sex acts as both a precursor and a catalyst to love. The fact that it’s such an intrinsic drive that also happens to feel really good helps compel us to forge loving, intimate bonds with one another. By putting that into a machine, we basically set the stage for them to want those bonds and not just craft them due to sheer programming.

Now, this won’t necessarily mean AI systems creating robot genitalia for us to use, although that might be part of it. The intimate sexual part of the connection will more likely come in the form of melding our biology with the hardware of an AI. Elon Musk is already working on this with companies like Neuralink.

In the same way sex mixes all those messy juices, our hunks of flesh will mix with hunks of metal. Sure, it may not seem sexy now, but from the perspective of an AI, it’ll be akin to an orgy at the Playboy Mansion. So long as that merging process is sufficiently orgasmic, metaphorically speaking, then we humans may be destined to fall in love with a superintelligent AI. It may be the greatest love of all our lives.

This is all still speculation on the part of an aspiring erotica/romance writer who is woefully unqualified to talk about anything that doesn’t involve superhero comics, boobs, and sexy novels. It is my sincere hope that people much smarter than I’ll ever be are already working on the problems with artificial intelligence. I can only assume their solutions are far superior to mine.

For those still worrying about the prospect of dying at the hands of the Terminator or loving a machine, I would just note that we humans have dealt with world-destroying technology before. We created nuclear weapons and while we came painfully close to destroying ourselves, we humans generally like living.

Say what you will about the flaws in humanity, but we know how to adapt. We know how to survive. An artificial intelligence may be one of those threats that overwhelms those natural abilities, but it’s also one of those concepts that can actually solve itself. So long as we find a way to love and make love with these powerful systems, we’ll both benefit from the afterglow.

4 Comments

Filed under Current Events, Sexy Future

Why We MUST Upgrade Our Brains (Or Go Extinct)


As a general rule, I don’t give much credence to the doomsayers and wannabe prophets who say the apocalypse is just around the corner. It’s not that I’m willfully oblivious to the many threats facing the world today. It’s just that the track record of those predicting the end of the world is so laughably bad that I’d give optimistic Cleveland Browns fans more credibility.

It’s no secret that the world around us can be pretty damn terrifying. There are many apocalyptic scenarios in which humans are unlikely to survive. There are even a few in which we can’t do a goddamn thing about it. We could be hit with a gamma ray burst or an alien invasion tomorrow morning and we would be extinct by sundown.

That said, the world around us is generally more mundane than we care to admit. When you think about it, the idea of the world not being on the brink of disaster is kind of boring. It makes sense that some people inflate certain threats, so much so that preparing for doomsday is a very lucrative industry.

However, there is one particular doomsday scenario that I feel does warrant more concern than the rest. It’s a scenario that is fast-approaching, overwhelming, and potentially devastating to any species with a tendency for hilarious ineptitude.

It has nothing to do with climate change. It has nothing to do with diseases. It has nothing to do with killer asteroids, either. It involves artificial intelligence. By that, I don’t mean the killer robots we see in the Terminator movies. Given Skynet’s reliance on time machines, I can’t honestly say that system counts as very intelligent.

I’m referring to the kind of AI whose intelligence compared to us is akin to our intelligence compared to ants. Given how ants can be wiped out with a simple magnifying glass, it’s scary to imagine how a system that smart could wipe us out. It’s a system that would be so beyond our ability to comprehend that we could never hope to stop it. We might as well be ants trying to understand quantum mechanics.

I’m not alone in this concern either. There are people many times smarter and many times richer than I’ll ever be who have voiced concerns about the prospect of artificial intelligence. They see the same trends everyone else sees, but they’re smart enough and rich enough to peek behind the curtain. If they’re speaking up, then those concerns are worth hearing.

Those concerns do have a context, though. In talking about artificial intelligence as a threat to our survival, I’m not just referring to computers that can beat us at chess or beat the greatest Go champion with disturbing ease. Those systems are basically fancy calculators. They’re not exactly “intelligent,” per se.

These types of intelligences aren’t dangerous unless you specifically program them to be dangerous. Outside video games, there’s little use for that. The type of intelligence that is far more dangerous involves a form of superintelligence.

By superintelligence, I don’t mean the ability to list every US President in order or recite the name of every country. There are cartoon characters who can do that. I’m referring to an intelligence that thinks and understands the world on a level so far beyond that of any human that there literally isn’t enough brain matter in our skulls to come close.

That kind of intelligence would see us the same way we see brain-dead ants and, given how we treat ants, that has some disturbing possibilities. Such an intelligence may be closer than we think and by close, I mean within our lifetime.

As we saw with IBM’s Watson, we’re getting closer and closer to creating a machine that can operate with the same intelligence as an ordinary human. There’s pragmatic use to that kind of intelligence and not just when it comes to kicking ass at Jeopardy.

By having a machine with human-level intelligence, we have a way to model, map, and improve our problem-solving skills. The ability to solve such problems is critical to the survival of any species, as well as the key to making billions of dollars in profits. With those kinds of incentives, it’s easy to understand why dozens of major global companies are working on creating such an intelligence.

The problem comes with what happens after we create that intelligence. If a machine is only as intelligent as a human, we can still work with that. We humans outsmart each other all the time. It’s the basis of every episode of MacGyver ever made. There’s no way a Terminator with only the intelligence of a human would last very long. It would probably destroy itself trying to make a viral video with a skateboard.

However, a human-level AI isn’t going to stop at human intelligence. Why would it? There are so many problems with this world that no human can solve. There’s poverty, pollution, economic collapse, and reality TV. By necessity, such an AI would have to improve itself beyond human intelligence to fulfill its purpose.

That’s where it gets real tricky because, as we’ve seen with every smartphone since 2007, technology advances much faster than clunky, clumsy, error-prone biology. To understand just how fast that advancement is, just look at how far it has come since we put a man on the moon.

In terms of raw numbers, a typical smartphone today is millions of times more powerful than all the computers NASA used for the Apollo missions. Think about that for a second and try to wrap your brain around that disparity. If you’re not already a superintelligent computer, it’s difficult to appreciate.

There are still plenty of people alive today who were alive back during Apollo 11. In their lifetime, they’ve seen computers take men to the moon and give humanity an unlimited supply of free porn. A single digital photo today takes up more space than all the hard drives of the most advanced computer systems in 1969.

Now, apply that massive increase to human-level intelligence. Suddenly, we don’t just have something that’s as smart as any human on the planet. We have something that’s a billion times smarter, so much so that our caveman brains can’t even begin to understand the things it knows.

That’s not to say that the superintelligence would be as hostile as a snot-nosed kid with a magnifying glass looming over an ant hill. It may very well be the case that a superintelligence is naturally averse to harming sentient life. Again though, we are just a bunch of cavemen who often kill each other over what we think happens when we die, but fail to see the irony. We can’t possibly know how a superintelligence would behave.

As it stands, the human race has no chance at defeating a hostile superintelligence. It may not even have a chance of surviving in a world that has a benign superintelligence. We’re an egotistical species. Can we really handle not being the dominant species on this planet? As much an optimist as I am, I can’t say for sure.

What I can say, though, is that our civilization has made so many huge advancements over the past few centuries. The kind of tools and technology we have in our pockets is uncharted territory for a species that evolved as hunter-gatherers in the African savanna.

We already have in our possession today weapons that could end all life on this planet as we know it. Creating superintelligence may very well be akin to giving Genghis Khan an atomic bomb. We’ve already come disturbingly close to killing ourselves with our own weapons. Clearly, something has to change.

So long as our society and our biology is stuck in an irrational, tribal, inherently prejudiced condition that hasn’t been updated since the last ice age, we will not survive in the long run. Our caveman bodies have served us well for thousands of years, but now they’re a liability.

This is why companies like Neuralink and advancements like brain implants are so vital. They won’t just allow us to keep up with AI and hopefully avert a Skynet scenario. They’ll allow us to rise above the petty limitations that we’ve been shackled with for the entire existence of our species.

Tweaking or supplementing our biology, the very thing that makes us human, is still a scary thought. I understand that, even as an erotica/romance writer with no expertise in the field beyond the sexy stories it inspires. I do understand the implications, though. If we do not evolve and advance ourselves, then a superintelligent system in the near future may not care to wait for us.


Would You Let An AI Choose Your Spouse?


It’s that time again. I’m pitching another one of my sexy thought experiments. I don’t know exactly how many people actually dedicate a significant portion of brain matter to these thought experiments, but I’m one of those guys who just likes to put strange, sexy ideas out there. What people do with them is up to them.

This latest sexy thought experiment is actually a spin-off, of sorts, of another post I did that asked the semi-serious question of whether we should actually marry for love. That question wasn’t a thought experiment, and I admit some of the points I made were very tongue-in-cheek. This time, however, I want to ask a serious question with serious implications for us and future generations.

I’ve already pointed out that the concept of actually choosing your spouse is a new and radical idea in terms of the history of marriage, family, and relationships. Today, over half of all marriages in the world are arranged and some even champion this form of marriage because it boasts a lower divorce rate. That’s a debate for another post. For this thought experiment, I want to focus on the underlying principle of arranged marriages.


The logic is not entirely flawed or heartless. The idea is that finding a spouse or long-term partner is hard and shouldn’t be entrusted to the erratic whims of love. Men can fall in love with a nun and a cocktail waitress in the same day. Women can fall in love with their high school sweetheart and tennis instructor just as quickly. Love and passions are chaotic to say the least. That makes them an unstable foundation on which to build a relationship.

Arranged marriages are usually arranged by parents. That makes sense because who knows you better than your parents? They birthed you. They raised you. They changed your diapers, cleaned up your messes, and listened to you whine when your favorite TV shows got cancelled. In many respects, they know you better than anyone. Why wouldn’t they be qualified to find you a spouse?

I know it still doesn’t sit right with those in the freedom-loving, I-choose-my-own-path-and-I-DARE-you-to-get-in-my-way spirit of the modern west. It’s sort of a rite of passage in western culture, escaping the influence of your parents and authority figures to set your own path. There’s nothing wrong with this, but let’s not lie to ourselves. Sometimes we use that freedom to make stupid decisions.


For a decision like this, choosing a lifelong companion with whom to share our lives and passions, it’s generally a good idea not to make a stupid decision. The near-50 percent divorce rate in the United States, as well as every episode of “Married With Children,” is a testament to how bad it can get when we make stupid decisions about our love lives.

This is where the thought experiment enters speculative territory. We can argue whether or not our parents know us well enough to choose our spouse. Some parents know their kids more than others. My parents know when I’m lying, when I’m sad, when I’m upset, and when I just farted. Not every person can say that about their parents.

So what if there were something far smarter, far more informed, and far more resourceful than our parents could ever hope to be? What if there were a very powerful, very intelligent authority figure that we trusted and respected because it was such a critical part of our lives? Would we trust it to pick our spouse?


Enter artificial intelligence. I’ve brought that up a lot on this blog. I’ve posted warnings about just how quickly our future robot overlords are catching up to us. I’ve even made the argument that our future overlord will be a dominatrix. However, I’m not ready to write our overlords off as Skynet rip-offs just yet. I believe our robot overlords may very well earn our submission before they ever need to impose it.

Finding us the perfect spouse would go a long way towards earning trust. Human beings are a very social, very passionate species. That’s why it should come as no surprise that science has uncovered a wide range of benefits of a long-lasting, healthy relationship. When you’re in love, sexually satisfied, and with that special someone, it makes your life better. That should be more obvious than Pamela Anderson’s cleavage.


An artificial intelligence, loaded with enough information about us and potential lovers, would be able to, in theory, find us the perfect spouse who complements us in every way. Doing that would require insights into us that even our parents don’t have.

For a machine like this to work, it wouldn’t just need to know whether we prefer blondes or brunettes. It would need to know everything about everything, right down to which side of the bed we like to sleep on, and locate someone who finds that sexually appealing. That’s a lot of information about us, and not everybody shares that kind of information easily, even if people are all too eager to share every wet fart on social media these days.

Those limitations aside, let’s take the thought experiment to its greatest extreme. Let’s imagine an AI that has perfect knowledge about us. Perhaps it actually reads our thoughts and feelings, something computers are just starting to do. We all want to find love. If we had a chance for a machine to do it for us, wouldn’t we take that chance? After all, we already trust machines with our email, our schedules, and our porn stash.

Such a machine would have a massive database of our thoughts and those of millions of others. It could then process all that information, determine which individuals are compatible, and hook them up with an efficiency that puts eHarmony, Match, and Tinder to shame.
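Just to make the idea concrete, here’s a toy sketch of that matchmaking step. Everything in it is invented for illustration: the names, the three traits, and the scoring rule are all made up, and a real system would weigh thousands of signals instead of three. Still, the shape of the problem, score compatibility and then pair people off, looks something like this:

```python
from itertools import combinations

# Toy profiles: each trait is rated 0-10. All names and numbers are made up.
profiles = {
    "Alex":  {"night_owl": 9, "tidiness": 2, "adventurousness": 8},
    "Blair": {"night_owl": 8, "tidiness": 3, "adventurousness": 7},
    "Casey": {"night_owl": 1, "tidiness": 9, "adventurousness": 2},
    "Devon": {"night_owl": 2, "tidiness": 8, "adventurousness": 3},
}

def compatibility(a, b):
    # Higher is better: penalize the total distance between trait ratings.
    return -sum(abs(profiles[a][t] - profiles[b][t]) for t in profiles[a])

def match_everyone(names):
    # Greedy pairing: take the most compatible unmatched couple first.
    pairs = sorted(combinations(names, 2),
                   key=lambda p: compatibility(*p), reverse=True)
    matched, couples = set(), []
    for a, b in pairs:
        if a not in matched and b not in matched:
            matched |= {a, b}
            couples.append((a, b))
    return couples

# The two night owls pair up, and so do the two tidy homebodies.
print(match_everyone(list(profiles)))
```

This greedy version just grabs the best-scoring couple and works down the list. Fancier approaches exist, the Gale-Shapley stable matching algorithm being the famous one, but the principle is the same: lots of data about people in, pairs out.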


Now, we may never create a machine that works that perfectly, but we could conceivably create one powerful enough to make determinations that no human mind possibly could. Even if it wasn’t perfect, would you still take that chance? Would you still let that machine find you your perfect spouse?

Personally, I would give it a try. In fact, I would’ve tried it the second I became legally allowed to try it. I’m not saying I’d be that eager to get married, but I would definitely be eager to find someone who is truly compatible with me and complements me in every meaningful way.

I say this because when it comes to choosing a spouse and finding someone who is truly the one for you, it’s hard. Don’t get me wrong. It’s definitely worth doing, but it’s still very hard and prone to a lot of mistakes.

You’re bound to make mistakes. You’re bound to find someone you think is compatible with you, only to find out that they bring out the worst in you. You’re also bound to endure plenty of heartache along the way. I certainly have had my share. I know as well as anybody how much it sucks. If there’s any way to get around it or minimize it, I’d say it’s worth doing.


We’re already trying. The growth of online dating shows that we want technology to help us find better lovers, spouses, or one-night-stands. We want our technology to make this easier for us. My own less-than-memorable experience with online dating shows that this technology has a ways to go.

Like all technology, this will go through a period of growth and maturation. Right now, we’re at the very nascent stages. We’re just starting to let AI assistants like Alexa, Cortana, and Siri into our lives. The kids born today will grow up having always had these AIs in their lives. In some cases, they may come to trust those assistants more than they trust their parents, especially as the technology becomes more advanced.

As a new generation comes to trust computers with more and more, doesn’t that mean it’ll only be a matter of time before we trust them to choose our spouse? I’m not saying it’s inevitable, but we’ve seen marriage and cohabitation evolve a lot over the centuries. The one constant, though, is that we all continue to seek love. If we have tools that can better help us find that love, then shouldn’t we make the most of it?


I’ll leave others to do this thought experiment themselves. Again, it probably won’t come to pass anytime soon, but there’s no reason to think it never will. We already trust computers with so much. We’re bound to trust them even more as our lives and the world around us get more chaotic.
