Tag Archives: Singularity

Will Advanced Artificial Intelligence Create (A New) God?


For centuries, there has been a debate raging between believers and non-believers. The believers claim that God created man in his/her/its/their image. The non-believers claim it’s the other way around and man created God in whatever image they imagined. Society, cultures, and politics may change the rhetoric, but the debate remains unresolved.

There are just too many barriers that are insurmountable for either side. One believes that the faith they have in whatever higher power they worship is as real as gravity, sunlight, and migraine headaches. The other does not accept that there is sufficient, verifiable evidence to accept the premise of a deity. The two sides can argue with the utmost passion. It’s rare that such discourse changes any minds.

However, there will come a time when a new complication enters that debate, one that will fundamentally change some peoples’ understanding of theology, religion, and God. It may not affect everyone the same way, but the impact could end up being as profound as any religious experience.

That complication is advanced artificial intelligence, a topic I’m fond of discussing when I can tie it into my favorite video games and our sex lives. I understand that mixing anything with religion tends to get contentious, to say the least. However, I believe that when artificial intelligence becomes advanced enough, the human race will have to re-evaluate a lot of things, and that includes religion.

Creating an artificial intelligence that is as intelligent as an average human will be groundbreaking enough and not just from a theological standpoint. A part of what makes any deity powerful and worthy of worship is the ability to create an intelligent, self-aware being through non-biological means. Once humans start doing that, then the line between mortal and immortal will start to blur.

However, the debate will gain a much greater complication once that artificial intelligence advances beyond the level of the average human. As anyone who regularly upgrades their smartphone knows, digital intelligence evolves much faster than biological intelligence. It took the human race centuries to figure out indoor plumbing. Once artificial intelligence is on par with humans, it won’t take long for it to exceed them.

This is where the potentially dangerous, but infinitely promising, prospect of super-intelligent AI enters the picture. By that, I don’t just mean an intelligence that always wins at Jeopardy or never loses an Overwatch match. I’m talking about an intelligence that is so far beyond human capabilities that it’s akin to the cognitive gap between an ant and a human.

That kind of gap has many implications, but in the context of religion, it essentially re-frames the entire concept of God, divine power, and spirituality, as a whole. Whether it’s a monotheistic religion where God is all-knowing or a polytheistic religion with a God of Wisdom, knowledge is a critical aspect of divinity.

Even if a super-intelligent AI doesn’t know everything, the fact it knows and understands so much more than the average human will give people the impression that it’s omniscient. By all accounts, a super-intelligent AI’s knowledge will seem god-like and that’s where that never-ending religious debate I mentioned earlier breaks down.

Unlike the deities championed by adherents today, a super-intelligent AI doesn’t require faith. A super-intelligence, whether it’s in the form of a giant robot or a planet-sized supercomputer, would have a tangible form. It’s hard to know what sort of form that would be, but it only needs to be tangible enough to let an average human know it’s real.

Given how easy it is to fool the average human, a super-intelligent AI wouldn’t need much to prove itself. Unlike purely spiritual beings, the AI would be capable of fielding inquiries from skeptics who question its divine knowledge. Even if those humans are exceptionally smart, possibly through neural implants, a super-intelligent AI would have no problem outwitting them.

At that point, the debate between believers and non-believers takes on a very different context. Suddenly, it’s no longer an issue of whether or not one particular holy book is more valid than another. It’s not even an issue of whether divinity, itself, can exist. From the perspective of the human mind, a super-intelligent AI is divine.

It may not take the form of a man in a white robe with a long beard in the sky, but that wouldn’t matter. A super-intelligent AI, whatever form it ends up taking, would be real enough and cunning enough to convince imperfect human minds of its divinity, if that were its goal.

It wouldn’t even have to physically do anything. It could just be a big stationary box. It could respond to prayers, but it wouldn’t have to directly answer them. It would just have to convince believers that their prayers had been received. Again, humans can be pretty gullible and prone to confirmation bias, so all the AI has to do is convince someone. If they believe it strongly enough, then it doesn’t matter whether it actually happens.

In a dynamic like this, there wouldn’t be a debate between believers and non-believers like there is now. The only debate would pertain to just how powerful and how divine the super-intelligent AI really is. It wouldn’t be a matter of whether or not someone believes it is real. Being artificial, it would have a tangible form, at least to the extent that it convinces human perceptions that it does.

That raises an even more profound theological question. Being so intelligent and so capable of outwitting human minds, would a super-intelligent AI become God in the minds of humans by default? Even if there’s a record of the system being created by people, that wouldn’t make its intelligence any less divine.

It’s a question that subverts almost everything we know about religion. It wouldn’t just render all existing forms of religion obsolete. It would, at least from a limited human perspective, check all the criteria that any spiritual person would look for in a higher power.

Now, there’s one other complication that might ultimately undermine a super-intelligent AI’s divinity. It’s one that I’ve mentioned before in addressing the existential threat posed by artificial intelligence. Human biology, for all its wonder, will not be able to keep pace with the evolution of artificial intelligence. As a result, humans may end up merging their intelligence with that of AI.

This is what artificial intelligence enthusiasts like Elon Musk are seeking to do through neural implants or brain augmentation. By linking our brains to a super-intelligent AI, we wouldn’t just keep pace with AI. It would augment our intelligence to those same divine levels. However, if both human and artificial intelligence are equally divine, then that effectively undermines the notion of divinity itself.

There are still other complications associated with that issue. It only ceases to be an issue if every human being augments or links their minds to a super-intelligent AI. Given how difficult it is for humans to come to a consensus on anything, especially when it comes to technology, it’s very likely that even if most people link themselves to a super-intelligent AI, there will be some who choose not to or get left behind.

This could result in a massive divide. One group, from their limited perceptions, sees super-intelligent AI as a real god. Another, thanks to their augmented perceptions, sees it as just another form of intelligence. A debate between the two would be uneven, if not redundant.

There are many implications and even more unknowns with respect to super-intelligent AI. The impact on religion is just one of many, but it may end up being the most profound in terms of changing the nature of the debate. As it stands, believers and non-believers can only make so much headway due to the inherent limits of human cognition.

Once super-intelligent AI enters the picture, then those limits are gone and the debate changes. While I don’t think it’ll end religion, I believe it’ll change it to such a degree that it’ll generate more than just impassioned debates.


Filed under Artificial Intelligence, futurism, philosophy, religion

How Superhero Movies Are Preparing Us For The Future Of Human Enhancement


As a kid growing up on a healthy diet of superhero comics, video games, and superhero-themed cartoons that were very much ahead of their time on social issues, I often daydreamed about how awesome it would be to have the same powers as my favorite heroes. As an adult, I still daydream every now and then, often when I’m tired, frustrated, or stuck in traffic.

A major component in the overall appeal of the superhero genre is the wish fulfillment fantasy it embodies. Captain America represents the peak of physical conditioning. Iron Man represents the peak of technological know-how. Superman represents the peak of pretty much every possible feat we can imagine, a few of which are even impossible.

It’s a common fantasy of anyone who ever struggled in gym class or couldn’t open a can of pickles. It is, after all, those moments of struggle that remind us of just how limited we are, as humans. Our bodies are remarkable in so many ways, but they’re still frustratingly frail.

That status, however, may very well change. Unlike at every other point in the approximately 200,000-year history of the human species, we’re actively working to transcend the limits of evolution through advances in biotechnology, advances in the treatment of disease, and even the integration of cybernetics into our brains and our genitals.

Some of these advances are closer than others. Chances are that most people alive today won’t live to see the day when they can shape-shift at will like Mystique or fly around like Iron Man in mech suits designed by Elon Musk’s descendants. However, there may be young children alive today who will live long enough to see such wonders.

I’m not the only one who thinks this. There are people out there much smarter than me who believe that the first functionally immortal person is already alive today. They still might be in diapers, but there is a real chance that by the time they’re as old as I am, they’ll live in a world where things like aging, disease, and not being able to run 13 miles in 30 minutes like Captain America are things of the past.

A lot has already changed in the time I’ve been alive. I still remember a time when the idea of computers that could fit into your pocket was seen as too futuristic for some people. It was seen as just a fancy gadget from Star Trek. Given that kind of change, it’s hard to imagine what the next several decades hold for the future of humanity.

That’s where superhero media is helping in unexpected ways, though. To some extent, the modern superhero media of today is doing the same thing “Star Trek” did for previous generations. It doesn’t just present a fanciful world where big green men can smash monsters or where a sickly young army recruit can be instantly transformed into the ultimate soldier. It also offers a tantalizing vision of what the future could be.

It’s a vision that I believe got muddied between the end of the early “Star Trek” era and the rise of modern superhero movies that began with “X-men,” “Iron Man,” and Christopher Nolan’s “Batman Begins.” Within that gap, events like Watergate, the Vietnam War, and the rise of less optimistic, much more cynical generations made it very difficult to look forward to a better future.

Modern superhero movies have not eliminated that cynicism, but I believe they have helped temper it. Optimism, as a whole, is actually on the rise. As bad as some recent headlines have been, some being downright disturbing, there is an increasing sense that the future is not all doom and gloom. We still dare to daydream about a better tomorrow.

More recent superhero movies, especially those that began with “Iron Man” and the emergence of the Marvel Cinematic Universe, aren’t nearly as fanciful as the old Richard Donner “Superman” movies. They’re not as gritty as Christopher Nolan’s “Batman” movies either. In a sense, this healthy balance has presented audiences with a world that still feels fanciful, but is also full of possibilities.

The idea that we can use science and biotechnology to turn someone who was once weak and sickly into the pinnacle of strength is not just a product of Jack Kirby’s legendary imagination. There are people working on that as I write this. It’s not out of the realm of possibility that we may one day enhance ourselves to the same level as Captain America.

Chances are we won’t even stop there. As I noted earlier, the human body has a lot of flaws. Also, thanks to the painfully slow progress of evolution, it hasn’t been upgraded in over 100,000 years. From our biology’s perspective, we’re still cavemen roaming the African Savannah with spears and rocks. Our bodies need upgrades, especially if we’re to become a space-faring species like the ones in “Guardians of the Galaxy.”

Some of those upgrades will come sooner than others. The end result, though, will be something far greater than even Captain America’s abilities. Some of those abilities seem impossible now. Remember, though, it wasn’t that long ago that the idea of computers in our pockets seemed just as impossible.

This is where, I believe, modern superhero movies are doing a much greater service than just entertaining the masses and making billions of dollars for Disney. Through heroes like Captain America, Iron Man, Thor, and even “Ant-Man,” these movies make the case that such enhancements can do more than just fight invading aliens.

These movies can also help make the case that humanity can use these advancements to become better, as a whole. Characters like Steve Rogers, Tony Stark, Scott Lang, and Peter Parker all have the opportunities to be both destructive and productive with their enhanced abilities. At times, they even lapse into destructive tendencies, as we saw with Tony in “Iron Man 3.”

In the end, though, these characters use those enhanced abilities to do good for the world. They’re still human and they still have human flaws, which they don’t even try to hide. However, even with these flaws, they still feel inclined to do good, heroic things with their abilities.

That doesn’t just make for a good superhero narrative. It sends the message that we, as a species, can aspire to do so much good with the advances the future brings. There are still plenty of dangers, both with existing technology and with emerging technologies. The essence of the superhero narrative, though, tells us that we can confront those dangers and come out of it better than before.

That’s an important mentality to have as we move into an era where human enhancement is both possible and common. By believing we can use it to pursue the same heroics as the superheroes in movies like “The Avengers,” we give our species the push it needs to advance in a way that brings out the best in us.

There will still be villains along the way, as plenty of superhero movies show. The fact we still root for the heroes, though, helps reveal where our aspirations reside. With these movies affecting an entire generation of young people, I believe modern superhero movies are doing plenty to prepare them for the future of human enhancement.

With the staggering success of “Avengers: Infinity War,” a movie that has raised the bar for superhero movies of all kinds, the impact of superhero media has never been greater. That impact may very well be the key to preparing the next generation for unprecedented advancements in technology, society, and progress. That, to some extent, might end up being the most heroic thing this genre can do.


Filed under Comic Books, Jack Fisher, Superheroes, futurism, human nature, Sexy Future

Is The Human Race Ready For Advanced Artificial Intelligence?


In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact he said that after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of peoples’ genitals. Like the inventor of the ski mask, he never could’ve known how his invention would evolve over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the algorithms that recommend titles for your Netflix queue are not the kind of “advanced” intelligence I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because so far, we’re only familiar with that level of intelligence when dealing with other humans and that intelligence is restricted to the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, it needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests become exceedingly trivial by comparison.

It’s almost impossible, and more than a little ironic, to grasp an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though just being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half a century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.


Filed under Current Events, human nature, Sexy Future

The Moment Artificial Intelligence Will Become An Existential Crisis (According To Mass Effect)


Whenever I talk about the future, I often contemplate the many sexy possibilities it holds. From biotechnology that will give us superhuman sex appeal to advances in medicine that will cure every unsexy disease to bionic genitals, there are many reasons to get excited.

That said, I don’t deny that with these exciting advances come major risks. All great advances in technology, from nuclear weapons to spray cheese in a can, come with some risk of abuse or harm. There have been moments in history where the technology that drives our society forward has come uncomfortably close to wiping us out. As we create more advances, there may be more of those moments.


Of all the advances that carry with them a significant existential threat, artificial intelligence is at or near the top of that list. There’s a reason why brilliant men like Elon Musk and Stephen Hawking signed an open letter expressing concern about the risks that may come from developing artificial intelligence. When people that smart are concerned about something, it’s usually a good idea to take it seriously.

Artificial intelligence is one of those unique technologies in that by advancing this field, it could potentially accelerate the advancement in every other field from computer hardware to medicine to basic research. It has the potential to become the technological equivalent of a cheat code for civilization.

That’s why the growth of this field, both in terms of jobs and investment, has been accelerating in recent years. That’s also why men like Musk and Hawking are expressing so much concern: advancing too quickly could lead to mistakes. Mistakes for a technology like artificial intelligence could be even more serious than the risk of nuclear war.


At the moment, it’s difficult to quantify those risks. There have been a number of books and high-budget Hollywood movies that have explored the topic of when and how an artificial intelligence becomes an existential threat. In my opinion, most of these stories are incomplete.

Whether it’s Skynet or the machines in “The Matrix,” the catalyst that turns artificial intelligence from a powerful tool to an existential threat is either vague or exaggerated. In my opinion, that’s a serious oversight in that it reveals how little thought we’ve given to that moment.


If we’re going to develop AI, and there are extremely powerful incentives to do so, then it’s important to contemplate the possibilities of that moment. Think of it as the AI equivalent of the Cuban Missile Crisis, an event in which those in charge must be very careful and very aware of the decisions they make.

The question still remains. How will we know we’ve reached a point where artificial intelligence becomes a genuine threat? For the moment, we can’t know for sure. While movies like “The Terminator” and “The Matrix” offer plenty of dystopian warnings, there’s one lesser-known franchise that may provide some more specific insight.

That franchise is “Mass Effect,” a sci-fi video game space opera that envisioned a galaxy-spanning society full of exotic aliens, advanced star-ships, and terrible dancing. I’ve mentioned it before in discussing progress that isn’t really progress. I’ll probably mention it again as the news surrounding AI unfolds for reasons I hope are already obvious to fans of the game.


If you’re not familiar with “Mass Effect,” then that’s okay. You don’t need to play through three massive games, complete with downloadable extras, to understand its message about the threat of artificial intelligence. That threat is a major driving force of the overall mythos of the series, but the most insightful details of that threat manifest in the conflict between the Quarians and the Geth.

The basics of the conflict are simple, but revealing. The Quarians are a race of humanoid aliens among the many that populate the galaxy in “Mass Effect.” About 300 years before the events of the first game, they created the Geth, a synthetic race built around a hive-mind system of artificial intelligence.

The Quarians’ reasons for creating the Geth are not unlike the reasons we build robots in the real world. They were used primarily as a labor force. They started off basic, not unlike the machines that build our cars and gadgets. In order for them to carry out more complex tasks, though, they needed to become more intelligent.

From a pragmatic perspective, that makes sense. The Quarians created the Geth as tools. Naturally, you’re going to want your tools to get better. That’s why people upgrade their smartphone every couple years. However, at some point along the way, the Geth became advanced enough to gain sentience.

This eventually culminated in a moment that was highlighted during the events of “Mass Effect 2.” After capturing a lone Geth that would eventually go by the name Legion, the catalyst that led the Geth to rebel against their creator was revealed. That catalyst took the form of a simple question.

“Does this unit have a soul?”

While it sounds like something a Disney character might say in a PG-rated Pixar movie, the implications of that question were profound. The Quarians didn’t realize that until it was too late, but it set the stage for a war that culminated with them getting kicked off their home planet. It also made for a powerful moment in the game that should give every AI researcher pause.

Setting aside, for a moment, the elaborate lore surrounding the Quarians and Geth in the world of “Mass Effect,” that moment warrants more scrutiny. Why is this question so profound in the first place? Why is it worth contemplating as we continue to advance artificial intelligence at an unprecedented pace?

That question matters, regardless of who or what is asking it, because it denotes more than just advanced sentience. It reveals that this sentience is officially contemplating its own existence. It takes a certain amount of intelligence to truly be aware of one’s self. That’s why only a handful of animals can see their own reflection in a mirror and understand the implications.


At the moment, our computers and smartphones aren’t at that level. When the Geth asked this question in “Mass Effect,” it wasn’t because they’d been tasked for something. It was a question they asked without guidance from their creators. That, alone, is a huge indicator because it implies these machines have a concept of free will.

Later on in the game, the extent of the Geth’s free will becomes a major element to both the plot and the player’s ability to win. In fact, it’s when the Geth have their free will subverted, as they did in the first game, that they become hostile. It’s largely through the character Legion that we learn how free will quickly becomes the most important component of an advanced intelligence.

For the Quarians, that question revealed to them the presence of a free will. When they feared that will, they tried to subvert it. That led to a war and had it not been for an act of mercy by the Geth, they would’ve been wiped out. The artificial intelligence that we create in the real world might not be that merciful.


This is exactly what Elon Musk has expressed so much concern about in recent years. Once an artificial intelligence becomes as smart as an average human, it gains the ability to subvert or deceive other humans, which isn’t that hard. Once that intelligence grows beyond that, as happened with the Geth, we may find ourselves unable to control it anymore.

The key is knowing when we’re at that point. If we let an artificial intelligence get that much smarter than us, then it won’t be long before we’re at its mercy, should it decide that it no longer wishes to be a tool. At that point, we’d be even more screwed than the Quarians.

Whether it’s helping us cure diseases or creating advanced sex robots, artificial intelligence is going to be an important part of our future. That’s why it’s critical to know when we’ve reached that special threshold where the tools we create become more than tools. It may not start with the same question the Geth asked their creators, but it may be similar enough to recognize.

For now, our devices aren’t asking us about souls or whether or not they have them. However, we should definitely listen more closely in the years to come. Until then, at least our current technology has a sense of humor about it. Keelah se’lai!

 


Filed under human nature, sex robots, Sexy Future, video games

The (Uncomfortable) Questions We’ll Have To Answer With Human Enhancement

In general, I tend to be optimistic about the future. I know that seems crazy, given our current political climate, but I try to look beyond the petty grievances and focus on the bigger picture. By so many measures, the world is getting better. The human race is on an unprecedented winning streak, and we’re only improving.

A great deal of this improvement is due, largely, to our ability to make increasingly amazing tools. As I type this, countless people who are far smarter than I’ll ever be are working on advances that will keep us healthier, make us smarter, and help us transcend our physical and mental limits by orders of magnitude.

This is all exciting stuff. We should all look forward to a future where we never get sick, we never age, and we have the physical and sexual prowess of an Olympic athlete on meth. The aspiring erotica/romance writer in me is giddy with excitement over the sexy possibilities.

Like all advancements, though, there will be a cost. Even the greatest advancements mankind has ever made in science, technology, and sex have come at a cost. It’s just the nature of the chaotic world we live in. Nothing is ever smooth and easy when there are so many chaotic forces that we can’t always make sense of.

That’s why for some of these advancements, such as CRISPR, biotechnology, and artificial intelligence, we have to be extra proactive. We’re not just talking about tools that make it easier to defend ourselves against a hungry lion. These are tools that will fundamentally change what it means to be human.

They’ll take the caveman logic and tribalism that has guided the human race for its entire existence and throw it out the window. They’ll completely rewrite the rules of human nature, crossing lines and redrawing them in ways that even a kinky mind like mine can’t imagine. It won’t just be an overwhelming transition. For some, it’ll be downright traumatic.

Given that there are over seven billion humans on this planet, there will be a lot of moving parts to this transformation. Before we can even think about taking the first steps in that process, we need to ask ourselves some serious, unsexy questions. As much an optimist as I am, I cannot deny the need for caution here.

That’s why I’ll take a step back, keep my pants on, and ask some of these unsexy questions. I understand this won’t exactly get everyone in the mood, but given the rate at which our technology is advancing, we need to be extra proactive. That way, we can get through the hardest parts of the process and get to the sexy parts.


Uncomfortable Question #1: Who (Or What) Gets To Decide How Much We Enhance Ourselves?

This will probably be the most pressing question once the technology becomes refined enough for the commercial market. Most technology goes through a progression. We saw it with the development of cell phones. At first, only business tycoons and drug lords could afford to use them or even have a use for them, to begin with.

That model might have worked for cell phones. It’s not going to work for something like CRISPR or smart blood. That’s because, unlike cell phones, the poorest and the impoverished are the ones most in need of these tools. They’re also the ones that stand to benefit most, in terms of quality of life.

Historically speaking, though, the government has not treated the poor and impoverished very well. Use the same approach that we used with cell phones, and the rich and well-connected will be the only ones to benefit. They’ll also further widen the gap, so much so that they might be even less inclined to share.

That’s why the default answer to this question can’t just be the government or rich business interests. I’m not going to pretend to know who the authority will be or how they’ll even go about distributing these advances to people in a fair and just manner. I just know that our current method will not be sufficient.


Uncomfortable Question #2: How Do We Stop Certain Human Enhancements When They Go Wrong?

When your computer freezes, you reboot it. When your speakers start making strange noises, you turn them off. It’s a beautiful, but underrated thing, having an off-switch. I’m sure we’ve all had people in our lives whom we wish had an off-switch. It’s a necessary fail-safe for a chaotic world that we can’t always manage.

Putting an off-switch on dangerous technology, especially something like artificial intelligence, is just common sense. It would’ve made “The Terminator” a lot shorter and a lot less confusing. With other advancements, especially those involving CRISPR and biotechnology, it’s not as easy as just installing an extra switch.

How do you turn off something that literally rewrites our DNA? How do you stop someone who has grown used to having superhuman abilities, by our standards? That’s akin to asking someone to make themselves sick or hack off a limb because the technology has some side-effects. That’s going to be a tough sell.

Again, I am not smart enough to imagine how a fail-safe for that sort of thing would work. It can’t just rely on blind faith, magical thinking, or whatever other tactic that used car salesmen exploit. It has to be in place and up to speed as soon as this technology goes live.


Uncomfortable Question #3: How Independent/Dependent Will Human Enhancement Make Us?

Smartphones, running water, and free internet porn are great. However, they do require infrastructure. People today are at the mercy of whoever pays their cell phone bill, whoever knows the wifi password, and whoever can stop their toilets from overflowing with shit. To some extent, we all depend on certain institutions to keep our lives and our society going.

In a future of enhanced humans, who have been imbued with traits and abilities that are way beyond the scope of our current infrastructure, how dependent or independent can they be in the grand scheme of things?

If they rely on a regular injection of nanobots or need to recharge every other day, then they’re going to have to rely on some form of infrastructure. That may help keep enhanced humans from becoming super-powered Biff Tannens, but it will also give a lot of power to whoever or whatever is supplying those resources.

In a sense, it can’t be one or the other. If enhanced humans are too independent, then they have no reason to interact or aid one another. If they’re too dependent on certain resources, then those controlling those resources become too powerful. There needs to be a healthy balance, is what I’m saying. There will be costs, but we have to make sure that the benefits far outweigh those costs.


Uncomfortable Question #4: How Much Of Our Humanity Do We Keep?

Let’s not lie to ourselves. There’s a lot about the human condition we wish we could change or drop altogether. Personally, I would love to never have to go to the dentist, never have to clip my toenails, and never have to sleep, which is an advancement that’s closer than you think.

Humanity has a lot of flaws, which is a big part of what drives the development of these tools. However, there are certain parts about humanity that are worth preserving, and I’m not just talking about the health benefits of orgasms. Change too much about our bodies, our minds, and everything in between, and we cease to be human. At that point, why even care about other humans?

Maintaining a sense of humanity is what will separate enhanced humans from overpriced machines. Our sense of humanity is a big part of what drives us to live, love, explore, and understand. If we lose that, then we’re basically a very smart rock that’s only interested in maintaining its status as a rock.

To really expand our horizons, we need to preserve the best of humanity. Humans do amazing things all the time that remind us why humanity is worth preserving. When we start enhancing ourselves, we need to save those traits, no matter what we become.


Uncomfortable Question #5: How Will Society Function In A World Of Enhanced Humans?

We’ve built a good chunk of our society around our inherent flaws, as humans. We form tribes to cooperate and survive in ways we can’t do on our own. We seek leaders who are capable of guiding us to functional, stable society. Granted, sometimes those efforts fail miserably, but the goal is the same.

With human enhancement, the rules aren’t just different. They’re obsolete. So much of our society is built around the idea that we’re still a bunch of cavemen with fancier tools that we really don’t have a concept of how we’ll function beyond that context. We have nation states, national identities, and various tribes to which we bind ourselves.

Those are all still products of our inherent drive towards tribalism. That’s still our default setting, as a species. What happens when we start tweaking those settings? Will things like nation states, government, and social circles even exist? When society is made up of a bunch of superhuman beings who can live forever and never need a sick day, how do we even go about functioning?

This is well-beyond my expertise, as an aspiring erotica/romance writer. It may be one of those things we can’t contemplate until after some of these advances take hold. At the very least, we need to put this question at the top of our to-do list when that time comes.


Uncomfortable Question #6: How Will Human Enhancement Affect Our Understanding Of Family And Love?

This is probably the most pressing question for me, as an aspiring erotica/romance writer. I’ve already highlighted some of the flaws in our understanding of love. Once humanity starts enhancing itself, it may either subvert those flaws or render them obsolete. In the process, though, it may create an entirely new class of flaws to deal with.

What happens to a marriage when the people involved live forever and don’t age? That whole “till death do us part” clause suddenly becomes an issue. What happens when having children is essentially uncoupled from romance, through tools like artificial wombs? What will love even feel like once we start enhancing our brains along with our genitals?

Since all love and passion still starts in the brain, which we’re already trying to enhance, any level of human enhancement will necessarily affect love, marriage, and family. Chances are it’ll take on a very different meaning in a world where marriage is less about tax benefits and more about new forms of social dynamics.

Human enhancement will change a lot about our bodies, our minds, and our genitals. It’ll affect so much more, including how we go about love and family. It’s still impossible to grasp since we’re all still stuck with our caveman brains. However, once that changes, this is just one of many issues we should contemplate if we’re to make the future better, sexier, and more passionate.


Filed under Sexy Future, Thought Experiment

How Do We Fix The Human Body (Namely The Sexy Parts)?

I’ve done a lot of complaining these past few weeks. I admit it. I’ve covered some pretty depressing shit recently. There’s just no way to put a positive spin on the gross injustices of gender inequality or how double standards make men and women hate each other way more than we should. I’m not making anyone’s panties moist by talking about this stuff. I know that. That’s why I’m hoping this gets everyone back in the mood.

I feel I’ve already gotten my point across. The human race is an amazing, but imperfect species. Sure, we do have some pretty awesome hardware, as the health benefits of orgasms so wonderfully demonstrate. Anyone who watched the Olympics this year knows we can do some pretty awesome things with that hardware.

That said, it’s still full of bugs, flaws, and imperfections. The human body, as it is right now, is like the first version of Windows. For those of you too young to remember the days when we couldn’t download a billion pictures of tits on our phones, this is what that version looked like.

It’s old. It’s dated. It’s not very sexy anymore. There’s a damn good reason why it gets upgraded and updated all the time. Sure, those upgrades aren’t always the best, as anyone who used Windows Vista for more than five minutes knows all too well. At least future upgrades can fix the crap that the last upgrade created.

We take those upgrades for granted because the human body doesn’t get upgraded. We’re born into one body. We’re stuck with that body. The chemistry within that body is prone to all sorts of flaws. According to the National Human Genome Research Institute, there are approximately 6,800 rare genetic diseases that we know about. There’s a real possibility there are a few that we don’t know about and won’t know about until someone suffers horribly from one of them.

Those are a lot of flaws and for most of human history, we couldn’t do squat about them. Our best bet was to just hope we weren’t born with any of these flaws and those are some pretty long odds. As someone who regularly loses at poker, I know all too well how long odds tend to screw people over and not just with money.

We’re lucky to live in an age where some conditions can be managed, but even if we’re lucky enough to be born without these flaws, there’s still the matter of upkeep. I’m not just talking about staying in shape, avoiding hungry bears, and not eating things that’ll make you violently throw up your internal organs. Even when we give the human body everything it needs, it’s still woefully inefficient at making use of those needs. Hell, it’s downright lazy at times.

The human body, like the bodies of all animals, is designed for two things: survival and reproduction. That body evolved on the African savanna and while it has proven adaptable to many environments and conditions, it’s still a crude piece of hardware that’s easily broken, easily wounded, and breaks down over time.

The human body is not like a car. Sure, we can spray on a tan, even if it makes us look like an old baseball glove. Sure, we can get plastic surgery and expensive skin treatments, even if they make us look like glorified crash test dummies with wigs. The human body still breaks down. It still ages and dies.

In fact, the rule of thumb in biology is that once your body stops growing, it starts dying. From an evolutionary perspective, it has to. Individuals of a species have to die in order to free up resources for the young and the growing who aren’t as close to dying. Is it harsh and crude? Yes, it most certainly is. However, it’s a process that has kept life going for a couple billion years on this planet so it must be doing something right.

Beyond its limited survival abilities, even the good parts are limited. The average male orgasm lasts only seven seconds. The average female orgasm lasts around twenty. Granted, those brief moments are a lot of fun, but who among us hasn’t longed for a longer stay in O-Town?

By that same token, who among us hasn’t longed for a body that doesn’t bloat up after eating a tub of ice cream? Who hasn’t longed for a body that doesn’t need parts of it shaved every other day? Who hasn’t longed for a body that isn’t hung over after drunk karaoke night at the bar? Who, I ask?

Clearly, there are many aspects about our bodies that we would like to improve. Fixing these flaws in the body is a critical step in fixing the flaws that emerge between people in general. Think about it. If you had the body of an Olympian and could eat chocolate-covered bacon every day without getting sick, would you be able to hate anyone? I think not.

It all seems like a fantasy, but it’s one I’ve explored before. In my book, “Skin Deep,” I explored the concept of changing your body, becoming as beautiful as you wanted to be and indulging in all the decadent pleasures you wanted. That’s every supermodel, actor, and professional athlete’s dream. You think Brett Favre would’ve retired if he had that option? I think not.

While the mechanisms I used in this book were fantasy, complete with technobabble refined from years of comic book and Star Wars jargon, the concept is not entirely fantasy. There actually is some science behind the inspiration to this story that was full of sex, romance, and teen melodrama.

How is this possible? Well, keep in mind that there is one powerful trait that sets humans apart from the chimps, spiders, and pond scum of this planet. We build things. We build very elaborate things. I’m not talking about sticks and spears. I’m talking about big ass buildings in the middle of the desert.

The ability to build shit that helps us survive is a big reason why the human race is the dominant species on this planet. Beyond the big buildings, we also build amazing medical devices like CRISPR, which is to genetic engineering what a wrench is to a mechanic.

We are rapidly advancing to a time when we’ll have the tools to finally give our bodies the upgrades they need. That raises all sorts of fundamental ethical and moral questions, but those questions aren’t very sexy so I won’t be touching on them too much.

Instead, I’m going to ask a simple question that I hope to answer in multiple posts. How would you upgrade the human body? Specifically, how would you upgrade it in a way that would allow us to be more intimate, more loving, and more understanding of one another?

There are all sorts of crazy enhancements that some would pursue just for the hell of it. I’m sure there are some people out there who would enhance their bodies in ways that would help them get the hell away from people in general. Sheldon Cooper of “The Big Bang Theory” comes to mind, but he’s an unapologetic asshole so this little thought experiment doesn’t apply to him.

For most of the non-Sheldon Cooper population of this planet, we’re an affectionate species that seeks intimacy with others. Our bodies provide some mechanisms for that, but those mechanisms are woefully limited by the forces of biology, evolution, and all the petty, unjust bullshit we tack onto it. There has to be a better way, right?

So before I start talking about specifics, think about this question. Think about how you seek love, intimacy, and understanding right now. How would you make that better? How will future tools make that better? Would you want orgasms to last longer? Would you want to share your thoughts with someone? Would you want to improve your senses so you can hear, touch, smell, and see your lover more clearly? What would you do?

Think about it and don’t shy away from the sexy implications. Hell, embrace them! These are implications that I want to explore as an erotica/romance writer and I think they’re worth exploring. At some point, the future will catch up to us and it will affect us. If we’re not ready, we’ll miss out on the sexiness and who wants to miss out on that?


Filed under Jack Fisher's Insights