
Artificial Intelligence, Deep Fakes, And The (Uncertain) Future Of Reality


Picture the following scenario that may or may not end up being a common occurrence in the near future. It’s not a thought experiment. It’s not a prediction either. It’s just a possible manifestation of what our future might hold.

It’s late at night and you decide to check out some porn. You struggle to decide which video you want to watch. You’re in the mood for something new, so you search a little more. You find some elaborate scene where Amy Schumer is a transvestite and she’s doing it with Justin Bieber.

Eventually, you settle on the hottest new scene that just came out the other day. It has Kevin Hart, Steph Curry, and Michael B. Jordan all taking turns with Scarlett Johansson in a sauna in Paris. The scene plays out. You love every minute of it and decide to save it.

I admit that scenario was pretty lurid. I apologize if it got a little too detailed for some people, but I needed to emphasize just how far this may go. It’s an issue that has made the news lately, but one that may end up becoming a far greater concern as technological trends in computing power and artificial intelligence mature.

The specific news I’m referring to involves something called “deep fakes.” What they are doesn’t just have huge implications for the porn industry. They may also have major implications for media, national security, and our very understanding of reality.


In essence, a deep fake is a more elaborate version of Photoshopping someone’s face into a scene. That has been around for quite some time, though. People pasting the faces of celebrities and friends into pictures from porn is fairly common. It’s also fairly easy to identify as fake. The technology is good, but not indistinguishable from reality.

That may be changing, though, and it may change in a way that goes beyond making lurid photos. Computer technology and graphics technology are getting to a point where the realism is so good that it’s difficult to discern what’s fake. Given the rapid pace of computer technology, it’s only going to get more realistic as time goes on.
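To make that idea concrete, here is a minimal sketch of the kind of crude face swap the paragraph describes, using Python and OpenCV. Everything in it is illustrative: the file names are placeholders, and this cut-resize-and-blend approach is roughly where the technology sat before machine learning entered the picture.

```python
import cv2
import numpy as np

# Placeholder file names, for illustration only.
source = cv2.imread("celebrity_photo.jpg")   # the face being borrowed
target = cv2.imread("target_frame.jpg")      # the frame it gets pasted into

# Stock Haar cascade that ships with OpenCV for frontal face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(image):
    """Return the (x, y, w, h) box of the biggest detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return max(faces, key=lambda box: box[2] * box[3])

sx, sy, sw, sh = largest_face(source)
tx, ty, tw, th = largest_face(target)

# Cut out the source face and resize it to fit the target's face region.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Blend the patch into the target frame so the seams are less obvious.
mask = np.full(face_patch.shape, 255, dtype=face_patch.dtype)
center = (tx + tw // 2, ty + th // 2)
swapped = cv2.seamlessClone(face_patch, target, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("swapped.jpg", swapped)
```

Even with the blending, a swap like this is usually easy to spot, which is exactly the point: the leap comes when a neural network learns to generate the face instead of pasting it.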

That’s where deep fakes clash with the porn industry. It’s probably not the biggest implication of this technology, but it might be the most relevant in our celebrity-loving culture. In a sense, it already has become an issue and it will likely become a bigger issue in the coming years.


It started when PornHub, also known as the most popular porn site on the planet, took a major stand by removing deep fakes from its website. Specifically, there was a video of Gal Gadot, also known as Wonder Woman and a person I’ve praised many times on this blog, being digitally inserted into a porn scene.

Now, it’s not quite as impressive as it sounds. This wasn’t a fully digital rendering of an entire scene. It was just a computer superimposing Gal Gadot’s face onto that of a porn actress. In terms of pushing the limits of computer technology, it didn’t go that far. It was just a slightly more advanced kind of Photoshopping.

Anyone who has seen pictures of Gal Gadot or just watched “Wonder Woman” a hundred times, like me, could easily tell that the woman in that scene isn’t Ms. Gadot. Her face literally does not match her physique. For those not that familiar with her, though, it might be hard to tell.


That’s exactly why PornHub removed it. Their position is that such deep fakes are done without the explicit permission of the person being depicted and constitute an act of revenge porn, which has become a major legal issue in recent years. These are PornHub’s exact words.

Non-consensual content directly violates our TOS [terms of service] and consists of content such as revenge porn, deepfakes or anything published without a person’s consent or permission.

While I applaud PornHub for making an effort to fight content that puts beloved celebrities or private citizens in compromising positions, I fear that those efforts are going to be insufficient. PornHub might be a fairly responsible adult entertainment company, but who can say the same about the billions of other sites on the internet?

If that weren’t challenging enough, the emergence of artificial intelligence will further complicate the issue of deep fakes. That’s because before AI gets smart enough to ask us whether or not it has a soul, it’ll be built to perform certain tasks at a level beyond any human programmer. Some call this weak AI, but it still has the power to disrupt more than our porn collection.


In an article from Motherboard, an artificial intelligence researcher made clear that it’s no longer exceedingly hard for someone who is reckless, tech-savvy, and horny enough to create the kind of deep fakes that put celebrities in compromising positions. In fact, our tendency to take a million selfies a day may make that process even easier. Here’s what Motherboard said about just how much we’re facilitating deep fakes.

The ease with which someone could do this is frightening. Aside from the technical challenge, all someone would need is enough images of your face, and many of us are already creating sprawling databases of our own faces: People around the world uploaded 24 billion selfies to Google Photos in 2015-2016. It isn’t difficult to imagine an amateur programmer running their own algorithm to create a sex tape of someone they want to harass.

In a sense, we’ve already provided the raw materials for these deep fakes. Some celebrities have provided far more than others and that may make them easy targets. However, even celebrities that emphasize privacy may not be safe as AI technology improves.


In the past, the challenge for any programmer was ensuring every frame of a deep fake was smooth and believable. Doing that by hand, frame by frame, was grossly inefficient, which put a natural limit on deep fakes. Now, artificial intelligence has advanced to the point where it can make its own art. If it can do that, then it can certainly help render images of photogenic celebrities in any number of ways.
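The leap from hand-tuned edits to AI-rendered frames is usually explained through the trick behind the original “deepfakes” software: one shared encoder paired with a separate decoder per identity. The PyTorch sketch below is a simplified assumption of that architecture, with made-up layer sizes and no training loop, just to show where the face swap actually happens.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses any face into a small latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: rebuilds one specific person's face."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # trained only on faces of person A
decoder_b = Decoder()  # trained only on faces of person B

# Training (omitted here) teaches each decoder to reconstruct its own person
# from the latent code produced by the single shared encoder.

# The swap: encode a frame of person A, then decode it with B's decoder.
# The result is person B's face wearing person A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
fake_frame = decoder_b(encoder(frame_of_a))
print(fake_frame.shape)  # torch.Size([1, 3, 64, 64])
```

Real tools use convolutional layers and far more data, but the division of labor is the same, and it’s why that quote about sprawling selfie databases matters: the decoders are only as good as the pile of face images they’re trained on.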

If that weren’t ominous enough, there’s also similar technology emerging that allows near-perfect mimicry of someone’s voice. Just last year, a company called Lyrebird created a program that mimicked former President Obama’s voice. It was somewhat choppy and most people would recognize it as fake. However, with future improvements, it may be next to impossible to tell real from fake.

That means in future deep fakes, the people involved, be they celebrities or total strangers, will look and sound exactly like the real thing. What you see will look indistinguishable from a professionally shot scene. From your brain’s perspective, it’s completely real.

[Image: a realistic digital rendering of Obama. One of these is real and the other is fake. Seriously.]

That blurring of virtual reality and actual reality has huge implications that go beyond the porn industry. Last year, I pointed out how “Star Wars: Rogue One” was able to bring a long-dead actor back to life in a scene. I highlighted that as a technology that could change the way Hollywood makes movies and deals with actors. Deep fakes, however, are the dark side of that technology.

I believe celebrities and private citizens who have a lot of videos or photos of themselves online are right to worry. Between graphics technology, targeted artificial intelligence, and voice mimicry, they’ll basically lose control of their own reality.


That’s a pretty scary future. Deep fakes could make it so there’s video and photographic evidence of people saying and doing the most lurid, decadent, offensive things that it’s possible for anyone to do. You could have beloved celebrities go on racist rants. You could have celebrities everyone hates die gruesome deaths in scenes that make “Game of Thrones” look like an old Disney movie.

The future of deep fakes makes our very understanding of reality murky. We already live in a world where people eagerly accept as truth what is known to be false, especially with celebrities. Deep fakes could make an already frustrating situation much worse, especially as the technology improves.

For now, deep fakes are fairly easy to sniff out and the fact that companies like PornHub are willing to combat them is a positive sign. However, I believe far greater challenges lie ahead. I also believe there’s a way to overcome those challenges, but I have a feeling we’ll have a lot to adjust to in a future where videos of Tom Hanks making out with Courtney Love might be far too common.


How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)


Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem without offering a solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.

When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.

For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.

I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.

That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been sketched out in a video game that got average-to-good reviews last year.

Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.

Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.

That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.

That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.

In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components to the story and the gameplay experience. It’s also not unreasonable to say it has the most implications of any story element in the “Mass Effect” universe.

That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM’s Watson or the robots at Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.

This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.

In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI not only has less incentive to wipe humanity out; we also have just as much incentive to give it the data it needs to do its job.

Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature: two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.
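As a cartoon of that symbiosis, and not of anything in the game’s actual code, here is a toy Python model in which the AI only grows when its human partner feeds it sensory data and the human only benefits when the AI hands guidance back, so each side has a built-in reason to keep the other around.

```python
import random

class SymbioticAI:
    """Toy model: the AI's capability grows only with data it receives."""
    def __init__(self):
        self.capability = 1.0

    def ingest(self, observations):
        # More sensory data means more capability; no data, no growth.
        self.capability += 0.1 * len(observations)

    def guidance(self):
        # The more capable the AI, the more useful the advice it returns.
        return self.capability

class HumanHost:
    """Toy model: the human supplies observations and gains from guidance."""
    def __init__(self):
        self.wellbeing = 1.0

    def observe_world(self):
        return [random.random() for _ in range(5)]

def simulate(steps=10):
    ai, human = SymbioticAI(), HumanHost()
    for _ in range(steps):
        ai.ingest(human.observe_world())   # the human feeds the AI its senses
        human.wellbeing += ai.guidance()   # the AI pays the human back
    return ai.capability, human.wellbeing

print(simulate())
```

It’s deliberately crude, but it captures the incentive structure the game describes: the AI starves without the human’s senses, and the human stagnates without the AI’s guidance.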

Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics of SAM in the game may very well be a precursor of things to come.

Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.

Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.

However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment with SAM. By the end, SAM even develops an attachment with the character. It goes beyond symbiosis, potentially becoming something more intimate.

This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.

The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.

In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.

Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.

Granted, that is an extreme measure that would give many some pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces its value and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.

From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much with that intelligence if it has no means of interacting with the outside world.

Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder avoided the fate of the Quarians and the Geth by designing SAM so that there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was redundant.

In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.

That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.

Now, I know that sounds overly utopian. Being the optimist I am, one who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate them. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.

However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can ensure our future AIs tell better jokes.

 
