Tag Archives: Neuralink

Prison Or Brain Hacking? A Choice That May Shape Our Future

How does a civilized society deal with its least civilized individuals? This is a question that every society has had to answer, going back to the hunter/gatherer era. We live in an imperfect world full of imperfect individuals. Some are more imperfect than others, so much so that it’s not always possible to reform them into functional members of society.

Most people who commit crimes are not monsters, nor are they sadists who take joy in torturing the innocent. The vast majority are just people who find themselves in bad situations, where they make wrong decisions, exercise poor judgment, or lack impulse control. For these people, fines and brief imprisonment are usually sufficient.

For those who become career criminals, neither respecting the law nor seeking to abide by it, the current system is woefully inadequate. It’s part of the reason why criminal justice reform has become a hot topic. We’re finally learning that throwing people into prisons where they’re dehumanized, degraded, and tortured doesn’t help them become productive members of society. Go figure.

There’s plenty of room for improvement. Some countries have demonstrated that there are more effective, more humane ways to treat criminals. However, even those systems have their limits. As long as human beings remain an imperfect species, we’ll still have to deal with these deviant, violent, and inherently dangerous individuals.

For the moment, our options for dealing with these people are few, primarily incarceration or intensive therapy, often coupled with medication. While this can be helpful to some, there are severe limitations. Some individuals don’t even want treatment, and even those who do get it don’t always respond.

With that in mind, allow me to present a not-quite-hypothetical scenario. What if, instead of prison or therapy, we gave offending criminals an option to undergo an invasive treatment that affects the primary source of their deviant behavior in the brain? Jail is still an option for those who aren’t keen on messing with their brain wiring, but for certain people, an alternative is an alternative.

What I just described is one of those concepts in which the science is there, but the technology and the courts haven’t caught up to it. I know whenever I talk about emerging technology, be it sex robots or artificial wombs, I venture pretty far into speculation territory. Some of these advances rely on science and tools that don’t yet exist. This isn’t one of those cases.

In July 2018, the Journal of Neuroscience published a study revealing that targeted stimulation of the prefrontal cortex reduced aggressive tendencies in test subjects. Before you start getting fever dreams of mad scientists strapping people to gurneys and sticking wires in their ears, you can rest easy. This isn’t the kind of electroshock treatment that finds its way into one too many horror movies.

These treatments have ground-breaking implications. They prove that it’s possible to temper or mitigate certain behaviors in people. The study doesn’t specify the limits of the effects or whether they can be applied to anything other than aggressive behaviors. It’s still a proof of concept, and one that could compound the impact of other emerging technologies.

We already have tools like CRISPR that allow us to tweak our genes. We also have companies like Neuralink that are actively working on implants that could fix, augment, or expand our brain capacity. While men like Elon Musk and Ray Kurzweil often discuss these advances within the context of keeping humanity on pace with artificial intelligence, there will likely be some interim uses for these technologies.

Tempering violent behavior in people with significant cognitive impairments is just one possible use, but one that has the potential to change how we think about crime and punishment. Think back to those people I mentioned earlier who are just inherently violent. They can’t manage their emotions or control their anger. They don’t think before they act and some don’t even feel guilty about what they do.

Like it or not, these people exist. I’ve known people in my life who have terrible impulse control and fly into a rage over the smallest things. Some of those people have had issues with the law and I often see in them a sense of never-ending frustration. Many don’t like that they have these issues. A few have tried to get help, but it doesn’t always work.

I suspect that if some of those people were given a chance to treat their tendencies with targeted shock therapy or a brain implant, they would jump at the chance. Deviant tendencies aside, they seek some level of function in their lives. If tweaking their brain is the difference between prison and freedom, then they’ll take that risk.

Turning people who might have been unrepentant psychopaths into productive, non-violent members of society is an objective good. The technology to do just that is not that far off and more study could help us refine the process, so much so that prison might be less necessary in certain cases. Given how expensive it is to imprison people, it’s an alternative worth pursuing.

Along with that undeniable good, however, there are plenty of potential dangers. Anyone who has ever seen one too many psychological thrillers or just read “One Flew Over The Cuckoo’s Nest” can easily imagine how this kind of technology could be abused.

Tempering someone’s violent behaviors is all well and good, but why would it stop there? The brain is capable of all sorts of behaviors, deviant and otherwise. Say a society determines that other non-violent behaviors, be it sexual promiscuity or binge-watching Netflix for too many hours, are not socially desirable. What’s to stop them from imposing this on their citizens?

Some countries probably already fantasize about technologies that enable them to directly pacify their citizens, rendering them weak, passive, and easily manipulated. In his famous novel, “1984,” George Orwell called these people proles. However, in the book, the deviants had to be tortured and re-educated. If Big Brother had access to this technology, it would be a simple medical procedure.

That has plenty of terrifying possibilities for abuse. What if someone uses brain stimulation to prevent people from having homosexual urges? What if someone uses it to treat those who identify as transgender? There’s no evidence that the techniques in the study would work on that, but there’s no evidence to say it’s impossible.

Its use will definitely be controversial. That much, I’m certain of. The technique isn’t yet advanced enough to be a legitimate treatment for much of anything. At the moment, direct brain stimulation is used for a narrow set of conditions, and it’s often a last resort. Using it on otherwise healthy people who just want to curb their violent urges is uncharted territory.

Whether it enters the picture for criminal justice reform is anyone’s guess, but if the process works, someone who has dealt with one too many repeat offenders will try to use it. From there, the precedent will be set. It’s hard to say what form it’ll take, but it’ll take society into uncharted territory with respect to controlling our minds.

Perhaps, at first, the process would be voluntary and only be presented in conjunction with jail or some other treatment. It’s also possible that the courts will determine a strict set of criteria for when the state could force this treatment onto someone. There are probably a few repressive governments who would try to use this on an industrial scale. I won’t say their names, but most people know who they are.

Like any emerging technology, there are risks and rewards worth considering. We stand to benefit greatly by having a society with as few violent individuals as possible. We also stand to lose a great deal if we allow misguided authority figures to determine how we use this technology.

I’m not qualified to determine whether or not someone should have their brain hacked. I don’t know that anyone is. However, I also don’t deny that the human brain, as magnificent as it is, has plenty of flaws. We should go about fixing those flaws, especially in people who are disproportionately impacted by them. We just have to be very careful about how we manage it.

Filed under futurism, human nature, psychology, sex in society, Sexy Future, Thought Experiment

How Advanced AI Will Create Figurative (And Literal) Magic

If you went back 50 years and showed someone your smartphone, chances are they would be amazed. To them, such technology would seem downright alien. However, they probably wouldn’t think it was magic. Go back 500 years, though, and chances are they would think a smartphone is magical, miraculous, or a tool of the devil.

Just look at what a smartphone does and compare it to the magic of old. You can ask it a question and, depending on how well-worded it is, it’ll give you an answer. If you ask it to make food, clothes, or tools appear, it’ll make that happen too. Thanks to services like Amazon and Grubhub, this isn’t magic to most people. In fact, it’s downright mundane.

Granted, these things won’t appear instantly out of thin air, but depending on your willingness to pay for quicker shipping, they’ll get there. By medieval standards, that’s basically sorcery.

You don’t have to go too far back in time to appreciate the magic of modern technology. Most of us don’t understand how it works. We don’t know what makes the screens on our phones light up when we push a button or how our car moves when we press the accelerator. We understand that there’s science behind it and it’s not magic. It just feels like it from a certain perspective.

Famed science fiction author Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic; in other words, magic is just science we don’t yet understand. It was one of the three laws he used in contemplating the future. Time and a host of amazing advances have proven the validity of this sentiment. We’ve created materials once thought to be impossible. We’ve uncovered phenomena that seem to undermine our understanding of physics.

This is to be expected because our understanding of the universe is incomplete. We have some pretty solid theories so far, but there’s still a lot we don’t understand. As we learn more, some of the things we discover may seem magical. Even in a world that is more educated than it has been at any point in human history, there may still be forces that our primate brains just can’t make sense of.

To some extent, it helps that humanity is making these discoveries through its collective effort. It’s easier to accept a seemingly-impossible idea if it comes from a member of the same species. What happens, though, when we gain knowledge from something that is both not human and many times smarter than the entire human race? Will it seem like magic to us?

I argue that it would. I would also argue that we’ll be seeing this kind of magic sooner than you think. It won’t come from some enigmatic sorcerer with a thick beard, a white robe, and an uncanny resemblance to Ian McKellen. It’ll likely come from the world of advanced artificial intelligence.

In the past, whenever I’ve talked about advanced artificial intelligence, I’ve focused on its potential to fundamentally change our civilization and what it means to be human. I haven’t touched on how it might work, mostly because I’m not nearly smart enough to make sense of it. However, that gives me more in common with the experts than you think.

In the emerging but rapidly growing field of artificial intelligence, there’s a strange phenomenon known as black box AI. Simply put, this is when we understand the data that goes into and comes out of an AI system. We just don’t know how it went about processing that data. It’s like putting a slab of meat in an oven, pressing a button, and getting a Big Mac without knowing how it was made.
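
For anyone curious what that black box looks like outside the fast food metaphor, here’s a toy sketch in Python using the scikit-learn library (purely illustrative, and assuming nothing about any particular real-world AI system): the input and output of a small neural network are perfectly visible, but the learned numbers in between mean nothing to a human reader.

    from sklearn.datasets import load_iris
    from sklearn.neural_network import MLPClassifier

    # A small, well-understood dataset: flower measurements in, species out.
    X, y = load_iris(return_X_y=True)

    # Train a tiny neural network on it.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)

    # The input and the output are perfectly transparent...
    print("Input:", X[0])                   # four measurements of one flower
    print("Output:", model.predict(X[:1]))  # the species the model predicts

    # ...but the "reasoning" in between is just grids of learned numbers.
    for i, weights in enumerate(model.coefs_):
        print("Layer", i, "weights:", weights.shape, "- not individually meaningful")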

It’s not quite magic, but it’s a manifestation of Arthur C. Clarke’s ideas on science and magic. AI systems today are advancing at a pace that we can’t hope to keep up with. We already have systems that can surpass any human at Jeopardy, chess, and Go. We don’t yet have a system with the intellectual capacity of an adult human, but most experts believe we’re well on our way to achieving that.

When that day comes, we may very well have an AI that does more than just process data in ways we don’t understand. Once an AI is capable of matching or exceeding the intellectual capacity of an average human, then it’s likely the black box phenomenon will become more pronounced.

Imagine, for a moment, we had an AI that was smarter than even the smartest human beings on the planet. We go to that AI, feed it every gigabyte of data we have on human biology, and ask it to devise a cure for cancer. It takes only a few seconds to process all that data. Then, with ease, it spits out a formula that has eluded generations of doctors.

We don’t know what form it may take. We may not even fully understand its components. However, it still works. From our perspective, it’s akin to a magical healing elixir straight from the world of Tolkien. We assume there’s some sort of science behind it, but we’re utterly baffled by the specifics. We just know it works.

It goes beyond medicine, as well. With an even more advanced AI, we could feed it every one of our most advanced theories about physics, biology, chemistry, and cosmology. We could then ask it to fill in all the gaps. Again, it gives us an answer and suddenly, we have a Theory of Everything.

We probably won’t understand the details. We may find out that we were dead wrong about particle physics, cosmology, or why our cell phone can’t hold a decent charge anymore. The knowledge such a system gives us could end up being so advanced that we literally do not have the intellectual capacity to understand it. It would be like an ant trying to do calculus.

In the same way a magnifying glass must seem like magic to an ant, the knowledge an advanced AI gives us may seem just as extraordinary. That’s especially true if we give that AI access to a 3D printer, a molecular assembler, or anything it could use to actually craft something.

That could be especially dangerous. For all we know, a sufficiently advanced AI could take a stack of dirty dishes and turn it into a nuclear bomb. We would have no idea how it would work. It would, for all intents and purposes, seem like magic to us. This thing would be doing something that our brains and senses tell us is impossible.

As the AI gets more advanced, its abilities and feats become more magical. At that point, it’ll be harder to accept that what it does counts as science. These advances are no longer coming from the efforts of people. They’re coming from a machine that’s millions, if not billions, of times smarter than any ordinary human could ever hope to be. How could it not seem magical from that perspective?

Throughout human history, sane and competent people have believed in magical things. Not long ago, people believed they could talk to the dead. Sir Isaac Newton believed in alchemy. Alfred Russel Wallace believed in literal spirits. Despite these beliefs, there was an important context to all these perspectives.

They emerged out of our collective ignorance of the world around us. We had nothing but our brains and our senses to make sense of it all. Since both can be easily fooled, as any competent street magician will attest, it doesn’t take much to get people to assume magic. An artificial intelligence would circumvent that context because it has something better than magic.

An advanced AI is not bound by the same senses that constantly fool ordinary humans. It doesn’t even have to misdirect or trick us. It only has to show us ideas and concepts that are completely real, but totally incomprehensible. The entire human race could spend a million years trying to understand it and it still wouldn’t be enough. It would still seem like magic to us.

That notion seems scary on paper and more than a few people have voiced concerns about this. However, all that magical thinking will only occur if our human brains remain unchanged and unenhanced. That’s not likely to be the case. Between the emergence of neural implants and the ongoing development of brain/machine interfaces, we’ll find a way to keep up with AI. If we want to survive as a species, we’ll have to.

Even if we do somehow keep up, there may still be aspects of advanced AI that seem like magic to us. That may always be the case, so long as we retain part of our caveman brains. Personally, I don’t think that’s a bad thing. No matter how smart or advanced we get, it helps to see a little magic in the world. With advanced AI, though, the rules for magic are bound to change, among many other things.

Filed under Artificial Intelligence, futurism

The (Distant) Future Of Marvel, Disney, And Entertainment

I like to talk about the future. I don’t claim to have any special insight, but I suspect I give it more thought than most. I also believe I tend to think farther into the future than most. Whether it’s contemplating the future of how we’ll organize our society or how our sex lives will evolve, I try to contemplate possibilities beyond the next iPhone upgrade.

One aspect of the distant future that concerns me has to do with boredom, namely how it may become a much larger problem and how we’ll go about alleviating it. I’ve done plenty to argue that boredom can be a dangerous force, from creating immortal super-villains to subverting the very concept of Hell. If our future is to be stable, prosperous, and fun, we’ll need some form of entertainment.

With that critical goal in mind, I’d like to speculate on a potential brand of future entertainment that ties directly to the industry we know today. Specifically, I’d like to imagine how big entertainment companies like Disney will continue to function in a world where advanced artificial intelligence, brain implants, and near-universal access to the internet are a thing.

I feel the time is right to think about such things because just last week, Disney radically altered the entertainment industry by purchasing Fox. Beyond just getting the X-men and Fantastic Four rights back for Marvel, Disney bought a massive library of intellectual property that is potentially worth billions. Being a successful business with shareholders, and all, they’re going to want to make billions more.

How exactly are they going to go about that, though? That’s a question worth asking because the answer for the near future is probably not going to work for the distant future. Sure, Disney will probably rake in plenty of profits at the box office, just as they’ve done with Pixar, Marvel, and Star Wars. However, the movie and toy industry can only go so far.

While box office revenue is up, actual ticket sales are way down. More people are opting to stream their content directly, bypassing pay TV and theaters entirely. The same is true for print media, including comic books. Even toy sales are in decline. This is not good for a company like Disney, which has built its empire on media and merchandise.

That’s not to say things are dire. Disney has been around for almost 100 years. In that time, it has adapted through plenty of upheavals. If it’s going to survive another 100 years, though, it’ll have to adapt to a radically different landscape. Buying Fox is likely part of that process. Disney has already made clear that it plans to start a streaming service to compete with Netflix and Amazon.

That’s a good start, but a streaming service is probably not going to be enough, especially in a future where people live longer, work less, and can share more than just text messages with one another. If Disney wants to continue being at the forefront of entertainment, it’ll have to innovate in ways that leverage future technology in new ways.

After the purchase of Fox, though, Disney may actually be in the best possible position compared to every other entertainment company that exists today. That’s because, unlike its competitors, it has a wealth of intellectual property that it owns outright. From Mickey Mouse to Marvel heroes, the library of Disney-owned characters is truly staggering.

In the past, this gave Disney the ability to make or license movies, toys, and games for billions. In the future, those mediums won’t be nearly as profitable, but not because those things will fall out of style. I believe that for Disney to make more billions, it’ll utilize its intellectual property in a very different way, one that will likely require an entirely new approach to entertainment.

Think, for a moment, about the current experience you get from a movie theater, a TV show, or even a live show. You sit in a seat and you just watch. You take in the sights and sounds. If done right, it creates a spectacle that you enjoy. However, the fact that the spectacle only engages a couple of your senses is somewhat limiting.

What if, instead, you weren’t just an audience member sitting in a seat? What if it actually felt like you were there? What if you felt like you were standing next to Captain America as he battled the Red Skull? What if you felt like you were there when Mickey Mouse, Donald Duck, and Goofy all broke out into a joyous musical number?

I’m not just talking about better animation or virtual reality. I’m talking about a form of entertainment that makes your brain actually feel as though you’re experiencing something. It’s not quite like the holodeck on “Star Trek.” It’s more like plugging into “The Matrix,” but for reasons other than learning Kung Fu or having existential breakdowns.

Unlike “The Matrix,” though, you wouldn’t be the catalyst for the story. That’s something Disney would take care of, providing only the world and the vast array of sensations that come with it. Instead of paying for a movie ticket, you pay for an experience that lets you interact with or feel like part of a story involving Iron Man, Mickey, or Buzz Lightyear.

That will likely be the most valuable resource of future entertainment, powerful experiences that give customers the rush and fulfillment of being there. Instead of going to a theater or theme park, they would just plug something into their brains, possibly through an implant like the ones Elon Musk is developing with Neuralink. From there, the experience will be directly streamed right into their brain.

It may sound invasive, but we already share so much of ourselves online, from what we had for lunch to the most intimate aspects of our personal lives. We’re already in the early stages of merging with our technology. We already see our smartphones as integral parts of our lives. Why wouldn’t we do the same for brain implants?

Unlike a smartphone, a machine/brain interface can’t be dropped into the toilet or left behind by accident. That same interface won’t just augment the ability of our brains to access the entire wealth of human knowledge. It’ll allow us to directly stimulate the areas that forge our entire perception of the world around us.

This has huge implications, some more profound than others. For companies like Disney, though, that link will be critical to maintaining its place as a dominant entertainment company. People already pay for powerful experiences, be they movies, video games, or a full-body massage at a spa. Disney could simply cut out the middle-men while leveraging its vast library of intellectual property.

Sure, in the future, you could probably pay for fancy experiences like those offered in “Total Recall.” However, if you want an experience that allows you to be a Jedi, an Avenger, or a singing animal, you’ll have to go through Disney and they’ll be happy to sell you that experience for a price.

Every week, you’ll be able to select from a range of intense experiences the same way you navigate your Netflix queue. For some, you don’t need to leave your bed. You just plug a device into your brain and let it go from there. For others, maybe you travel to special venues that function like the holodecks in “Star Trek.” There, you could share the experience with others, making it a communal experience.

Disney would still likely need content-creators to craft those experiences. That means people like George Lucas and Kevin Feige will still have a job in this future. The particulars of those jobs would be very different, but the goal would be the same. They would create experiences and stories that people are willing to pay for.

As unpredictable as the future is, it’s still safe to assume that people are going to want entertainment. Wherever there’s a want, there will be a business willing to provide it. There will be competition. There will be billions, if not trillions, to be made in profits. Not every company around today will survive that competition. Disney, however, is already in the best possible position to thrive.

Filed under Artificial Intelligence, futurism, media issues, movies, Sexy Future, War on Boredom

How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)

Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem without a solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.

When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.

For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.

I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.

That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been surmised in a video game that got average-to-good reviews last year.

Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.

Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.

That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.

That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.

In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components to the story and the gameplay experience. It’s also not unreasonable to say it has the most implications of any story element in the “Mass Effect” universe.

That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM Watson and Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.

This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.

In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI has less incentive to wipe humanity out, and we have just as much incentive to give it the data it needs to do its job.

Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature, two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.

Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics of SAM in the game may very well be a precursor of things to come.

Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.

Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.

However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment with SAM. By the end, SAM even develops an attachment with the character. It goes beyond symbiosis, potentially becoming something more intimate.

This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.

The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.

In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.

Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.

Granted, that is an extreme measure that would give many some pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces its value and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.

From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much with that intelligence if it has no means of interacting with the outside world.

Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder set SAM apart from the Geth and the Quarians by making it so there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was redundant.

In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.

That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.

Now, I know that sounds overly utopian. Being the optimist I am, one who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate them. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.

However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can ensure our future AIs tell better jokes.

Filed under futurism, human nature, Mass Effect, Sexy Future, video games

The (Uncomfortable) Questions We’ll Have To Answer With Human Enhancement

In general, I tend to be optimistic about the future. I know that seems crazy, given our current political climate, but I try to look beyond the petty grievances and focus on the bigger picture. By so many measures, the world is getting better. The human race is on an unprecedented winning streak and we’re only getting better.

A great deal of this improvement is due, largely, to our ability to make increasingly amazing tools. As I type this, countless people who are far smarter than I’ll ever be are working on advances that will keep us healthier, make us smarter, and help us transcend our physical and mental limits by orders of magnitude.

This is all exciting stuff. We should all look forward to a future where we never get sick, we never age, and we have the physical and sexual prowess of an Olympic athlete on meth. The aspiring erotica/romance writer in me is giddy with excitement over the sexy possibilities.

Like all advancements, though, there will be a cost. Even the greatest advancements mankind has ever made in science, technology, and sex have come at a cost. It’s just the nature of the chaotic world we live in. Nothing is ever smooth and easy when there are so many chaotic forces that we can’t always make sense of.

That’s why for some of these advancements, such as CRISPR, biotechnology, and artificial intelligence, we have to be extra proactive. We’re not just talking about tools that make it easier to defend ourselves against a hungry lion. These are tools that will fundamentally change what it means to be human.

They’ll take the caveman logic and tribalism that has guided the human race for its entire existence and throw it out the window. They’ll completely rewrite the rules of human nature, crossing lines and redrawing them in ways that even a kinky mind like mine can’t imagine. It won’t just be an overwhelming transition. For some, it’ll be downright traumatic.

Given that there are over seven billion humans on this planet, there will be a lot of moving parts to this transformation. Before we can even think about taking the first steps in that process, we need to ask ourselves some serious, unsexy questions. As much an optimist as I am, I cannot deny the need for caution here.

That’s why I’ll take a step back, keep my pants on, and ask some of these unsexy questions. I understand this won’t exactly get everyone in the mood, but given the rate at which our technology is advancing, we need to be extra proactive. That way, we can get through the hardest parts of the process and get to the sexy parts.


Uncomfortable Question #1: Who (Or What) Gets To Decide How Much We Enhance Ourselves?

This will probably be the most pressing question once the technology becomes refined enough for the commercial market. Most technology goes through a progression. We saw it with the development of cell phones. At first, only business tycoons and drug lords could afford them or even had a use for them.

That model might have worked for cell phones. It’s not going to work for something like CRISPR or smart blood. That’s because, unlike cell phones, the poorest and the impoverished are the ones most in need of these tools. They’re also the ones that stand to benefit most, in terms of quality of life.

Historically speaking, though, the government has not treated the poor and impoverished very well. Take the same approach that we took with cell phones, and the rich and well-connected will be the only ones to benefit. They’ll also further widen the gap, so much so that they might be even less inclined to share.

That’s why the default answer to this question can’t just be the government or rich business interests. I’m not going to pretend to know who the authority will be or how they’ll even go about distributing these advances to people in a fair and just manner. I just know that our current method will not be sufficient.


Uncomfortable Question #2: How Do We Stop Certain Human Enhancements When They Go Wrong?

When your computer freezes, you reboot it. When your speakers start making strange noises, you turn them off. It’s a beautiful but underrated thing, having an off-switch. I’m sure we’ve all had people in our lives whom we wish had an off-switch. It’s a necessary fail-safe for a chaotic world that we can’t always manage.

Putting an off-switch on dangerous technology, especially something like artificial intelligence, is just common sense. It would’ve made “The Terminator” a lot shorter and a lot less confusing. With other advancements, especially those involving CRISPR and biotechnology, it’s not as easy as just installing an extra switch.

How do you turn off something that literally rewrites our DNA? How do you stop someone who has grown used to having superhuman abilities, by our standards? That’s akin to asking someone to make themselves sick or hack off a limb because the technology has some side-effects. That’s going to be a tough sell.

Again, I am not smart enough to imagine how a fail-safe for that sort of thing would work. It can’t just rely on blind faith, magical thinking, or whatever other tactics used car salesmen exploit. It has to be in place and up to speed as soon as this technology goes live.


Uncomfortable Question #3: How Independent/Dependent Will Human Enhancement Make Us?

Smartphones, running water, and free internet porn are great. However, they do require infrastructure. People today are at the mercy of whoever pays their cell phone bill, whoever knows the wifi password, and whoever can stop their toilets from overflowing with shit. To some extent, we all depend on certain institutions to keep our lives and our society going.

In a future of enhanced humans, imbued with traits and abilities that are way beyond the scope of our current infrastructure, how dependent or independent can they be in the grand scheme of things?

If they rely on a regular injection of nanobots or need to recharge every other day, then they’re going to have to rely on some form of infrastructure. That may help keep enhanced humans from becoming super-powered Biff Tannens, but it will also give a lot of power to whoever or whatever is supplying those resources.

In a sense, it can’t be one or the other. If enhanced humans are too independent, then they have no reason to interact or aid one another. If they’re too dependent on certain resources, then those controlling those resources become too powerful. There needs to be a healthy balance, is what I’m saying. There will be costs, but we have to make sure that the benefits far outweigh those costs.


Uncomfortable Question #4: How Much Of Our Humanity Do We Keep?

Let’s not lie to ourselves. There’s a lot about the human condition we wish we could change or drop altogether. Personally, I would love to never have to go to the dentist, never have to clip my toe nails, and never have to sleep, which is an advancement that’s closer than you think.

Humanity has a lot of flaws, which is a big part of what drives the development of these tools. However, there are certain parts of humanity that are worth preserving, and I’m not just talking about the health benefits of orgasms. Change too much about our bodies, our minds, and everything in between, and we cease to be human. At that point, why even care about other humans?

Maintaining a sense of humanity is what will separate enhanced humans from overpriced machines. Our sense of humanity is a big part of what drives us to live, love, explore, and understand. If we lose that, then we’re basically a very smart rock that’s only interested in maintaining its status as a rock.

To really expand our horizons, we need to preserve the best of humanity. Humans do amazing things all the time that remind us why humanity is worth preserving. When we start enhancing ourselves, we need to save those traits, no matter what we become.


Uncomfortable Question #5: How Will Society Function In A World Of Enhanced Humans?

We’ve built a good chunk of our society around our inherent flaws as humans. We form tribes to cooperate and survive in ways we can’t on our own. We seek leaders who are capable of guiding us to a functional, stable society. Granted, sometimes those efforts fail miserably, but the goal is the same.

With human enhancement, the rules aren’t just different. They’re obsolete. So much of our society is built around the idea that we’re still a bunch of cavemen with fancier tools that we really don’t have a concept of how we’ll function beyond that context. We have nation states, national identities, and various tribes to which we bind ourselves.

Those are all still products of our inherent drive towards tribalism. That’s still our default setting, as a species. What happens when we start tweaking those settings? Will things like nation states, government, and social circles even exist? When society is made up of a bunch of superhuman beings who can live forever and never need a sick day, how do we even go about functioning?

This is well-beyond my expertise, as an aspiring erotica/romance writer. It may be one of those things we can’t contemplate until after some of these advances take hold. At the very least, we need to put this question at the top of our to-do list when that time comes.


Uncomfortable Question #6: How Will Human Enhancement Affect Our Understanding Of Family And Love?

This is probably the most pressing question for me, as an aspiring erotica/romance writer. I’ve already highlighted some of the flaws in our understanding of love. Once humanity starts enhancing itself, it may either subvert those flaws or render them obsolete. In the process, though, it may create an entirely new class of flaws to deal with.

What happens to a marriage when the people involved live forever and don’t age? That whole “till death do us part” thing suddenly becomes an issue. What happens when having children is essentially uncoupled from romance, through tools like artificial wombs? What will love even feel like once we start enhancing our brains along with our genitals?

Since all love and passion still starts in the brain, which we’re already trying to enhance, any level of human enhancement will necessarily affect love, marriage, and family. Chances are it’ll take on a very different meaning in a world where marriage is less about tax benefits and more about new forms of social dynamics.

Human enhancement will change a lot about our bodies, our minds, and our genitals. It’ll affect so much more, including how we go about love and family. It’s still impossible to grasp since we’re all still stuck with our caveman brains. However, once that changes, this is just one of many issues we should contemplate if we’re to make the future better, sexier, and more passionate.

Filed under Sexy Future, Thought Experiment

A Drug That Eliminates The Need For Sleep (Is Almost Here)

Whenever I talk about the possibilities of human enhancement, sexy and otherwise, I do so with the hope that the benefits outweigh the costs. I understand that all progress comes at a cost. I also understand that it’s impossible to know the full extent of those costs until the genie is out of the bottle and the bottle is destroyed.

Nevertheless, I still think the risks we take with future technology are worth taking. In fact, I would argue we have to take them because our caveman tendencies toward tribalism and our inherent vulnerability to bullshit are a clear indication that our current situation is not working well enough. We, as a species, need to improve if we’re going to function on this confined planet.

Certain enhancements will do a lot more than others. I’ve mentioned emerging tools like smart blood, brain implants, and CRISPR. It’s impossible to overstate the kind of impact those advances will have on the human condition. They will be akin to giving a lightsaber to a chimp.

Other enhancements, however, will have a more subtle effect. They’re also likely to happen sooner, despite Elon Musk’s best efforts. That brings me back to sleep and the annoying need to spend a third of our lives doing it. I’ve already asked people to consider how their life would change if they didn’t have to sleep as much. Well, I have a confession to make. That was kind of a loaded question.

That’s because, as we speak, there are efforts underway to reduce or eliminate our need to sleep. This is not some far-off fantasy out of a “Star Trek” re-run. This is actually happening, courtesy of DARPA, also known as the Defense Department’s officially-sanctioned mad science division.

However, there’s nothing mad about their motivations. DARPA is in the business of developing obscenely-advanced technology to ensure that the United States Military remains the most technologically advanced military on the planet by an obscene margin. Part of that effort involves developing technology that creates soldiers that don’t have to sleep.

In the grand scheme of things, that’s one of the least weird projects they’ve pursued. This is a department that is researching flying submarines for crying out loud. As awesome/crazy as those concepts are, this potential breakthrough in sleep technology could have implications that go far beyond having soldiers that don’t require a nap.

According to Wired, DARPA’s years of mad science have culminated in the development of a spray that users would apply just like ordinary nasal spray. The spray contains a naturally-occurring brain hormone called Orexin A, which helps keep the brain in a state of alertness without the aid of heavy stimulants or copious amounts of coffee.

It’s somewhat crude in that it’s basically dumping chemicals into the brain and hoping for the best. That approach is not that different from those of other psychoactive drugs, which are fraught with all kinds of danger. Unlike other emerging technologies, though, this one is already happening. From here, it’s just a matter of refinement.

At the moment, the technology is basic and unrefined, but that’s how all technology starts. Just look at the models of old cell phones. That refinement will occur, though. There’s too much potential profit in it. Between truckers, grade-grubbing college students, and marathon gamers, there are a lot of people out there who would gladly pay to not have to sleep.

Depending on how much it costs, I would certainly jump at the chance to not feel so damn tired on a Monday morning. It would also give me more time and energy to write more sexy novels or explore more sexy issues on this blog. When sleep becomes optional and you have a lot of stuff you want to do, this sort of technology suddenly becomes invaluable.

I doubt I’m the only one whose life would invariably change due to this technology, and I’m not just talking about hardcore night owls. Think about all the people who work demanding, energy-sapping jobs. These jobs don’t just put a huge premium on sleep. They can be downright damaging. Take away the need to sleep and suddenly, these people can have a life again.

That, in many ways, is the biggest implication of this technology. Suddenly, that third of our lives that we spend sleeping becomes open to us. Human societies may vary wildly across time, space, and sexual practices, but they’re all bound by the same limits. People still need to sleep and rest. What happens to those societies when that changes?

It’s impossible to know, but we may find out soon enough. As we’ve seen before with other popular drugs, once a market is established, people build entirely new lifestyles around it. We saw it already with boner pills. This one may end up being even more groundbreaking and it doesn’t require an awkward conversation with your doctor.

While this is sure to enrich drug companies to no end, it’s also the first step in a much larger process of removing the burden of sleep. Other emerging technologies that I’ve mentioned, such as smart blood and brain implants, will take it a step further.

Theoretically, they could both rewire or augment our biology so that we never need sleep in the first place. There would be no need to take a drug. There would be no need to worry about ever being tired. It may even make it so that other people who have to sleep are pitied the same way we pity those who don’t have high-speed internet.

These kinds of advancements will already enhance so much of the human condition, from cognitive function to mental acuity to sexual prowess. Removing sleep from that equation gives those same enhanced humans even more time to flex their enhancements. It’s hard to know what people will do with that kind of time on their hands, but I imagine some of it will inspire a few sexy novels.

A society full of people who never need to sleep is completely unprecedented. Hell, a society where sleep is entirely optional is unprecedented as well. It wasn’t that long ago that society was at the mercy of the night. Even if you weren’t tired back then, you couldn’t do much when it was pitch black outside. Then, electric lighting came along and freed people to do more with their time.

When technology gives people an opportunity to work around the limits of nature, they generally take it. The consequences or implications are rarely clear, but given how little we think things through, I can’t imagine we’ll hesitate to make this technology part of our culture.

Time will tell. Money will be made. Entirely new lifestyles will emerge. It’s amazing to imagine what we’ll do with ourselves when sleep is no longer an issue. I hope it helps me write more sexy novels. I also hope it helps others live a richer life. Whenever it happens, I look forward to the day when beds are just used for sex and showing off fancy linens.

Filed under Sexy Future, Thought Experiment

Purging Bad Memories And The (Hidden) Price That Comes With It

Think about the most traumatic experience you’ve ever had. No, this isn’t another thought experiment, nor is it something I’ll put a sexy spin on. It’s an honest, but difficult question to contemplate. Some people don’t even need to contemplate it. Some trauma is so severe that simply asking the question is redundant.

Even if you accept, as I have argued, that the world is getting better and people are generally good, there is still a lot of suffering in this world. There are horrific wars throughout the world, extreme poverty, and gruesome crimes unfolding every day. The crimes themselves are awful, but it’s often the scars they leave on people, mentally and emotionally, that further amplifies the suffering.

Those scars can be pretty debilitating, even after the physical wounds heal. They often manifest as post-traumatic stress disorder, a terrible mental state that effectively locks someone into their scars. Wars, violence, abuse, and criminal victimization can create varying degrees of trauma, and coping with that trauma can be a never-ending struggle.

Now, here’s the part where I try to make this discussion less depressing. This is a blog that talks about sexy thoughts, sexy novels, and personal stories involving awkward boners. In general, I want my posts to inspire and, if possible, arouse in the sexiest way possible.

I don’t think it’s possible to make something like dealing with terrible trauma sexy, but it does present an opportunity to discuss something that might not just be a thought experiment within our lifetime. It boils down to one simple question.

“If you could purge traumatic memories from your mind, would you do it?”

If that question sounds familiar, then congratulations. You’ve probably seen one of Jim Carrey’s most underrated movies, “Eternal Sunshine of the Spotless Mind.” Granted, it wasn’t exactly as funny or memorable as “Ace Ventura: Pet Detective,” but it dealt with this question in ways nobody had dared, framing the purging of memories as a simple service for getting over a loss.

All three “Men In Black” movies streamlined that process even more with their trademark neuralyzer, a device that erases people’s memories of an incident in a simple flash. When you’re a super-secret government agency trying to hide aliens from the public, it’s kind of a necessity. However, its implications are much greater than simply making life easier for government agents.

Think back to that traumatic experience I mentioned earlier. In addition, think of the many traumatic experiences behind those who suffer from PTSD. All that suffering is built around the memories of those horrible moments. Whether it’s an atrocity in a war, severe child abuse, or a sexual assault, it’s the memory that locks that moment into the mind.

Now, imagine being able to purge that memory from your brain. In an instant, be it a flash by a neuralizer or the service offered in “Eternal Sunshine of the Spotless Mind,” that experience is gone. You didn’t just forget it. As far as your brain is concerned, it never happened.

It’s a concept whose larger implications “Eternal Sunshine of the Spotless Mind” avoids and “Men In Black” never expands upon. The ability to purge our memories of traumatic experiences has huge consequences, even if exploring them isn’t as entertaining as watching Will Smith fight aliens. It’s one thing to improve our memories. Actually manipulating them opens up a new world of complications, some of which we might not be ready to confront.

At the moment, we don’t have to because the technology isn’t there yet. While we have a fairly comprehensive understanding of how our brain forms memories, we currently lack the necessary tools to manipulate them. However, those tools are in development.

Once again, I’ll mention Neuralink and the advanced brain implants it’s hoping to use to augment human cognition. Given how often our brains frustrate us with our inability to keep up with the world or program a goddamn coffee maker, it’s a given that there will be a market for that. Part of that enhancement, though, will likely extend to memories.

It may even be among the early uses for the implants developed by companies like Neuralink. As I write this, PTSD plagues millions of people, many of them military veterans who experienced unspeakable horrors in a war zone. Given the inherent difficulties in treating PTSD, who wouldn’t opt for a better way?

Sure, it involves manipulating our brains, but talk to anyone who can’t sleep, work, or form functional relationships because of their trauma. Some of them would do brain surgery on themselves and accept all the risks that came with it. Some experiences are just that traumatic and I’m not just talking about the ones that involve wars and clowns.

It’s a tragic situation, but one that makes the idea of actually purging those memories from our minds more pressing. Before brain implants like Neuralink’s start enhancing our minds for the hell of it, they’ll focus on treating those who are sick. It happened with artificial limbs. It will likely happen with brain manipulation.

Due to the wars in Afghanistan and Iraq, we’re already dealing with a significant population suffering from PTSD. Since those wars show no signs of ending, that population will likely grow. Medical science has gotten better at helping soldiers recover from major injuries, but treatments for the brain are still lagging, so much so that governments are considering using MDMA, also known as Ecstasy, to treat PTSD.

Unlike a bullet wound or a broken bone, though, traumatic experiences don’t always heal. Our brain is wired to tie powerful emotions to powerful memories. That’s great for giving us fond memories of the food we eat, the sex we have, and the social bonds we create, but terrible when it comes to dealing with trauma.

In a sense, removing the memories completely may be the only way to actually cure PTSD and allow people to live fully functional lives. Given the incentives, the prevalence, and mankind’s innate ability to make awesome tools, this ability will likely emerge at some point, possibly in my lifetime.

That may be great for those who endure traumatic experiences, but it may come at a price, as all great advancements do. If we live in a world where trauma is so easy to treat and so easy to get rid of, then does that undermine the power of those experiences? Would we, as a species, become numb to those who experience trauma and those who inflict it?

Picture a scenario where someone commits a brutal rape, one that leaves another person so traumatized and scarred that it may haunt them until their dying days. Right now, we would all want that rapist punished to the fullest extent of the law. However, what if a simple brain implant removes that experience completely while simple medicine treats the wounds?

If the victim has no memory of the experience, no lingering pain, and suffers no ill effects for the rest of their life, then do we still treat the rapist with the same disdain? Right now, that’s an unconscionable question to answer. I’m sure there are those who want to strangle me through their computer screens just for asking it.

First, I apologize if that question causes someone significant distress, but it’s a question worth asking. Once we have the ability to undo all suffering caused by a crime, then will that affect our ability and desire to punish such crimes? No amount of Will Smith fighting aliens can detract from those implications.

At the moment, the technology doesn’t exist, but the trauma doesn’t stop. As decent, empathic human beings, we want to do everything in our power to stop such trauma and heal those wounds. Our efforts may get to a point where we can literally attack the source of that trauma. The questions still remain. What will the hidden cost be and can we stomach that cost?

Filed under Marriage and Relationships, Second Sexual Revolution, Sexy Future