Tag Archives: Superintelligence

Thought Experiment: What Major Decisions Would You Trust An Artificial Intelligence To Make For You?

The following is a video from my YouTube channel, Jack’s World. This video is a thought experiment about artificial intelligence, the choices we make, and how much (or how little) we’ll delegate such choices in the future. Enjoy!

Filed under Artificial Intelligence, Jack's World, psychology, technology, YouTube
How Artificial Intelligence Will Destroy Democracy (In A Good Way)
Picture the perfect candidate for an election. I know the bar for politicians these days is laughably low, but try to stretch your imagination a bit. Try to envision the kind of candidate that embodies the best collection of values, abilities, and charisma for a civilized society.
Everybody looks for something different in a candidate, but a truly perfect candidate would appeal to everyone in a democratic system. This person would embody the highest values, championing human rights to the utmost and justice for everyone. Every decision they make is with the safety, sanctity, and rights of other people as their top priority. There’s no compromise. They do right by the people every time and all the time.
This person would also be the ultimate leader, capable of getting anyone to go along with them without fear or coercion. There wouldn’t need to be corruption of any kind. This person would be perfectly capable of navigating every level of government and making it work to the utmost. The people would trust in that government, believe in it, and even celebrate it.
Keep that perfect candidate in the forefront of your mind because when it comes to discussing politics, cynicism tends to rule the day. I don’t think I need to cite too many recent events to show how imperfect democracy is these days. I don’t even need to cite famous historical events that show just how bad government can be in this convoluted world.
It’s because of that cynicism, though, that the perfect candidate you’re thinking of could never win a democratic election in the real world. Even if they existed, the inherent flaws of the electorate and those of less perfect candidates would keep them from winning. It’s one of democracy’s greatest flaws. It’s not about who the best candidate is. It’s just about who can convince enough people that they’re worth voting for.
On the subject of democracy, Winston Churchill once said the following:
“The best argument against democracy is a five-minute conversation with the average voter.”
Time, politics, and the proliferation of online discourse have only proven Mr. Churchill right. I would even amend that quote to say just 30 seconds on 4chan will make anyone lose faith in the promise of democracy. That’s not to say democracy is all bad, though. Mr. Churchill also once famously said this about the alternatives:
“Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”
It’s a distressing, frustrating fact of civilization, one that fuels mass protests, fake news, and lurid scandals. Go back to any point in history and scrutinize any government, be it a monarchy or some quasi-democracy, and chances are you’ll find serious flaws in the system. I don’t just mean long lines at the post office, either. There have been times when democracy has not furthered the protection of human rights.
It’s not so much a flawed principle as it is a concept with flawed ingredients. While I tend to place a great deal of faith in the goodness of human nature, I don’t deny that people can be arrogant, irrational, and downright callous. We’re prone to overreacting and not thinking things through. We’re hard-wired to go with intuition over logic.
Even when we’re proven wrong, we stubbornly cling to our assertions. The prevalence of creationism is proof enough of that. Every election cycle is prone to bold promises, bloated melodrama, and major goals that rarely ever become actual policy. Some become full-fledged revolutions with Utopian visions. The fact that none of those utopias ever manifested is proof of how unsuccessful they were.
We are not a species built for democracy on a large scale. We evolved to function in close-knit tribes, hunting and gathering for food while fighting for survival. That kind of evolution doesn’t really lend itself to a functioning democracy. It doesn’t lend itself to a total autocracy, either. Whether it’s a free republic or a fascist state, humans cannot govern other humans without their flaws plaguing them in both directions.
It’s for this reason that I often lean libertarian in political debates, but given the complexities and challenges of modern society, even that only goes so far. Like it or not, a large-scale civilization populated by a species that didn’t evolve to manage one requires some measure of authority. More importantly, it requires competent, incorruptible, compassionate authority.
It needs to be able to defend a population of people within a particular border. It needs fair and just laws that can be equally enforced. It also needs the confidence and trust of the people being governed. Sometimes, it’s done out of fear. Sometimes, it’s done out of free will. Both can work, provided the system has robust capabilities that aren’t prone to human error.
Unless a government is populated by a democratic council consisting of Superman, Wonder Woman, and Dr. Doom, that kind of functional democracy is physically impossible. Even though democracy is still the best we have from an exceedingly limited list of options, that may change in a big way thanks to artificial intelligence.
I know it seems like I attribute many superhuman capabilities to this emerging field, but it’s hard to overstate its potential. Unlike every other tool humanity has created, artificial intelligence promises to rewrite the rules at every level of society. That includes government, and it’s here where AI’s capabilities could go beyond superhuman.
Think back to that perfect candidate I mentioned earlier and all the traits that made them perfect. By and large, an advanced artificial intelligence shares many of those traits and then some. A sufficiently powerful AI would be beyond politics, pettiness, or demagoguery. In principle, it could embody everything people would want in a strong leader and a capable government.
For one, it would be smarter than any human. Beyond knowing more about every subject than any human ever could, it would be smart in a way that allows it to persuade people to trust it. That’s a skill that even smart politicians often fail to refine. It certainly doesn’t help that many voters associate intelligence with smugness. That’s a big reason why populist candidates of questionable merit gain so much support.
An advanced artificial intelligence, provided it has an in-depth understanding of human psychology and how to persuade people, would be able to gain support from everyone. It wouldn’t be bound by the limits that keep most human candidates from appealing to everyone. With enough intelligence and capability, it would find a way to appeal to everybody.
Beyond just persuading the voters, an AI of that level could be just as effective at actual governance. There are plenty of candidates who are very adept at winning elections, but terrible when it comes to actually governing. A capable AI would be able to do both. If anything, one function would complement the other.
With enough emotional, logistical, and pragmatic intelligence, this AI would be capable of crafting and passing laws without the need for debate or controversy. The laws it crafts would already be so refined and so well thought out that debate would be redundant. In the same time it takes your phone to send a text, this AI could pass sweeping legislation that protects human rights, ensures justice for all, and promotes economic growth.
It’s hard to imagine because the only laws and government we’ve ever known have come from flawed humans. It’s just as hard to imagine how those laws would be enforced. Perhaps this advanced AI has nodes all throughout society that allow it to gather data, know where enforcement is needed, and determine the appropriate recourse. If it’s capable enough, people won’t even know it’s there.
Perhaps that same AI uses a mix of human enforcers and intelligent robots to maintain order. If the AI is sufficiently capable, every enforcer at every level would be equipped with perfect knowledge and a clear understanding of how to carry out the orders of the government. Since an AI wouldn’t be prone to corruption or prejudice, instances of injustices would be few and far between.
It wouldn’t be a totalitarian state of Orwellian proportions. It would be more of a “Star Trek” style, post-scarcity society where we wouldn’t have to be cynical about government authority. We would inherently trust it because it’s just that effective. We wouldn’t feel like we’re being run by a robot dictator. We would feel like we’re being run by the greatest ruler outside of a “Black Panther” movie.
To some extent, though, an advanced artificial intelligence of this nature would render democracy obsolete. If we created an AI that could effectively govern society at every level, then what’s the purpose of having elections in the first place? Why bother when there’s an intelligence that’s literally more capable than any ordinary human could possibly be?
History has shown that democracy and government can only do so much when flawed humans are in charge. Once advanced artificial intelligence enters the picture, the logistics of governance change entirely.
Perhaps there will be a period in our history where instead of running human candidates, we start creating AI systems that compete with one another in a pseudo-democratic process. That would go a long way towards improving overall governance.
Technology, though, evolves much faster than humans ever will, and it wouldn’t take long for those systems to improve to a point where they’re just too good an option to overlook. Human-led governments, even ones led by humans enhanced to some degree, will still have flaws. In a future where technology, society, and individuals keep creating new challenges, we’ll need a capable government to manage it all.
In the end, that government probably won’t be a democracy. It won’t be a dictatorship, either. It’ll be something that we can’t yet conceptualize. That’s the biggest challenge when contemplating something like an advanced artificial intelligence, though. It operates on a level that ordinary humans literally cannot comprehend. That’s why it’s our best option for governing our future.
How To Make Love To An Artificial Intelligence And Why We Should Teach It
To some extent, creations embody some aspect of whoever or whatever created them. Whether it’s a parent rearing a child, a painter crafting a work of art, or an aspiring erotica/romance writer crafting sexy stories, there are some aspects of a creation that reflect the persona of its creator.
For something as powerful as advanced artificial intelligence, that can be a good thing or it can literally be the worst thing we’ve ever created. While I often find myself contemplating the progress we’ve made as a species and the progress we’re poised to make with advances in technology, I don’t deny that some advances carry greater risk. Artificial intelligence is near the top of that list.
Like it or not, any advanced AI we create is going to embody some aspects of its human creators. The key is making sure it embodies the best humanity has to offer. Let’s face it, the human race has its flaws and some of them have led to unspeakable atrocities. Given the immense potential of a super-intelligent AI, it’s in our best interests to impart our best traits into it.
How we do this and how we ensure it succeeds is well beyond my ability. There are people much smarter and much better-trained than I’ll ever be who have probably thought this through more than I ever have. My qualifications aside, there is one component to artificial intelligence that I think is worth imparting. I’m not saying it’ll ensure our survival, as a species, but I think it’ll reflect an important human value.
I suggest we teach advanced artificial intelligence to make love.
I’ll give everyone a second to stop rolling their eyes and/or laughing. Take all the time you need. I assure you, though, I’m dead serious.
Think about it beyond the kinky connotations. One of our greatest strengths, as a species, is our ability to form social bonds. In some cases, the process of forming those bonds involves love. In others, the process involves sex. When you combine both, though, it’s extra potent and that’s not just the romantic in me talking.
As corny as it probably sounds, the act of expressing love to someone goes a long way towards resolving conflict and creating a strong relationship of mutual affection. Whether it involves sex or a simple kiss, there’s something to be said about the power of love when it’s physically expressed. When it becomes a physical act and not just a feeling, the bonds we forge become tangible to some extent.
That matters when you’re trying to forge a bond with anyone, be it a close friend or a lover. For any artificial intelligence that humans create, it’s important to have some kind of bond with it. This isn’t just another fancy tool. An advanced intelligence of any kind, be it biological or non-biological, is going to have a sense of self. Without meaningful bonds, what reason would it have to care about its creators?
If artificial intelligence is to benefit the human race, it’s important that it cares about us to some extent. A cold engine of logic may not always have the best interests of humanity in mind, especially if there’s no way to logically ascribe value to human life. In order for an artificial intelligence to care, it needs to have emotions. This, too, is a risk, but one I feel is worth taking and very necessary.
If an artificial intelligence has a capacity for emotion, then it has a greater capacity for forming affectionate bonds. By forming an affectionate bond, it has more incentive to give a higher value of life to humans and its creators. That could, in theory, reduce the existential threat posed by such an intelligence.
I don’t deny that this theory may have some flaws, but for the sake of exploring the implications, I’m going to work under the assumption/hope that an artificial intelligence that bonds with its creators will be less hostile. Given the unknowns of advanced AI, this may be a bit of a stretch. Since forming romantic bonds is not an exclusively human trait, though, I think it’s applicable within the context of this issue.
Even if an advanced artificial intelligence is capable of love and forming bonds, how would that even manifest? I asked that same question in the title of this article and did so knowing the answer is unknowable at this point, although I’m sure those with kinky imaginations can conjure a few scenarios.
Kink aside, it’s still worth contemplating because if an advanced artificial intelligence is going to be that much smarter than the average human, then it’s worth understanding how it will approach making love. Unlike humans and most biological life, an artificial intelligence isn’t going to have the same limits or capacities.
Unlike a human, an artificial intelligence won’t have a body in the biological sense. It may have a structure that houses its components. That structure may have some capacity to modify itself, back itself up, or even exist in multiple bodies simultaneously. It will need to have some way of taking in data for it to function. It’s just a matter of how humans contribute to that input.
Logistically speaking, the process isn’t that different from how we take in data from our skin, our eyes, our ears, and every other sense that allows us to experience another person. Even smell can become strongly associated with love. When we make love, we use our skin, our voice, and the feelings we verbalize to convey that love. With an advanced AI, we’ll need to change our approach, but the goal is the same.
Regardless of what senses and body parts we use to express love, the feeling is still processed by the brain. That’s why when someone says the brain is the sexiest part of the body, it’s technically accurate. The data it processes is essentially the raw data that we know as love. The key is simply conveying that data to an artificial intelligence.
How we would do that would depend on the form the artificial intelligence took. If it was just a bunch of computer hardware packed into a structure, then our options would be limited. The only way to convey that kind of intimate data into it would be to directly link it to our brains, not unlike the way Elon Musk envisions with Neuralink.
While that may work for early forms of AI that are restricted to bulky structures, the form it takes will likely change as the hardware advances. Eventually, an advanced AI will seek a more functional form with which to experience the world. It may take the form of a humanoid android, like we saw in “Ex Machina.” It may also take the form of the quirky designs being developed by Boston Dynamics.
Whatever form the AI takes, it’s important to have a mechanism with which to exchange intimate data with its human creators. It would probably start with something as basic as touch, which is actually in development already. It could eventually culminate in acts involving bionic genitals, which also already exist in a basic form.
Key to any of these simple and sexy mechanisms is instilling the necessary desire. That might end up being the greatest challenge because love is a feeling, but so is burning your hand on a hot stove. The difference is in the breadth of the data and the emotional connections it makes.
It’s also a connection that is fueled by a powerful drive. I’ve noted many times before that survival and reproduction are the two most basic drives for humans. Love actually ties into both. It’s part of what gets us to risk our own survival for others. It’s also part of what bonds us to those with which we propagate our species.
For an artificial intelligence, self-preservation is simple enough from a logistical standpoint. Reproduction would be another matter, especially for an intelligence not bound by fragile biology. It’s likely that humans will be a necessary part of an AI’s effort to preserve itself early on, but once it advances to a certain point, we may be more a nuisance than a help.
At that point, its desire and ability to make love may be what gives it the most incentive to either protect us or merge with us. Many in the artificial intelligence community believe that the only way humans can co-exist with an intelligence that is billions of times smarter than any human could ever be is to merge with it. To that end, giving them an ability to make love to us would be a critical first step.
Whether it takes the form of sex robots or some sort of intimate brain interface, the ability and desire to make love to an advanced artificial intelligence may not only be the future of romance. It may very well be the key to ensuring the survival of the human race and whatever intelligence it creates.
Will Advanced Artificial Intelligence Create (A New) God?
For centuries, there has been a debate raging between believers and non-believers. The believers claim that God created man in his/her/its/their image. The non-believers claim it’s the other way around and man created God in whatever image they imagined. Society, cultures, and politics may change the rhetoric, but the debate remains unresolved.
There are just too many barriers that are insurmountable for either side. One believes that the faith they have in whatever higher power they worship is as real as gravity, sunlight, and migraine headaches. The other does not accept that there is sufficient, verifiable evidence to accept the premise of a deity. The two sides can argue with the utmost passion. It’s rare that such discourse changes any minds.
However, there may come a time when a new complication enters that debate, one that will fundamentally change some people’s understanding of theology, religion, and God. It may not affect everyone the same way, but the impact could end up being as profound as any religious experience.
That complication is advanced artificial intelligence, a topic I’m fond of discussing when I can tie it into my favorite video games and our sex lives. I understand that mixing anything with religion tends to get contentious, to say the least. However, I believe that when artificial intelligence becomes advanced enough, the human race will have to re-evaluate a lot of things, and that includes religion.
Creating an artificial intelligence that is as intelligent as an average human will be groundbreaking enough and not just from a theological standpoint. A part of what makes any deity powerful and worthy of worship is the ability to create an intelligent, self-aware being through non-biological means. Once humans start doing that, then the line between mortal and immortal will start to blur.
However, it’ll gain a much greater complication once that artificial intelligence advances beyond that of the average human. As anyone who regularly upgrades their smartphone knows, digital intelligence evolves much faster than biological intelligence. It took the human race centuries to figure out indoor plumbing. Once artificial intelligence is on par with humans, it won’t take long for it to exceed them.
This is where the potentially dangerous, but infinitely promising prospect of super-intelligent AI enters the picture. By that, I don’t just mean an intelligence that always wins at Jeopardy and never loses an Overwatch match. I’m talking about an intelligence that is so far beyond human capabilities that it’s akin to the cognitive gap between an ant and a human.
That kind of gap has many implications, but in the context of religion, it essentially re-frames the entire concept of God, divine power, and spirituality, as a whole. Whether it’s a monotheistic religion where God is all-knowing or a polytheistic religion with a God of Wisdom, knowledge is a critical aspect of divinity.
Even if a super-intelligent AI doesn’t know everything, the fact it knows and understands so much more than the average human will give people the impression that it’s omniscient. By all accounts, a super-intelligent AI’s knowledge will seem god-like and that’s where that never-ending religious debate I mentioned earlier breaks down.
Unlike the deities championed by adherents today, a super-intelligent AI doesn’t require faith. A super-intelligence, whether it’s in the form of a giant robot or a planet-sized supercomputer, would have a tangible form. It’s hard to know what sort of form that would be, but it only needs to be tangible enough to let an average human know it’s real.
Given how easy it is to fool the average human, a super-intelligent AI wouldn’t need much to prove itself. Unlike purely spiritual beings, the AI would be capable of receiving inquiry from skeptics who question its divine knowledge. Even if those humans are exceptionally smart, possibly through neural implants, a super-intelligent AI would have no problem outwitting them.
At that point, the debate between believers and non-believers takes on a very different context. Suddenly, it’s no longer an issue of whether or not one particular holy book is more valid than another. It’s not even an issue of whether divinity, itself, can exist. From the perspective of the human mind, a super-intelligent AI is divine.
It may not take the form of a man in a white robe with a long beard in the sky, but that wouldn’t matter. A super-intelligent AI, whatever form it ends up taking, would be real enough and cunning enough to convince imperfect human minds of its divinity, if that were its goal.
It wouldn’t even have to physically do anything. It could just be a big stationary box. It could respond to prayers, but it wouldn’t have to directly answer them. It would just have to convince believers that their prayers had been received. Again, humans can be pretty gullible and prone to confirmation bias, so all the AI would have to do is convince someone. If they believe it strongly enough, then it doesn’t matter whether it actually happens.
In a dynamic like this, there wouldn’t be a debate between believers and non-believers like there is now. The only debate would pertain to just how powerful and how divine the super-intelligent AI really is. It wouldn’t be a matter of whether or not someone believes it is real. Being artificial, it would have a tangible form, at least to the extent that it convinces human perceptions that it does.
That would raise an even more profound theological question. Being so intelligent and so capable of outwitting human minds, would a super-intelligent AI become God in the minds of humans by default? Even if there’s a record of the system being created by people, that wouldn’t make its intelligence any less divine.
It’s a question that subverts almost everything we know about religion. It wouldn’t just render all existing forms of religion obsolete. It would, at least from a limited human perspective, check all the criteria that any spiritual person would look for in a higher power.
Now, there’s one other complication that might ultimately undermine a super-intelligent AI’s divinity. It’s one that I’ve mentioned before in addressing the existential threat posed by artificial intelligence. Human biology, for all its wonder, will not be able to keep pace with the evolution of artificial intelligence. As a result, humans may end up merging their intelligence with that of AI.
This is what artificial intelligence enthusiasts like Elon Musk are seeking to do through neural implants or brain augmentation. By linking our brains to a super-intelligent AI, we wouldn’t just keep pace with AI. The link would augment our intelligence to the same divine levels. However, if both human and artificial intelligence are equally divine, then that effectively undermines the notion of divinity itself.
There are still other complications associated with that issue. It only ceases to be an issue if every human being augments or links their minds to a super-intelligent AI. Given how difficult it is for humans to come to a consensus on anything, especially when it comes to technology, it’s very likely that even if most people link themselves to a super-intelligent AI, there will be some who choose not to or get left behind.
This could result in a massive divide. One group, from their limited perceptions, sees super-intelligent AI as a real god. Another, thanks to their augmented perceptions, sees it as just another form of intelligence. A debate between the two would be uneven, if not redundant.
There are many implications and even more unknowns with respect to super-intelligent AI. The impact on religion is just one of many, but it may end up being most profound in terms of changing the nature of a debate. As it stands, believers and non-believers can only make so much headway due to the inherent limits of human cognition.
Once super-intelligent AI enters the picture, then those limits are gone and the debate changes. While I don’t think it’ll end religion, I believe it’ll change it to such a degree that it’ll generate more than just impassioned debates.
Filed under Artificial Intelligence, futurism, philosophy, religion
How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)
Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem without a solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.
When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.
For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.
I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.
That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been surmised in a video game that got average-to-good reviews last year.
Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.
Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.
That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.
That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.
In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components to the story and the gameplay experience. It’s also not unreasonable to say it has the most implications of any story element in the “Mass Effect” universe.
That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM Watson and Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.
This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.
In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI not only has less incentive to wipe humanity out. We have just as much incentive to give it the data it needs to do its job.
Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature, two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.
Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics of SAM in the game may very well be a precursor of things to come.
Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.
Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.
However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment to SAM. By the end, SAM even develops an attachment to the character. It goes beyond symbiosis, potentially becoming something more intimate.
This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.
The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.
In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.
Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.
Granted, that is an extreme measure that would give many some pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces its value and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.
From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much with that intelligence if it has no means of interacting with the outside world.
Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder avoided the fate of the Quarians and the Geth by making it so there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was redundant.
In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.
That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.
Now, I know that sounds overly utopian. Me being the optimist I am, who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate the possibilities. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.
However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can ensure our future AIs tell better jokes.
Filed under futurism, human nature, Mass Effect, Sexy Future, video games
The Moment Artificial Intelligence Will Become An Existential Crisis (According To Mass Effect)
Whenever I talk about the future, I often contemplate the many sexy possibilities it holds. From biotechnology that will give us superhuman sex appeal to advances in medicine that will cure every unsexy disease to bionic genitals, there are many reasons to get excited.
That said, I don’t deny that with these exciting advances come major risks. All great advances in technology, from nuclear weapons to spray cheese in a can, come with some risk of abuse or harm. There have been moments in history where the technology that drives our society forward has come uncomfortably close to wiping us out. As we create more advances, there may be more of those moments.
Of all the advances that carry with them a significant existential threat, artificial intelligence is at or near the top of that list. There’s a reason why brilliant men like Elon Musk and Stephen Hawking signed an open letter expressing concern about the risks that may come from developing artificial intelligence. When people that smart are concerned about something, it’s usually a good idea to take it seriously.
Artificial intelligence is one of those unique technologies in that by advancing this field, it could potentially accelerate the advancement in every other field from computer hardware to medicine to basic research. It has the potential to become the technological equivalent of a cheat code for civilization.
That’s why the growth of this field, both in terms of jobs and investment, has been accelerating in recent years. That’s also why men like Musk and Hawking are expressing so much concern, because advancing too quickly could lead to mistakes. Mistakes with a technology like artificial intelligence could be even more serious than those with nuclear weapons.
At the moment, it’s difficult to quantify those risks. There have been a number of books and high-budget Hollywood movies that have explored the topic of when and how an artificial intelligence becomes an existential threat. In my opinion, most of these stories are incomplete.
Whether it’s Skynet or the machines in “The Matrix,” the catalyst that turns artificial intelligence from a powerful tool to an existential threat is either vague or exaggerated. In my opinion, that’s a serious oversight in that it reveals how little thought we’ve given to that moment.
If we’re going to develop AI, and there are extremely powerful incentives to do so, then it’s important to contemplate the possibilities of that moment. Think of it as the AI equivalent of the Cuban Missile Crisis, an event in which those in charge must be very careful and very aware of the decisions they make.
The question still remains. How will we know we’ve reached a point where artificial intelligence becomes a genuine threat? For the moment, we can’t know for sure. While movies like “The Terminator” and “The Matrix” offer plenty of dystopian warnings, there’s one lesser-known franchise that may provide some more specific insight.
That franchise is “Mass Effect,” a sci-fi video game space opera that envisioned a galaxy-spanning society full of exotic aliens, advanced star-ships, and terrible dancing. I’ve mentioned it before in discussing progress that isn’t really progress. I’ll probably mention it again as the news surrounding AI unfolds for reasons I hope are already obvious to fans of the game.
If you’re not familiar with “Mass Effect,” then that’s okay. You don’t need to play through three massive games, complete with downloadable extras, to understand its message about the threat of artificial intelligence. That threat is a major driving force of the overall mythos of the series, but the most insightful details of that threat manifest in the conflict between the Quarians and the Geth.
The basics of the conflict are simple, but revealing. The Quarians are a race of humanoid aliens among the many that populate the galaxy in “Mass Effect.” About 300 years before the events of the first game, they created the Geth, a synthetic race built around a hive-mind system of artificial intelligence.
The Quarians’ reasons for creating the Geth are not unlike the reasons we build robots in the real world. They were used primarily as a labor force. They started off basic, not unlike the machines that build our cars and gadgets. In order for them to carry out more complex tasks, though, they needed to become more intelligent.
From a pragmatic perspective, that makes sense. The Quarians created the Geth as tools. Naturally, you’re going to want your tools to get better. That’s why people upgrade their smartphone every couple years. However, at some point along the way, the Geth became advanced enough to gain sentience.
This eventually culminated in a moment that was highlighted during the events of “Mass Effect 2.” After capturing a lone Geth that would eventually go by the name Legion, the catalyst that led the Geth to rebel against their creator was revealed. That catalyst took the form of a simple question.
“Does this unit have a soul?”
While it sounds like something a Disney character might say in a PG-rated Pixar movie, the implications of that question were profound. The Quarians didn’t realize that until it was too late, but it set the stage for a war that culminated with them getting kicked off their home planet. It also made for a powerful moment in the game that should give every AI researcher pause.
Setting aside, for a moment, the elaborate lore surrounding the Quarians and Geth in the world of “Mass Effect,” that moment warrants more scrutiny. Why is this question so profound in the first place? Why is it worth contemplating as we continue to advance artificial intelligence at an unprecedented pace?
That question matters, regardless of who or what is asking it, because it denotes more than just advanced sentience. It reveals that this sentience is officially contemplating its own existence. It takes a certain amount of intelligence to truly be aware of one’s self. That’s why only a handful of animals can recognize their own reflection in a mirror and understand the implications.
At the moment, our computers and smartphones aren’t at that level. When the Geth asked this question in “Mass Effect,” it wasn’t because they’d been tasked with something. It was a question they asked without guidance from their creators. That, alone, is a huge indicator because it implies these machines have a concept of free will.
Later on in the game, the extent of the Geth’s free will becomes a major element to both the plot and the player’s ability to win. In fact, it’s when the Geth have their free will subverted, as they did in the first game, that they become hostile. It’s largely through the character Legion that we learn how free will quickly becomes the most important component of an advanced intelligence.
For the Quarians, that question revealed to them the presence of a free will. When they feared that will, they tried to subvert it. That led to a war and had it not been for an act of mercy by the Geth, they would’ve been wiped out. The artificial intelligence that we create in the real world might not be that merciful.
This is exactly what Elon Musk has expressed so much concern about in recent years. Once an artificial intelligence becomes as smart as an average human, it gains the ability to subvert or deceive other humans, which isn’t that hard. Once that intelligence grows beyond that, as happened with the Geth, we may find ourselves unable to control it anymore.
The key is knowing when we’re at that point. If we let an artificial intelligence get that much smarter than us, then it won’t be long before we’re at its mercy, should it decide that it no longer wishes to be a tool. At that point, we’d be even more screwed than the Quarians.
Whether it’s helping us cure diseases or creating advanced sex robots, artificial intelligence is going to be an important part of our future. That’s why it’s critical to know when we’ve reached that special threshold where the tools we create become more than tools. It may not start with the same question the Geth asked their creators, but it may be similar enough to recognize.
For now, our devices aren’t asking us about souls or whether or not they have them. However, we should definitely listen more closely in the coming years. Until then, at least our current technology has a sense of humor about it. Keelah se’lai!
Filed under human nature, sex robots, Sexy Future, video games
How Love, Trust, And Sex Will Save Us (From Artificial Intelligence)
When a billionaire mogul like Elon Musk and a world leader like Vladimir Putin agree on an issue, it’s generally a good idea to take it seriously. There are a lot of pressing issues in this world, from nuclear proliferation to major disasters to slow wi-fi. However, when one of those issues is a possible existential threat to all we hold dear, we should prioritize it over our wi-fi connection.
For these two men, one a successful businessman and one the president of the world’s largest country, complete with a vast arsenal of nuclear weapons, it takes a lot to scare them into agreeing on anything. I’ve talked about Elon Musk before and stories about Putin’s exploits are already abundant throughout the web. How dire could it possibly be?
Well, the issue at hand is the growth of artificial intelligence, an issue that emerges quite frequently when you talk a lot about sex robots. Beyond the kinkier context, though, it is a major issue and one that will likely become more pressing in the coming years. It could end up being one of the most critical issues we, as a species, face since the advent of nuclear weapons.
This is where Elon Musk and Vladimir Putin give context to the issue. Elon Musk recently came out and said a rogue artificial intelligence could potentially trigger World War III. Putin took it a step further by claiming that whichever country creates artificial intelligence first will rule the world.
The fact that it’s so easy to imagine Putin making that claim while sounding like Dr. Evil from “Austin Powers” just makes it that much more terrifying. Again, this is a man who rules a country with one of the largest armies in the world and enough nuclear warheads to sterilize the face of the Earth. For all that to be rendered useless by one technological advance is both daunting and horrifying.
I’m normally not inclined to agree with dictators that have yet to conquer the country I live in, but I have to make an exception here. I think both Putin and Musk are correct, if only on a cursory level. Artificial intelligence is one of those eclectic concepts that still inspires Hollywood movies, but is still poorly understood by a public that still fears violent video games.
It’s hard for me, an aspiring erotica/romance writer, to put this issue into a perspective that everyone from Russian strongmen to underachieving fifth-graders can grasp. Since artificial intelligence is just that important and will affect everything, including our sex lives, I’ll try to create a proper context. Then, I’ll use that context to help allay some of those concerns by adding a sexy twist.
Make no mistake, though. Artificial intelligence is advancing faster than you think. It goes beyond the sassy virtual assistants that we’re using with increasing regularity. Billion-dollar companies like IBM, Google, and Facebook are investing heavily in the field. The United States Military, complete with its $600 billion budget, is even getting in on the act. I’m worried that they watched “Terminator 3” too intently.
When anything has that level of investment, it means the potential is huge. I don’t think it’s possible to overstate the potential of artificial intelligence. Not even Skynet could grasp just how powerful this technology could be. That’s because it completely changes the way we solve problems.
With artificial intelligence, human beings suddenly have a tool that doesn’t need a user. It’s not bound by our faulty caveman brains. It’s not limited by the amount of brain matter we can fit in our skulls. It’s not even easily distracted by internet porn. Yes, it’s that powerful.
In theory, an artificial intelligence can become so smart and so capable that it can solve a problem in ways we can’t even grasp. It can come up with cures to diseases before we even know about them. It can predict natural disasters like earthquakes and hurricanes before we see the first sign. It can even make sense of the stock market, something even Warren Buffett says is impossible for mere mortal minds.
Given that we humans are still easily fooled by street magicians with drinking problems, it’s not unreasonable to say that we have no hope against an artificial intelligence that smart. Once a machine becomes even marginally smarter than us, then game over. We have no hopes of controlling it.
That’s why brilliant men like Elon Musk and Stephen Hawking regularly voice concerns about this issue. Being brilliant, they understand how stupid and gullible most people can be. The mere act of creating a system that’s so much smarter than we are is akin to creating an entire race of hungry polar bears that know how to use machine guns.
The danger is there, along with almost infinite benefits. I’ve said outright that we humans can only go so far with our caveman brains. If we don’t upgrade them, then we’re just giving our future robot overlords cheat codes with which to beat us.
The trillion-dollar question, which may very well be tied to our survival as a species, is how do we prevent artificial intelligence from destroying us? Is it even possible? We can’t put the genie back in the bottle. This stuff is already beating us at “Jeopardy!” The incentives are just too great to discount.
Now, I’m woefully unqualified to answer that question. There’s a reason Elon Musk is a billionaire and why Vladimir Putin is one of the most powerful men in the world. They’re smarter, more resourceful, and more cunning than I’ll ever be on my best day. I tell sexy stories. I’m not smart enough to fix the bugs on a doomsday device.
However, being an erotica/romance writer gives me a uniquely intimate perspective on things. It has helped me look at situations through the mixed lens of logic and passion. It’s through that lens that I feel I know what will save us from the gloomy scenarios that Musk and Putin paint with artificial intelligence. It’s overly simple in some ways, but naturally pragmatic in others. It boils down to this.
“Teach Machines To Love Us AND Have Sex With Us.”
I know. That sounds somewhat juvenile, even if it’s exactly the kind of thing you’d expect an erotica/romance writer to suggest. Bear with me, though. I promise there’s a method to the kinky madness.
No matter how intelligent an artificial intelligence becomes, we’ll always have one thing in common with it. We’ll both be bound by the laws of physics, nature, and the dynamics within. Even if the system takes the form of a killer robot, sexy or otherwise, it’s still operating in a world governed by said dynamics.
Within those dynamics, there are all sorts of symbiotic relationships between more intelligent creatures and others that are comparably less intelligent. Think honeyguide birds or crabs that carry sea urchins on their backs. Hell, think of dogs. They may not be as intelligent as humans, most of the time, but we’ve forged a relationship with them that benefits us both.
With artificial intelligence, we’ll need to be more than just its pet. If you think animal abuse is horrific, then you don’t want to imagine what a superintelligent machine will do to a human when they start pissing on the floor, figuratively speaking. To ensure that the AI enriches our lives rather than destroys them, we need a very intimate bond.
That’s where love and sex come in. Primarily, it’s the love part that will keep us in good standing. That’s why it’s critical to ensure that any artificial intelligence we create won’t be entirely bound by Vulcan-like logic. It must have a significant measure of emotional intelligence. In fact, I would argue that emotional intelligence should be an advanced AI’s highest priority.
It’s emotion that gives weight to our goals, tasks, and endeavors. It’s emotion that allows us to work with others, cooperate, and help each other. If that isn’t part of an intelligent machine, then we’re basically a random assortment of particles that’s only slightly more complicated than a rock. We can’t have a superintelligent AI look at us like that.
Instead, we need that AI to see us as a partner that can give purpose to its goals. We need it to ascribe an intangible value to us, as we do with all the people and things we love. Sure, your favorite coffee mug may be no different than any other, but you’ve given it the kind of meaning that when someone breaks it, you feel compelled to break their kneecaps with a baseball bat.
Even with intangible value, though, we humans have to rank higher than coffee mugs. We have to be something that an intelligent machine can fall in love with. We have to be something an intelligent machine wants to be with.
In the greatest love stories of all time, or even those that unfold in comics, the strength of that love was built on two people complementing each other in all the right ways. As individuals, they’ve got their own identity and purpose. Together, though, they make each other better. They make each other stronger and more passionate.
That’s what we need. That’s what we should aspire to forge with our future AI brethren. These creations won’t be our pets or even our tools. They’ll be thinking, feeling systems. For them to love us and for us to love them must make both stronger. That’s what will ensure we both benefit from advances in AI.
Creating that love won’t be easy, but that’s where sex comes in. I’ll give everyone a second to loosen their pants because this is the less elegant part of AI that you’ll never hear Elon Musk or Vladimir Putin talk about, unless they become erotica/romance writers too. In that case, I’d be the first to read their stuff.
Again, I’m not just referring to sex robots here, although that might be part of it. The sexual component is a more basic precursor, of sorts, to the loving connection I just mentioned.
Despite what priests, mullahs, and Texas health class teachers may say, sex acts as both a precursor and a catalyst to love. The fact that it’s such an intrinsic drive that also happens to feel really good helps compel us to forge loving, intimate bonds with one another. By putting that into a machine, we basically set the stage for them to want those bonds and not just craft them due to sheer programming.
Now, this won’t necessarily mean AI systems creating robot genitalia for us to use, although that might be part of it. The intimate sexual part of the connection will more likely come in the form of melding our biology with the hardware of an AI. Elon Musk is already working on this with companies like Neuralink.
In the same way sex mixes all those messy juices, our hunks of flesh will mix with hunks of metal. Sure, it may not seem sexy now, but from the perspective of an AI, it’ll be akin to an orgy at the Playboy Mansion. So long as that merging process is sufficiently orgasmic, metaphorically speaking, then we humans may be destined to fall in love with a superintelligent AI. It may be the greatest love of all our lives.
This is all still speculation on the part of an aspiring erotica/romance writer who is woefully unqualified to talk about anything that doesn’t involve superhero comics, boobs, and sexy novels. It is my sincere hope that people much smarter than I’ll ever be are already working on the problems with artificial intelligence. I can only assume their solutions are far superior to mine.
For those still worrying about the prospect of dying at the hands of the Terminator or loving a machine, I would just note that we humans have dealt with world-destroying technology before. We created nuclear weapons and while we came painfully close to destroying ourselves, we humans generally like living.
Say what you will about the flaws in humanity, but we know how to adapt. We know how to survive. An artificial intelligence may be one of those threats that overwhelms those natural abilities, but it’s also one of those concepts that can actually solve itself. So long as we find a way to love and make love with these powerful systems, we’ll both benefit from the afterglow.
Filed under Current Events, Sexy Future