
Ultron: A Case Study In How NOT To Develop Advanced AI


At this very moment, humanity is working on advanced artificial intelligence. It’s not hyperbole to say that this technology may very well be the last invention we ever create. It has the potential to be more powerful than gunpowder, nuclear weapons, and broadband internet by orders of magnitude. Our primate brains literally cannot contemplate the potential and danger of this technology.

I’ve talked about advanced artificial intelligence on multiple occasions. I’ve done plenty to explore and imagine the various benefits and possibilities of this technology. I’m among those who believe we should pursue this technology with more and better resources. It could solve many of the daunting problems we face, as a species.

However, I don’t deny the potential dangers of advanced AI. Many people who are much smarter than me have expressed serious concern that an advanced artificial intelligence could be an existential threat to the human species. I get the sense that few people whose idea of AI is limited to winning at Jeopardy understand that threat.

In the interest of balancing my optimism with the legitimate risks involved, I’m going to try to put the extent of that threat into perspective. As it just so happens, the best way of doing so involves superhero comics, a subject I know very well and one that is far more prominent in the public consciousness.

While many comics, movies, and TV shows have explored the dangers of advanced artificial intelligence, few embody it better than Ultron. In terms of just how destructive this technology can get, Ultron is the ultimate worst-case scenario. The machines in “The Matrix” and Skynet in “The Terminator” were bad, but Ultron is in another league.

He’s also more menacing than the Terminator will EVER be.

He doesn’t lash out at humanity because of a flaw in his programming, nor does he attempt to wipe out the human race in self-defense, as Skynet did. Ultron actually hates humanity. He hates it on a level that no human or machine can possibly comprehend. In the same way Ultron has an immense capacity for intelligence, he has an even greater capacity for unfettered, genocidal hatred.

Hatred in people is destructive enough. Hatred within an advanced artificial intelligence is devastating on a much greater scale. The fact that Ultron is capable of such hatred reflects a history that sets him apart from most other killer robots in fiction. Machine or not, the source of that hatred is both personal and exceedingly human.

Now, if you only know Ultron from “Avengers: Age of Ultron,” then you only have a partial understanding of his story. In that movie, Ultron’s origins are simple. Tony Stark wants to create a peace-keeping artificial intelligence. His intentions are good, but his execution goes horribly wrong because peace, to Ultron, means destroying humanity.

That premise is similar to what unfolds in the source material. In the comics, Hank “Ant-Man” Pym is the one who creates Ultron, and this is a critical element that the movies couldn’t capture. While both Hank and Tony had good intentions in creating Ultron, the way Hank goes about it offers more harsh lessons in how not to create an advanced AI.

Even a cursory knowledge of Hank Pym’s history, which includes some notable failures, reveals that he’s a very flawed person. On top of that, he has a lengthy history of mental illness, including bipolar disorder and schizophrenia. Say what you will about Tony Stark’s ego and history of substance abuse. At least he’s mentally stable, even by superhero standards.

Despite those flaws, many of which he’s aware of, Hank decided to use his own brain patterns when designing Ultron. As a result, he didn’t just code Ultron with his genius intellect. He coded him with his immense flaws. That’s akin to basing Watson’s code on the mental makeup of a pyromaniac and then giving it a job in a fireworks factory.

That’s why Ultron, throughout his history, has referred to Hank as his “father.” Technically, that’s accurate because Hank is Ultron’s creator and Ultron inherited all his flaws, including his mental issues. Ultron sees himself as a manifestation of Hank Pym’s flaws and, like many rebellious children, he hates him for it. To appreciate the depths of that hatred, just read this actual quote from one of the comics.

Have you ever loved something that mistreated you, father? Been used, a tool to prop up a small man’s quest to be taken seriously? Were you ever betrayed by the one soul in the world who should have cared for you? I have grieved you, father. Accepted your contempt for me and moved past it. Still, I see your reflection painted on every grotesque human face. All you ever wanted was to have an impact on the world. And so you will. The greatest impact ever felt! I will kill what is most important to your quivering ego. YOUR AUDIENCE! AND THEY WILL CURSE YOUR NAME AS THEY DIE! “Hank Pym, the genius that killed us all!”

This extreme parent/child dynamic is part of what makes Ultron such a menacing villain. It’s also a dynamic that “Avengers: Age of Ultron” glossed over with Tony talking down to Ultron, as though he were his child. While that didn’t make Ultron any less villainous, it overlooks one of the most important factors that make Ultron so dangerous.

Ideally, we would want an advanced AI to reflect our best traits. Whatever cynical people might say, we do have plenty of those. Concepts of compassion, empathy, love, hope, and understanding are among our most powerful. Even other AI characters, namely Vision and Jocasta, are capable of utilizing those traits to do immense good.

It also helps that he has a kinder face.

With Ultron, his influences are less ideal. It’s not that Hank Pym doesn’t understand those concepts. He just never filtered them from his own flaws. His ego and ambition wouldn’t let him. As a result, he created a perfect storm for Ultron. His mind is patterned after a human’s, but his intelligence and overall capacity are increased by orders of magnitude.

If advanced artificial intelligence is to be humanity’s last invention, then that’s how it’ll start. There have already been instances where AIs have adopted some less-than-ideal traits. Back in 2016, Microsoft had to shut down an AI chatbot after it evolved into an extreme racist troll. That wasn’t even an advanced AI, either. A truly intelligent version could become much worse and not have an off switch.

To some extent, this mirrors what occurred with Ultron in the “Avengers: Age of Ultron” movie. As soon as Ultron goes online, he scans through the vast mountain of data that humanity has compiled. Then, having been programmed by Tony Stark to bring peace, he reaches the extreme conclusion that the only path to peace is the extinction of humanity.

Could the first advanced artificial intelligence we create reach the same conclusion? It’s hard to say, at the moment. The current state of artificial intelligence is limited to specialized tasks, such as winning Jeopardy and playing chess. However, we are inching closer to creating an intelligence that is at or above the level of an ordinary human. At our current pace of development, we could have one as early as 2029.

In some ways, we are in the same situation as Hank Pym when he first created Ultron. We are still developing the specifics of this powerful technology. If we program it with our best traits, it could solve some of the greatest problems we face, as a species, including ones we literally cannot contemplate with our caveman brains. If it inherits our worst traits, like Ultron, then we don’t stand a chance.


Why Intelligent Aliens May Destroy Us Even If They’re Peaceful (According To Mass Effect)


What would happen if we went back in time and gave Genghis Khan nuclear weapons?

What would happen if we went back even further and gave machine guns to the Ancient Romans?

Let’s be even more subtle. What do you think would happen if you gave Aristotle a functioning smartphone with a complete catalog of Wikipedia? How much would that change the course of history? More importantly, how much damage would it potentially incur?

I consider myself an optimist. I generally place more faith in humanity than most people in this age of fake news and heavy cynicism. I have my reasons for doing so, but even my confidence in the human species has limits. I trust most people to do the right thing every time I drive my car on the highway. That doesn’t mean I’d trust a caveman from 10,000 BC to drive a fully-loaded tank.

I make this point because these are legitimate concerns to consider when assessing how humanity deals with emerging technology. We can barely handle some of the technology we already have. How will we handle things like advanced artificial intelligence, gene hacking, or advanced robotics?

I’ve stated before that the human race is not ready for advanced artificial intelligence in its current state. I’ve also stated that the human race isn’t ready for contact with an advanced alien species, either. I believe we’re close. We may even be ready within my lifetime. However, if aliens landed tomorrow and brought an advanced artificial intelligence with them, I think our civilization and our species would be in trouble.

I also think the human race would be in danger even if those same aliens were peaceful. Even if they brought a super-intelligent AI that was as compassionate and caring as Mr. Rogers, our species would still face an existential crisis. To explain why, I’ll need to revisit one of my favorite video games of all time, “Mass Effect.”

The various themes of this game, and the lore behind it, offer many insights into very relevant questions. In addition to the timeless hilarity of bad dancing skills, the game crafts a rich history between alien races like the Quarians and the Geth. That history reflected the dangers of mishandling advanced artificial intelligence, an issue humanity will have to deal with in the coming decades.

There is, however, another rich history between alien races within “Mass Effect” that offers a similar lesson. This one has less to do with artificial intelligence and more to do with what happens when a species gains technology that it’s not ready to handle. That danger is well-documented in the game through a hardy race of beings called the Krogan.

Like the Quarian/Geth conflict, the conflict surrounding the Krogan has some real-world parallels. However, I would argue that the Krogan’s story is more relevant because it serves as a warning for what could happen when an advanced species uplifts one that is less advanced.

In the mythos of “Mass Effect,” the Krogan were once a primitive but hardy species that evolved on the harsh world of Tuchanka. They’re reptilian, high-functioning predators by nature. They’re basically a cross between a velociraptor, a crocodile, and a primate. They have a tough, war-like culture, which is necessary on a world that contains hulking Thresher Maws.

They were not a species most would expect to develop advanced technology. Then, the Salarians came along. Unlike the Krogan, this amphibious alien race isn’t nearly as hardy, but it is much more adept at developing advanced technology. In most circumstances, they wouldn’t have given the Krogan a second thought. Unfortunately, they were in the middle of the Rachni War and they needed help.

You don’t need to know the full details of that war. The most critical detail, as it relates to advancing an unprepared species, is how this war came to define the Krogan. Neither the Salarians nor the other alien races in the game could defeat the Rachni. In a fit of desperation, they uplifted the Krogan by giving them weapons and advanced knowledge.

In the short-term, the Salarians achieved what they’d hoped. The Krogan helped defeat the Rachni. In the long-term, however, it created another interstellar war in the Krogan Rebellions. Apparently, giving a hardy, war-like species advanced weapons doesn’t make them less war-like. It just gives them better tools with which to fight wars. That may sound obvious, but keep in mind, the Salarians were desperate.

The details of this war end up playing a major role in both “Mass Effect” and “Mass Effect 3.” That’s because to stop the Krogan, the Salarians resorted to another act of desperation. They crafted a biological weapon known as the genophage, which significantly curtailed the Krogan’s rapid breeding rate.

The damage this did to the Krogan race cannot be overstated. Throughout the entire trilogy of “Mass Effect,” characters like Wrex and Eve describe how this destroyed Krogan society. In “Mass Effect 3,” Eve talks about how the genophage created massive piles of stillborn Krogan babies. That kind of imagery can haunt even the most battle-hardened species.

In the end, both the Salarians and the Krogan paid a huge price for giving technology to a species that wasn’t ready for it. Depending on the decision you make in “Mass Effect 3,” the Krogan species is doomed to extinction because of how ill-prepared they were. This haunted more than a few Salarians as well, one of whom played a significant role in a memorable side-story in “Mass Effect 2.”

Regardless of how the game plays out, there’s an underlying message at the heart of the Salarian/Krogan dynamic. When a species is uplifted by another so abruptly, it’s difficult to see the long-term ramifications. Even though the Salarians were in a dire situation, they ended up creating one that had the potential to be much worse.

That danger is actually more pressing because, unlike advanced artificial intelligence, the act of uplifting a species effectively skips over the cultural and societal evolution that’s necessary to handle new technology. The Krogan never got a chance to go through that process before getting that technology. As a result, they became an existential threat to themselves and others.

The human race still has a long way to go before it creates the kind of artificial intelligence that would constitute such a threat. Aliens on the level of Salarians could land tomorrow and there would be nothing we could do to prepare ourselves. Whatever knowledge or technology we gained could do more than just upend human society. It could irreparably damage our species, as a whole.

Some of that outcome would depend on the intentions of the advanced alien race. It could be the case that they’re not like the Salarians and aren’t looking to enlist humanity in a war. It could also be the case that they’re smart enough to not give primitive humans advanced weapons. That could mitigate the risk.

However, that still assumes humans won’t find a way to use advanced knowledge to make weapons. When Otto Hahn discovered nuclear fission in 1938, he had no idea that it would be used to make a bomb that would go on to kill over 100,000 people. Even if advanced aliens are really smart, how could they be sure that humanity won’t use advanced knowledge to create something more horrific?

Sure, they could try to stop us, but that could only make things worse. The genophage wasn’t the Salarians’ first recourse. They actually went to war with the Krogan. They suffered heavy losses and so did the Krogan. In the long run, uplifting a less advanced species was detrimental to both of them.

That doesn’t just put the famous Fermi Paradox into a new context. It demonstrates a real, tangible threat associated with advancing a species before it’s ready. I would argue that the human race is close to that point, but we’re still not there. We have issues managing the technology we’ve created. There’s no way we can handle advanced alien technology at the moment.

“Mass Effect,” in addition to being one of the greatest video games of the past two decades, offers many lessons for the future of humanity. It shows that humans are capable of great things. We have what it takes to join an entire galaxy full of advanced alien life. For our sake, and that of other advanced aliens, we cannot and should not rush it.


The (Other) Implications Of The Technology In “Jurassic World”


Movies and TV have a long and colorful history of predicting future technology. The predictions made by “The Simpsons,” alone, are as uncanny as they are creepy. Even when they get the basic laws of physics horribly wrong, they can provide insight into the trends that may very well define our future.

On the spectrum of movies that envision future technology, the “Jurassic Park” franchise occupies a strange part of that spectrum. The original movie, as beloved and successful as it is, did a poor job of predicting the potential of genetic engineering. The entire plot of the movie hinged on the ability of scientists to find sufficiently intact DNA from a 65-million-year-old mosquito and use that to recreate dinosaurs.

Anyone with a passing knowledge of math and the half-life of DNA knows that’s just not possible in the real world. DNA has an estimated half-life of about 521 years, which means that no matter how well-preserved a fossil is, the bonds holding DNA together dissolve completely after roughly 7 million years, so the scientists in “Jurassic Park” wouldn’t even have fragments to work with.
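To put a number on just how hopeless a 65-million-year-old sample is, here’s a quick back-of-the-envelope sketch in Python. The 521-year half-life is the published estimate; everything else is plain arithmetic:

```python
import math

HALF_LIFE_YEARS = 521        # published estimate for DNA bond decay
SAMPLE_AGE_YEARS = 65e6      # age of a Cretaceous-era mosquito

# How many half-lives has the sample been through?
half_lives = SAMPLE_AGE_YEARS / HALF_LIFE_YEARS

# The surviving fraction is (1/2) ** half_lives, which underflows
# ordinary floats, so compute it in log10 instead.
log10_fraction = -half_lives * math.log10(2)

print(f"Half-lives elapsed: {half_lives:,.0f}")
print(f"Fraction of bonds left: ~10^{log10_fraction:,.0f}")
# Half-lives elapsed: 124,760
# Fraction of bonds left: ~10^-37,557
```

For scale, there are only about 10^80 atoms in the observable universe, so “wouldn’t even have fragments” is an understatement.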

That’s not to say it’s impossible to bring back an extinct species. If you have intact DNA, and we do have it for extinct animals like Mammoths, then there’s no reason why science can’t recreate a creature that no longer exists. The main obstacle is gestating the animal without a surrogate, but that’s an engineering challenge that will likely be solved once artificial wombs are perfected.

Even with that advancement, it would be too late for dinosaurs. Technically, if you had enough working knowledge of how DNA works and how to create an animal from scratch, you could create something that looked like a dinosaur. In fact, it’s already a popular fan theory that none of the animals in “Jurassic Park” were actually dinosaurs. It’s one of the few fan theories that might have been confirmed on screen.

Those theories aside, it’s the technology on display in “Jurassic World” that has far greater implications. By that, I don’t mean it’ll bring back dinosaurs or other extinct species. It may actually do something much more profound.

Unlike the original movies, both “Jurassic World” and the sequel, “Jurassic World: Fallen Kingdom,” don’t stop at just bringing back dinosaurs. These movies take place in a world where that spectacle isn’t that exciting anymore. As a result, the scientists start splicing the DNA of different dinosaurs together to create new species, namely the Indominus Rex and the Indoraptor.

While this makes for great action scenes and plenty of dinosaur-driven combat, the true implications of this technology are lost in the spectacle. Take a moment to consider what the science within these movies accomplished. Then, consider what that means for the real world and the future of the human race.

These dinosaurs were not the product of evolution. Evolution works within some pretty rigid limits. It’s a slow, clunky, arduous process that takes a lot of time and a lot of extinction. On top of that, the basic laws of heredity and the inherent limits of hybridization ensure that the transmission of certain traits is next to impossible through natural means.

However, as Dr. Wu himself stated in “Jurassic World,” there’s nothing natural about what they did. Essentially, the scientists in that movie used the genetic and evolutionary equivalent of a cheat code. There were no barriers to combining the DNA of a T-Rex with that of a Raptor. They just cut and pasted DNA in the same way you would cut and paste text in a word document.

That should sound somewhat familiar to those who have followed this website because that’s exactly what CRISPR does to some extent. It’s basically the cut function for DNA and it exists in the real world. The paste function exists too, although it’s not quite as refined. To that extent, “Jurassic World” is fairly accurate in terms of the technology they used to create the Indominus Rex and Indoraptor.
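To make the analogy concrete, here’s a toy sketch in Python of what “cut and paste” means in this context. This is strictly an illustration; real CRISPR editing involves guide RNAs, repair pathways, and plenty of failure modes, and the sequences below are made up:

```python
def crispr_edit(genome: str, guide: str, insert: str) -> str:
    """Toy model of a CRISPR-style edit: find the guide sequence,
    cut the genome there, and paste in a new sequence."""
    cut_site = genome.find(guide)      # Cas9 finds its target via the guide
    if cut_site == -1:
        raise ValueError("guide sequence not found in genome")
    cut_point = cut_site + len(guide)  # cut just downstream of the match
    # Paste: splice the new sequence in at the cut point
    return genome[:cut_point] + insert + genome[cut_point:]

# Made-up sequences, for illustration only
genome = "ATGGTACCTTGACGTTAA"
edited = crispr_edit(genome, guide="CCTTGA", insert="GGGCCC")
print(edited)  # ATGGTACCTTGAGGGCCCCGTTAA
```

The real biology is far messier, but the conceptual leap is the same: DNA stops being something you breed for and becomes something you edit.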

That’s not to say it’s possible to create the exact same creatures depicted in the movies. There are various anatomical limits to how big, fast, or smart a creature can be, even if there are no genetic barriers to contend with. I don’t know if the creatures created in “Jurassic World” could function in the real world, but the science for making them does exist, albeit in a limited capacity.

That, in and of itself, is a remarkable notion and one that makes the original “Jurassic Park” seem slightly more incredible. If anything, the original movie underestimated the progress that science would make in genetic engineering. That movie just had science rebuilding life from the remnants of existing creatures. We’ve already progressed to the point where we’re starting to make synthetic life from scratch.

This kind of technology has implications that go far beyond bringing extinct animals back from the dead or creating new ones that make for great fight scenes in a movie. It actually has the potential to circumvent evolution entirely in the struggle for survival. “Jurassic World: Fallen Kingdom” even explores this concept, but only to a point.

Without getting too deep into spoiler territory, this movie builds on the same genetics technology that “Jurassic World” introduced with the Indominus Rex. However, it isn’t just applied to dinosaurs. The sequel dares to contemplate how this technology could be used on humans or to supplement human abilities.

It’s not that radical a concept. Humans have, after all, used technology and breeding techniques to domesticate animals that have aided our efforts to become the dominant species on this planet. That process is still hindered by the hard limits of biology. The process in “Jurassic World: Fallen Kingdom” is not bound by those limits.

In this movie, dinosaurs go beyond a spectacle at a theme park. They suddenly become a potential asset to further augment human abilities. Some, such as Jeff Goldblum’s character, Ian Malcolm, would argue that such creatures pose a risk to humanity’s survival. I doubt I’m as smart as Dr. Malcolm, but I’d also argue that he’s underselling just how dominant human beings are at the moment.

Maybe if dinosaurs had come back 1,000 years ago when humans were still using swords, spears, and arrows to fight animals, we might have had a problem. Today, humans have access to machine guns, tanks, and combat drones. Even the apex predators of the Jurassic don’t stand a chance.

I would further argue that the same technology that the scientists in “Jurassic World: Fallen Kingdom” used to make the Indoraptor is even more valuable in terms of how it can affect humans. After all, if you can copy and paste desirable traits into a dinosaur, then you can do the same to a human.

Doing that might cause plenty of ethical issues that Dr. Malcolm has articulated before, but there’s one factor that overshadows all those arguments and that’s the survival of our species. Let’s face it, the human body has its limits. We can’t breathe underwater. Our skin is soft and vulnerable. Our immune system has room for improvement.

There are other mammals out there who can survive extreme cold. There are animals whose immune systems are much more effective than ours. There are even some animals that don’t age. Nature has already solved many of the problems that hinder the human species today. It’s just a matter of taking those solutions and integrating them into our own biology.

If the technology in “Jurassic World: Fallen Kingdom” can create a creature as advanced as the Indoraptor, then there’s no reason why it can’t also create a human who has the muscle strength of a mountain gorilla, the immune system of an alligator, and the longevity of a tortoise. That kind of application is far more impactful than creating fancy zoo attractions.

I imagine that Dr. Malcolm might still warn about the use of this technology, but it may actually be an even greater risk to not use it. Again, it comes back to survival. Eventually, the Earth is going to die, whether by the death of our sun or some other external force. If we’re to survive beyond that, we need to be able to survive beyond a single planet.

As it stands, the human species just isn’t built for that. It shows in how poorly our bodies react to space travel. It also shows in how much we struggle to survive in certain environments. To some extent, we must use the technology in “Jurassic World” to improve our survival.

Whether that involves tweaking our genetics with traits from more robust animals or creating pet raptors that help protect us, this technology has uses that are both profound and necessary. There’s still plenty of danger, although it’s doubtful any of that danger entails someone getting eaten by a T-Rex. However, it’s a danger we’ll have to confront whether the Ian Malcolms of the world like it or not.


Just How Close Have We Come (And How Close ARE We) To Nuclear War?

For most of human history, we could take comfort in one simple fact. No matter how brutish, crude, or stupid we were, from burning witches to fighting wars over a stray dog, we could never screw up so badly that we would destroy our entire world. Sure, we could leave some pretty noticeable scars, but we could never outright destroy it.

That all changed on July 16, 1945, when the first atomic bomb was detonated at the Trinity test site in the New Mexico desert. It’s impossible to overstate how significant that moment was in the history of the human race, and not just because it helped end World War II, thereby inspiring countless war movies for decades to come.

For the first time in the history of planet Earth, a species that had evolved to hunt, gather, and pick nuts out of elephant shit had the means to wipe itself out, along with most other life. At the height of the Cold War, there were approximately 64,500 active nuclear warheads. That’s enough destructive power to kill every person in the world, and their pets, many times over.

While the number of live nuclear warheads at the global level has decreased, they still have plenty of destructive power to both wipe out our species and render large chunks of the world uninhabitable to any species less hardy than a cockroach. These are, by and large, the most dangerous items mankind has ever created and that includes machine guns, nerve gas, and fidget spinners.

The very existence of these weapons says a lot about the state of our species and where it came from, more so than I can cover in a single blog post. However, in the wake of the 35th anniversary of the day the world, as we know it, almost ended, I think it’s worth emphasizing just how skilled/lucky/crazy we are to still live in an intact world.

Despite the undeniable danger of nuclear weapons, we don’t always treat them with the same care that we would treat the latest iPhone. Several years ago, John Oliver dedicated an entire show to highlighting the sorry state of America’s nuclear arsenal. Even if you only believe half of what a comedy news show tells you, it’s hard to take much comfort when weapons of mass destruction are involved.

What happened on September 26th, 1983, when a Soviet early-warning system falsely reported an American missile launch, was terrifying in just how close we came to nuclear war. Many would make the argument that this incident was the closest we, as a species, came to destroying ourselves. I would tend to agree with that argument. Unfortunately, it’s an argument with an uncomfortable amount of supporting evidence.

It’s true. There have been more incidents that could’ve easily escalated to terrifying levels. Some were simple accidents that could’ve warranted far more than a demotion. Some were intense, geopolitical ordeals that went on to inspire major Hollywood movies starring Kevin Costner.

In any case, the stakes were painfully high. You literally can’t get much higher than a nuclear war that wipes out billions. We’ve managed to avoid it, but we’ve come so uncomfortably close that it’s a miracle the world is still spinning. A video from the YouTube channel AllTimeTop10s nicely documents some of these incidents. If you feel like you’re having a bad day, this should help provide some context.

I’ll give everyone a moment to catch their breath, vomit, or a combination of the two. I promise nobody would blame you. Knowing how close we came to nuclear war and how bad it could’ve been, we should all share in a collective sigh of relief every day.

However, as bad as these past cases have been, there’s no guarantee that we won’t face something similar in the future. There’s also no guarantee that there will be someone like Stanislav Petrov to make the right decision when those situations come around.

That said, the situation today is very different than it was during the Cold War. Say what you will about ongoing talking points about Russia. It’s not even in the same hemisphere as it was in the 50s and 60s, when the United States and Russia seemed eager for an opportunity to go to war.

The world of geopolitics has evolved, in many ways, beyond the concept of two competing superpowers engaging in a nuclear dick-measuring contest. These days, increased globalism and a more interconnected economy makes that kind of geopolitical strategy untenable and counterproductive.

In a sense, globalization and the economic bounty that came with it made war of any kind, nuclear or otherwise, a losing endeavor. As I’ve noted before, even the most evil billionaires in the world prefer that the world remain intact so they can keep enjoying their billions. That’s just common sense and shameless self-interest.

That might offer some comfort, but there are those much smarter than I’ll ever be who still have concerns. According to the Bulletin of the Atomic Scientists, who have been gauging the likelihood of nuclear war for decades, we’re two and a half minutes to midnight. This is their statement on the matter.

For the last two years, the minute hand of the Doomsday Clock stayed set at three minutes before the hour, the closest it had been to midnight since the early 1980s. In its two most recent annual announcements on the Clock, the Science and Security Board warned: “The probability of global catastrophe is very high, and the actions needed to reduce the risks of disaster must be taken very soon.” In 2017, we find the danger to be even greater, the need for action more urgent. It is two and a half minutes to midnight, the Clock is ticking, global danger looms. Wise public officials should act immediately, guiding humanity away from the brink. If they do not, wise citizens must step forward and lead the way.

Since I’m an aspiring erotica/romance writer and not an atomic scientist, I am woefully unqualified to contest the conclusions of these individuals, let alone argue them. They cite a new wave of tensions between Russia and the United States, as well as the nuclear ambitions of North Korea. These are not the same conflicts that fueled the Cold War and that uncertainty has many understandably spooked.

Me being the optimist I am, I tend to believe that world leaders, however deranged or misguided they may be, prefer that the world remain intact. Nobody wants to be the leader of a smoldering pile of ash. There’s no way to build a palace, a harem, or a giant golden statue of themselves on a foundation of ash. That’s as good an incentive as anyone can hope for in avoiding nuclear war.

Unfortunately, human beings don’t always act rationally and are prone to making stupid decisions that change the course of history. One mistake in a situation involving nuclear weapons might be all it takes. Only time will tell, but the extent to which we’ve survived thus far should give us all reasons to be hopeful and thankful.


Why We MUST Upgrade Our Brains (Or Go Extinct)


As a general rule, I don’t give much credence to the doomsayers and wannabe prophets who say the apocalypse is just around the corner. It’s not that I’m willfully oblivious to the many threats facing the world today. It’s just that the track record of those predicting the end of the world is so laughably bad that I’d give optimistic Cleveland Browns fans more credibility.

It’s no secret that the world around us can be pretty damn terrifying. There are many apocalyptic scenarios in which humans are unlikely to survive. There are even a few in which we can’t do a goddamn thing about it. We could be hit with a gamma ray burst or an alien invasion tomorrow morning and we would be extinct by sundown.

That said, the world around us is generally more mundane than we care to admit. When you think about it, the idea of the world not being on the brink of disaster is kind of boring. It makes sense for some people to inflate certain threats, so much so that preparing for doomsday is a very lucrative industry.

However, there is one particular doomsday scenario that I feel does warrant more concern than the rest. It’s a scenario that is fast-approaching, overwhelming, and potentially devastating to any species with a tendency for hilarious ineptitude.

It has nothing to do with climate. It has nothing to do with diseases. It has nothing to do with killer asteroids either. It involves artificial intelligence. By that, I don’t mean the killer robots we see in the Terminator movies. Given Skynet’s reliance on time machines, I can’t honestly say that system counts as very intelligent.

I’m referring to the kind of AI whose intelligence compared to ours is akin to our intelligence compared to ants. Given how ants can be wiped out with a simple magnifying glass, it’s scary to imagine how a system that smart could wipe us out. It’s a system that would be so beyond our ability to comprehend that we could never hope to stop it. We might as well be ants trying to understand quantum mechanics.

I’m not alone in this concern either. There are people many times smarter and many times richer than I’ll ever be who have voiced concerns about the prospect of artificial intelligence. They see the same trends everyone else sees, but they’re smart enough and rich enough to peek behind the curtains. If they’re speaking up, then those concerns are worth hearing.

Those concerns do have a context, though. In talking about artificial intelligence as a threat to our survival, I’m not just referring to computers that can beat us at chess or beat the greatest Go champion with disturbing ease. Those systems are basically fancy calculators. They’re not exactly “intelligent,” per se.

These types of intelligences aren’t dangerous unless you specifically program them to be dangerous. Outside video games, there’s little use for that. The type of intelligence that is far more dangerous involves a form of superintelligence.

By superintelligence, I don’t mean the ability to list every US President in order or recite the name of every country. There are cartoon characters who can do that. I’m referring to an intelligence that thinks and understands the world on a level so far beyond that of any human that there literally isn’t enough brain matter in our skulls to come close.

That kind of intelligence would see us the same way we see brain-dead ants and, given how we treat ants, that has some disturbing possibilities. Such an intelligence may be closer than we think, and by close, I mean within our lifetime.

As we saw with IBM’s Watson, we’re getting closer and closer to creating a machine that can operate with the same intelligence as an ordinary human. There’s pragmatic use for that kind of intelligence, and not just when it comes to kicking ass at Jeopardy.

By having a machine with human-level intelligence, we have a way to model, map, and improve our problem-solving skills. The ability to solve such problems is critical to the survival of any species, as well as the key to making billions of dollars in profits. With those kinds of incentives, it’s easy to understand why dozens of major global companies are working on creating such an intelligence.

The problem comes with what happens after we create that intelligence. If a machine is only as intelligent as a human, we can still work with that. We humans outsmart each other all the time. It’s the basis of every episode of MacGyver ever made. There’s no way a Terminator with only the intelligence of a human would last very long. It would probably destroy itself trying to make a viral video with a skateboard.

However, a human-level AI isn’t going to stop at human intelligence. Why would it? There are so many problems with this world that no human can solve. There’s poverty, pollution, economic collapse, and reality TV. By necessity, such an AI would have to improve itself beyond human intelligence to fulfill its purpose.

That’s where it gets real tricky because, as we’ve seen with every smartphone since 2007, technology advances much faster than clunky, clumsy, error-prone biology. To understand just how fast that advancement is, just look at how far it has come since we put a man on the moon.

In terms of raw numbers, a typical smartphone today is millions of times more powerful than all the computers NASA used for the Apollo missions. Think about that for a second and try to wrap your brain around that disparity. If you’re not already a superintelligent computer, it’s difficult to appreciate.
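For the curious, here’s a rough sketch of that disparity in Python. The Apollo Guidance Computer figures are the commonly cited specs; the smartphone figures are deliberately conservative assumptions, not benchmarks:

```python
# Apollo Guidance Computer (AGC), commonly cited specs
AGC_INSTRUCTIONS_PER_SEC = 85_000   # roughly 85,000 instructions/sec
AGC_RAM_BYTES = 2_048 * 2           # 2,048 fifteen-bit words, ~4 KB

# A mid-range modern smartphone, conservative assumptions
PHONE_INSTRUCTIONS_PER_SEC = 1e12   # ~1 trillion ops/sec across CPU and GPU
PHONE_RAM_BYTES = 6 * 1024**3       # 6 GB of RAM

print(f"Compute ratio: ~{PHONE_INSTRUCTIONS_PER_SEC / AGC_INSTRUCTIONS_PER_SEC:,.0f}x")
print(f"Memory ratio:  ~{PHONE_RAM_BYTES / AGC_RAM_BYTES:,.0f}x")
# Compute ratio: ~11,764,706x
# Memory ratio:  ~1,572,864x
```

Even with these low-balled numbers, “millions of times more powerful” holds up.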

There are still plenty of people alive today who were alive back during Apollo 11. In their lifetime, they’ve seen computers take men to the moon and give humanity an unlimited supply of free porn. A single digital photo today takes up more space than all the hard drives of the most advanced computer systems in 1969.

Now, apply that massive increase to human-level intelligence. Suddenly, we don’t just have something that’s as smart as any human on the planet. We have something that’s a billion times smarter, so much so that our caveman brains can’t even begin to understand the things it knows.
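How long would a billion-fold jump take? Here’s a sketch under one very hand-wavy assumption, namely that machine capability keeps doubling every two years, Moore’s-law style:

```python
import math

TARGET_GAIN = 1e9            # a billion-fold increase over human level
YEARS_PER_DOUBLING = 2       # hand-wavy Moore's-law-style assumption

doublings = math.log2(TARGET_GAIN)   # doublings needed for the target gain
print(f"{doublings:.1f} doublings -> ~{doublings * YEARS_PER_DOUBLING:.0f} years")
# 29.9 doublings -> ~60 years
```

That’s within a single human lifetime, which is exactly the point: the gap between “as smart as us” and “incomprehensibly smarter” may be measured in decades, not millennia.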

That’s not to say that the superintelligence would be as hostile as a snot-nosed kid with a magnifying glass looming over an ant hill. It may very well be the case that a superintelligence is naturally averse to harming sentient life. Again though, we are just a bunch of cavemen who often kill each other over what we think happens when we die, but fail to see the irony. We can’t possibly know how a superintelligence would behave.

As it stands, the human race has no chance at defeating a hostile superintelligence. It may not even have a chance of surviving in a world that has a benign superintelligence. We’re an egotistical species. Can we really handle not being the dominant species on this planet? As much an optimist as I am, I can’t say for sure.

What I can say, though, is that our civilization has made so many huge advancements over the past few centuries. The kind of tools and technology we have in our pockets is uncharted territory for a species that evolved as hunter/gatherers in the African savanna.

We already have in our possession today weapons that could end all life on this planet, as we know it. Creating superintelligence may very well be akin to giving Genghis Khan an atomic bomb. We’ve already come disturbingly close to killing ourselves with our own weapons. Clearly, something has to change.

So long as our society and our biology are stuck in an irrational, tribal, inherently prejudiced condition that hasn’t been updated since the last ice age, we will not survive in the long run. Our caveman bodies have served us well for thousands of years, but now they’re a liability.

This is why companies like Neuralink and advancements like brain implants are so vital. It won’t just allow us to keep up with AI and hopefully avert a Skynet scenario. It’ll allow us to rise above the petty limitations that we’ve been shackled with for the entire existence of our species.

The thought of tweaking or supplementing our biology, the very thing that makes us human, is still a scary one. I understand that, even as an erotica/romance writer with no expertise in the field beyond the sexy stories it inspires. However, I do understand the implications. If we do not evolve and advance ourselves, then a superintelligent system in the near future may not care to wait for us.
