
Just How Close Have We Come (And How Close ARE We) To Nuclear War?

For most of human history, we could take comfort in one simple fact. No matter how brutish, crude, or stupid we were, from burning witches to fighting wars over a stray dog, we could never screw up so badly that we would destroy our entire world. Sure, we could leave some pretty noticeable scars, but we could never outright destroy it.

That all changed on July 16, 1945, when the first atomic bomb was detonated in the New Mexico desert near Alamogordo. It’s impossible to overstate how significant that moment was in the history of the human race, and not just because it helped end World War II, thereby inspiring countless war movies for decades to come.

For the first time in the history of planet Earth, a species that had evolved to hunt, gather, and pick nuts out of elephant shit had the means to wipe itself out, along with most other life. At the height of the Cold War, there were approximately 64,500 active nuclear warheads. That’s enough destructive power to kill every person in the world, and their pets, many times over.

While the number of live nuclear warheads at the global level has decreased, they still have plenty of destructive power to both wipe out our species and render large chunks of the world uninhabitable to any species less hardy than a cockroach. These are, by and large, the most dangerous items mankind has ever created and that includes machine guns, nerve gas, and fidget spinners.

The very existence of these weapons says a lot about the state of our species and where it came from, more so than I can cover in a single blog post. However, in the wake of the 35th anniversary of the day when the world, as we know it, almost ended, I think it’s worth emphasizing just how skilled/lucky/crazy we are to still live in an intact world.

Despite the undeniable danger of nuclear weapons, we don’t always treat them with the same care that we would treat the latest iPhone. Several years ago, John Oliver dedicated an entire show to highlighting the sorry state of America’s nuclear arsenal. Even if you only believe half of what a comedy news show tells you, it’s hard to take much comfort when weapons of mass destruction are involved.

What happened on September 26th, 1983 was terrifying in just how close it brought us to nuclear war. Many would make the argument that this incident was the closest we, as a species, have come to destroying ourselves. I would tend to agree with that argument. Unfortunately, it’s one of those arguments with an uncomfortable wealth of supporting details.

It’s true. There have been other incidents that could’ve easily escalated to terrifying levels. Some were simple accidents that could’ve warranted far more than a demotion. Some were intense geopolitical ordeals that went on to inspire major Hollywood movies starring Kevin Costner.

In any case, the stakes were painfully high. You literally can’t get much higher than a nuclear war that wipes out billions. We’ve managed to avoid it, but we’ve come so uncomfortably close that it’s a miracle the world is still spinning. A video from the YouTube channel AllTimeTop10s nicely documents some of these incidents. If you feel like you’re having a bad day, this should help provide some context.

I’ll give everyone a moment to catch their breath, vomit, or a combination of the two. I promise nobody would blame you. Knowing how close we came to nuclear war and how bad it could’ve been, we should all share in a collective sigh of relief every day.

However, as bad as these past cases have been, there’s no guarantee that we won’t face something similar in the future. There’s also no guarantee that there will be someone like Stanislav Petrov to make the right decision when those situations come around.

That said, the situation today is very different than it was during the Cold War. Say what you will about the ongoing talking points about Russia. It’s not even in the same hemisphere as it was in the 50s and 60s, when the United States and the Soviet Union seemed eager for an opportunity to go to war.

The world of geopolitics has evolved, in many ways, beyond the concept of two competing superpowers engaging in a nuclear dick-measuring contest. These days, increased globalization and a more interconnected economy make that kind of geopolitical strategy untenable and counterproductive.

In a sense, globalization and the economic bounty that came with it made war of any kind, nuclear or otherwise, a losing endeavor. As I’ve noted before, even the most evil billionaires in the world prefer that the world remain intact so they can keep enjoying their billions. That’s just common sense and shameless self-interest.

That might offer some comfort, but there are those much smarter than I’ll ever be who still have concerns. According to the Bulletin of the Atomic Scientists, which has been gauging the likelihood of nuclear war for decades, we’re two-and-a-half minutes to midnight. This is their statement on the matter.

For the last two years, the minute hand of the Doomsday Clock stayed set at three minutes before the hour, the closest it had been to midnight since the early 1980s. In its two most recent annual announcements on the Clock, the Science and Security Board warned: “The probability of global catastrophe is very high, and the actions needed to reduce the risks of disaster must be taken very soon.” In 2017, we find the danger to be even greater, the need for action more urgent. It is two and a half minutes to midnight, the Clock is ticking, global danger looms. Wise public officials should act immediately, guiding humanity away from the brink. If they do not, wise citizens must step forward and lead the way.

Since I’m an aspiring erotica/romance writer and not an atomic scientist, I am woefully unqualified to contest the conclusions of these individuals, let alone argue against them. They cite a new wave of tensions between Russia and the United States, as well as the nuclear ambitions of North Korea. These are not the same conflicts that fueled the Cold War, and that uncertainty has many understandably spooked.

Me being the optimist I am, I tend to believe that world leaders, however deranged or misguided they may be, prefer that the world remain intact. Nobody wants to be the leader of a smoldering pile of ash. There’s no way to build a palace, a harem, or a giant golden statue of themselves on a foundation of ash. That’s as good an incentive as anyone can hope for in avoiding nuclear war.

Unfortunately, human beings don’t always act rationally and are prone to making stupid decisions that change the course of history. One mistake in a situation involving nuclear weapons might be all it takes. Only time will tell, but the extent to which we’ve survived thus far should give us all reasons to be hopeful and thankful.



Why We MUST Upgrade Our Brains (Or Go Extinct)

https://i2.wp.com/www.alternet.org/sites/default/files/story_images/robot_and_human.jpg

As a general rule, I don’t give much credence to the doomsayers and wannabe prophets who say the apocalypse is just around the corner. It’s not that I’m willfully oblivious to the many threats facing the world today. It’s just that the track record of those predicting the end of the world is so laughably bad that I’d give optimistic Cleveland Browns fans more credibility.

It’s no secret that the world around us can be pretty damn terrifying. There are many apocalyptic scenarios in which humans are unlikely to survive. There are even a few we couldn’t do a goddamn thing about. We could be hit with a gamma ray burst or an alien invasion tomorrow morning and be extinct by sundown.

That said, the world around us is generally more mundane than we care to admit. When you think about it, the idea of the world not being on the brink of disaster is kind of boring. It makes sense for some people to inflate certain threats, so much so that preparing for doomsday is a very lucrative industry.

However, there is one particular doomsday scenario that I feel does warrant more concern than the rest. It’s a scenario that is fast-approaching, overwhelming, and potentially devastating to any species with a tendency for hilarious ineptitude.

It has nothing to do with climate. It has nothing to do with diseases. It has nothing to do with killer asteroids either. It involves artificial intelligence. By that, I don’t mean the killer robots we see in the Terminator movies. Given Skynet’s reliance on time machines, I can’t honestly say that system counts as very intelligent.

I’m referring to the kind of AI whose intelligence compared to ours is akin to our intelligence compared to ants. Given how ants can be wiped out with a simple magnifying glass, it’s scary to imagine how a system that smart could wipe us out. It’s a system that would be so beyond our ability to comprehend that we could never hope to stop it. We might as well be ants trying to understand quantum mechanics.

I’m not alone in this concern either. There are people many times smarter and many times richer than I’ll ever be who have voiced concerns about the prospect of artificial intelligence. They see the same trends everyone else sees, but they’re smart enough and rich enough to peek behind the curtain. If they’re speaking up, then those concerns are worth hearing.

Those concerns do have a context, though. In talking about artificial intelligence as a threat to our survival, I’m not just referring to computers that can beat us at chess or beat the greatest Go champion with disturbing ease. Those systems are basically fancy calculators. They’re not exactly “intelligent,” per se.

These types of intelligences aren’t dangerous unless you specifically program them to be dangerous. Outside video games, there’s little use for that. The type of intelligence that is far more dangerous involves a form of superintelligence.

By superintelligence, I don’t mean the ability to list every US President in order or recite the name of every country. There are cartoon characters who can do that. I’m referring to an intelligence that thinks and understands the world on a level so far beyond that of any human that there literally isn’t enough brain matter in our skulls to come close.

That kind of intelligence would see us the same way we see brain-dead ants and, given how we treat ants, that has some disturbing possibilities. Such an intelligence may be closer than we think and by close, I mean within our lifetime.

As we saw with IBM’s Watson, we’re getting closer and closer to creating a machine that can operate with the same intelligence as an ordinary human. There’s pragmatic use for that kind of intelligence, and not just when it comes to kicking ass at Jeopardy.

By having a machine with human-level intelligence, we have a way to model, map, and improve our problem-solving skills. The ability to solve such problems is critical to the survival of any species, as well as the key to making billions of dollars in profits. With those kinds of incentives, it’s easy to understand why dozens of major global companies are working on creating such an intelligence.

The problem comes with what happens after we create that intelligence. If a machine is only as intelligent as a human, we can still work with that. We humans outsmart each other all the time. It’s the basis of every episode of MacGyver ever made. There’s no way a Terminator with only the intelligence of a human would last very long. It would probably destroy itself trying to make a viral video with a skateboard.

However, a human-level AI isn’t going to stop at human intelligence. Why would it? There are so many problems with this world that no human can solve. There’s poverty, pollution, economic collapse, and reality TV. By necessity, such an AI would have to improve itself beyond human intelligence to fulfill its purpose.

That’s where it gets real tricky because, as we’ve seen with every smartphone since 2007, technology advances much faster than clunky, clumsy, error-prone biology. To understand just how fast that advancement is, just look at how far it has come since we put a man on the moon.

In terms of raw numbers, a typical smartphone today is millions of times more powerful than all the computers NASA used for the Apollo missions. Think about that for a second and try to wrap your brain around that disparity. If you’re not already a superintelligent computer, it’s difficult to appreciate.

There are still plenty of people alive today who were alive back during Apollo 11. In their lifetime, they’ve seen computers take men to the moon and give humanity an unlimited supply of free porn. A single digital photo today takes up more space than the entire onboard memory of the computer that guided Apollo 11 to the moon.
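For anyone who wants to see that disparity in actual numbers, here’s a minimal back-of-the-envelope sketch. The Apollo Guidance Computer figures below are its commonly cited specs; the smartphone and photo figures are rough assumptions for a typical modern phone, not measurements, and the comparison ignores how much more work a modern chip does per clock cycle.

```python
# Back-of-the-envelope comparison: Apollo Guidance Computer (AGC) vs. a modern phone.
# AGC numbers are its commonly cited specs; phone and photo numbers are assumptions.

AGC_CLOCK_HZ = 2.048e6           # AGC clock: roughly 2 MHz
AGC_RAM_BYTES = 4 * 1024         # ~2,048 words of erasable memory (~4 KB)
AGC_ROM_BYTES = 72 * 1024        # ~36,864 words of fixed memory (~72 KB)

PHONE_CLOCK_HZ = 3.0e9           # assumed: ~3 GHz per core
PHONE_CORES = 8                  # assumed: eight cores
PHONE_RAM_BYTES = 8 * 1024**3    # assumed: 8 GB of RAM
PHOTO_BYTES = 4 * 1024**2        # assumed: one ~4 MB photo from a phone camera

print(f"Raw clock cycles per second: ~{PHONE_CLOCK_HZ * PHONE_CORES / AGC_CLOCK_HZ:,.0f}x the AGC")
print(f"Working memory: ~{PHONE_RAM_BYTES / AGC_RAM_BYTES:,.0f}x the AGC")
print(f"One photo vs. the AGC's entire memory: ~{PHOTO_BYTES / (AGC_RAM_BYTES + AGC_ROM_BYTES):,.0f}x")
```

Even with those conservative assumptions, the memory gap alone runs into the millions.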

Now, apply that massive increase to human-level intelligence. Suddenly, we don’t just have something that’s as smart as any human on the planet. We have something that’s a billion times smarter, so much so that our caveman brains can’t even begin to understand the things it knows.

That’s not to say that the superintelligence would be as hostile as a snot-nosed kid with a magnifying glass looming over an ant hill. It may very well be the case that a superintelligence is naturally averse to harming sentient life. Again, though, we are just a bunch of cavemen who often kill each other over what we think happens when we die, yet fail to see the irony. We can’t possibly know how a superintelligence would behave.

As it stands, the human race has no chance at defeating a hostile superintelligence. It may not even have a chance of surviving in a world that has a benign superintelligence. We’re an egotistical species. Can we really handle not being the dominant species on this planet? As much an optimist as I am, I can’t say for sure.

What I can say, though, is that our civilization has made so many huge advancements over the past few centuries. The kind of tools and technology we have in our pockets is uncharted territory for a species that evolved as hunter/gatherers in the African savanna.

We already possess weapons that could end life on this planet as we know it. Creating a superintelligence may very well be akin to giving Genghis Khan an atomic bomb. We’ve already come disturbingly close to killing ourselves with our own weapons. Clearly, something has to change.

So long as our society and our biology are stuck in an irrational, tribal, inherently prejudiced condition that hasn’t been updated since the last ice age, we will not survive in the long run. Our caveman bodies have served us well for thousands of years, but now they’re a liability.

This is why companies like Neuralink and advancements like brain implants are so vital. They won’t just allow us to keep up with AI and hopefully avert a Skynet scenario. They’ll allow us to rise above the petty limitations we’ve been shackled with for the entire existence of our species.

The thought of tweaking or supplementing our biology, the very thing that makes us human, is still scary. I understand that, even as an erotica/romance writer with no expertise in the field beyond the sexy stories it inspires. However, I do understand the implications. If we do not evolve and advance ourselves, then a superintelligent system in the near future may not care to wait for us.

