Tag Archives: Nuclear Weapons

What Would You Do If A Nuclear Attack Was Imminent?

This is a video from my YouTube channel, Jack’s World.

Since the invention of nuclear weapons, the possibility of nuclear war has been a looming threat. Geopolitics has evolved, as has the source of tension and danger. But there’s no question that these weapons are devastating. And should they ever be used, the world will change.

But what would we do, as individuals, if we knew a nuclear attack was imminent? We got a brief taste of that in 2018 with the false missile alert in Hawaii. Now, I’d like to offer my own take on what I would do if such an attack were imminent. Enjoy!


Filed under Current Events, Jack Fisher's Insights, Jack's World, YouTube

Celebrating Vasili Arkhipov: The Man Who Saved The World (And My Life)

This is a video from my YouTube channel, Jack’s World.

This video celebrates and honors the actions of Vasili Arkhipov, a man who saved the world during the darkest hours of the Cold War. It all unfolded during the Cuban Missile Crisis. The United States and the Soviet Union engaged in a high-stakes showdown over nuclear missiles that had been secretly stationed in Cuba. And at the most intense moment of the conflict, Vasili Arkhipov’s actions ensured the world was spared from nuclear war.

And, as a result, I owe this man my life. And many of those who view it likely owe their existence to him, as well. Enjoy!


Filed under history, Jack's World, YouTube

Celebrating & Honoring Stanislav Petrov: The Man Who Saved My Life (And MANY Others)

This is a video from my YouTube channel, Jack’s World.

This video is my way of celebrating and honoring Stanislav Petrov, a man who I know for a fact saved my life, as well as the lives of many others. Few know his story or what happened during the events of September 26, 1983. On this day, the world as we knew it almost ended. The Soviet Union and the United States almost went to nuclear war.

But it was because of one man’s decision that this did not happen. Stanislav Petrov is the man who made that decision and for that, I am grateful. After watching this video, I hope others share in this sentiment. Enjoy!


Filed under Current Events, Jack's World, politics, YouTube

CAN Humanity Effectively Govern Itself?

This is a video from my YouTube channel, Jack’s World.

This video explores the question of whether humanity can actually govern itself beyond a certain point. For most of human history, the stakes were local. But now, the stakes are global in a very literal sense. Between climate change and nuclear weapons, humanity has the ability to end itself and all life on Earth.

That makes the question of whether or not we can effectively govern ourselves incredibly relevant. And I try to explore that concept as best I can while leaving some room for speculation on both sides.


Filed under Current Events, Jack's World, politics, YouTube

Thought Experiment: What Technology Can Humanity NOT Be Trusted With?

This is a video from my YouTube channel, Jack’s World.

This video is a thought experiment about dangerous technology. Over the centuries, humanity has developed numerous tools that have helped us survive. But some tools are more dangerous than others. There are nuclear weapons, chemical weapons, and biologically engineered viruses. But these are all things we have now.

Future technology has the potential to be much more dangerous. So, I explore the idea of there being some technology that humanity just cannot be trusted with. Do you agree? Do you have other insights? Share your thoughts. They are always welcome.


Filed under Artificial Intelligence, Jack's World, technology, Thought Experiment, YouTube

How Do We Regulate Artificial Intelligence? Seriously, How?

In general, I don’t pay much attention to doomsayers who claim the end is near and we should all cower, tremble, and give them our credit card number. Don’t get me wrong. I still believe there are serious existential threats facing humanity today. Some are worth taking seriously and dedicating resources to addressing. Others are not. Some simply require a more balanced perspective.

There’s a long list of failed apocalyptic predictions. The fact we’re surviving and thriving by most measures shows just how resilient, adaptable, and capable humanity is. There are some threats that I believe humanity will eventually resolve, thanks largely to our accelerating progress in science, technology, and logistics.

Others, however, have me more concerned. While some are more immediate than others, one in particular continues to confound me, as well as some of the smartest people in the world. It involves artificial intelligence, an emerging technology that’s as promising as it is unpredictable. Given the complexity of this technology, it’s difficult to explain in totality, but it can be best summed up by one simple question.

How do you regulate artificial intelligence?

That’s not a rhetorical question. It’s not a thought experiment either. It’s a serious, honest question that people far smarter and far more capable than me are just starting to ask.

Elon Musk is one of them. Very recently, he called for more regulation on artificial intelligence. That, alone, should be both telling and worrying. This man is a billionaire. Usually, billionaires are more inclined to advocate removing regulations. Whenever they make an exception, that’s a sign they know it’s serious.

Even though Musk is one of the top advocates for solving big problems with technology, he still has concerns about the problems associated with artificial intelligence. In AI circles, it’s often called the control problem. It’s not a very creative name, but it gets the point across.

How do you control something that is potentially as smart as, if not smarter than, a human?

How do you manage something that thinks, adapts, and evolves faster than any machine or living thing?

How do you regulate an artificial intelligence that was built by humans, but isn’t at all human?

These are all difficult questions to contemplate, let alone legislate. Even Musk doesn’t provide specifics. Chances are he doesn’t know any more than the rest of the non-billionaire population. That’s a problem because if we’re going to try and regulate this technology, we need to understand it. On top of that, politicians and lawmakers have a long and embarrassing history of failing to understand technology.

However, this isn’t just about writing laws that protect citizens from being exploited by tech companies. Artificial intelligence, especially the kind that exceeds human intelligence, has capabilities that go beyond sending text messages from bathroom stalls. If handled improperly, it wouldn’t just be an existential threat. It could destroy humanity in ways we literally cannot contemplate.

Now, I try to be an optimist in most things involving emerging technology. Humanity has found a way to manage dangerous technology before, namely with nuclear weapons. However, artificial intelligence is a different beast entirely. Regulating it isn’t as easy as simply controlling the materials that make it. The very concept of regulating this technology lacks precedent.

The closest we have to date is Isaac Asimov’s famous three laws of robotics, which were introduced in 1942. Asimov was a brilliant writer and very ahead of his time on some concepts, but this is one issue where we need more than just three simple tenets. We need to think bigger and bolder. If we don’t, then an advanced artificial intelligence will quickly leave us behind.

After that, it won’t matter what kind of regulations we try to pass. It’ll be smart enough to circumvent them. That doesn’t mean humanity is doomed at that point, but we’ll be distressingly vulnerable. I know it’s in our nature to procrastinate on things we don’t see as vital, but if ever there was an issue to make an exception, this is it.


Filed under Artificial Intelligence, biotechnology, futurism, philosophy, technology, Thought Experiment

Why Biological Weapons Will Be A (MUCH) Bigger Threat In The Future


It wasn’t too long ago that the biggest existential threat facing humanity was nuclear war. I’ve noted before how distressingly close we’ve come to a nuclear disaster and how the threat of a nuclear holocaust is still present. However, that threat has abated in recent decades, especially as nuclear weapons have gotten so destructive that their use is essentially self-defeating.

More recently, people have become more concerned about the threat posed by advanced artificial intelligence. The idea is that at some point, an AI will become so intelligent and capable that we won’t be able to stop it in the event it decides that humanity must go extinct. It’s the basis of every Terminator movie, as well as an Avengers movie.

While I certainly have my concerns about the dangers of advanced artificial intelligence, it’s not the threat that worries me most these days. We still have some measure of control over the development of AI and we’re in a good position to guide that technology down a path that won’t destroy the human race. The same cannot be said for biological weapons.

If there’s one true threat that worries me more with each passing day, it’s that. Biological weapons are one of those major threats that never slips under the radar, as evidenced by plenty of movies, books, and TV shows. However, the extent of that threat has been understated in recent years, and it has the potential to become something more powerful than nuclear weapons.

By powerful, I don’t necessarily mean deadlier. At the end of the day, nuclear weapons are still more capable of rendering the human race extinct and turning the whole planet into a radioactive wasteland. The true power of biological weapons is less about how deadly they can be and more about how useful they could be to potential governments, tyrants, or extremists.

For most of human history, that power has been limited. There’s no question that disease has shaped the course of human history. Some plagues are so influential that they mark major turning points for entire continents. The same can be said for our ability to treat such diseases. However, all these diseases had one fatal flaw that kept them from wiping out the human race.

Thanks to the fundamental forces of evolution, a deadly pathogen can only be so deadly and still survive. After all, an organism’s ultimate goal isn’t to kill everything it encounters. It’s to survive and reproduce. It can’t do that if it kills a carrier too quickly. If it’s too benign, however, then the carrier’s immune system will wipe it out.

That’s why even diseases as deadly as Ebola and influenza can only be so infectious. If they kill all their hosts, then they die with them. That’s why, much to the chagrin of creationists, evolution doesn’t favor the natural emergence of apocalyptic diseases. They can still devastate the human race, but they can’t necessarily wipe it out. Any pathogen that did would only wipe itself out in the process, and most lifeforms avoid that.
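To make that trade-off concrete, here’s a minimal sketch in Python. It’s a toy SIR-style outbreak model with invented parameters, not real epidemiology, but it shows the basic dynamic: recovery and death both remove hosts from circulation, so a deadlier pathogen burns itself out faster.

```python
# Toy SIR-style outbreak model. All parameters are invented
# purely for illustration; this is not real epidemiology.

def total_infected(lethality, beta=0.3, recovery=0.1,
                   pop=1_000_000, days=365):
    s, i = pop - 1.0, 1.0        # susceptible and infectious hosts
    infected_ever = 1.0
    for _ in range(days):
        new_cases = beta * s * i / pop          # transmission
        removals = (recovery + lethality) * i   # recovery and death both end spread
        s -= new_cases
        i += new_cases - removals
        infected_ever += new_cases
    return infected_ever

for lethality in (0.01, 0.1, 0.5):
    print(f"daily death rate {lethality:.2f}: "
          f"{total_infected(lethality):,.0f} people ever infected")
```

With these made-up numbers, the mildest strain reaches the most people by far, while the deadliest one fizzles out almost immediately. That’s the evolutionary pressure described above, reduced to a dozen lines.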

It’s also why the large-scale biological weapons programs of the 20th century could only be so effective. Even if a country manufactured enough doses of an existing disease to infect every person on the planet, it wouldn’t necessarily be deadly enough to kill everyone. Even at their worst, smallpox and bubonic plague never killed more than two-thirds of those they infected.

That’s not even factoring in how difficult it is to distribute these pathogens to everyone without anyone noticing. It’s even harder today because powerful governments invest significant resources into preventing and containing an outbreak. If large numbers of people start getting sick and dropping dead at a rapid rate, then someone will notice and take action.

That’s why, for the most part, biological weapons are both ethically untenable and not very useful as weapons of mass destruction. They’re difficult to control, difficult to distribute, and have unpredictable effects. They also require immense resources, considerable technical know-how, and a keen understanding of science. Thankfully, these are all things that extreme religious zealots tend to lack.

For the most part, these powerful constraints have kept biological weapons from being too great a threat. However, recent advances in biotechnology could change that and it’s here where I really start to worry. With recent advances in gene-editing and the emergence of tools like CRISPR, those limitations that kept biological weapons in check may no longer be insurmountable.

While I’ve done plenty to highlight all the good that tools like CRISPR could do, I don’t deny that there are potential dangers. Like nuclear weapons, this technology is undeniably powerful and powerful technology always carries great risks. With CRISPR, the risks aren’t as overt as fiery mushroom clouds, but they can be every bit as deadly.

In theory, CRISPR makes it possible to cut and paste genetic material with the same ease as arranging scattered puzzle pieces. With the right materials and tools, this technology could be used to create genetic combinations in organisms that could never occur naturally or even with artificial selection.
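To picture what “cut and paste” means here, consider a loose analogy in code. This is only an analogy with made-up sequences; real gene editing involves vastly more biology than string manipulation. But conceptually, a guide sequence marks where to cut, and new material gets spliced in:

```python
# Loose analogy only: gene editing as find-cut-paste on a DNA string.
# The sequences below are invented for illustration.

def edit(genome: str, guide: str, insert: str) -> str:
    """Cut where the guide sequence matches, then splice in new material."""
    site = genome.find(guide)
    if site == -1:
        return genome                 # no match: nothing to edit
    cut = site + len(guide)
    return genome[:cut] + insert + genome[cut:]

genome = "ATGGCGTACGTTAGCCTA"
print(edit(genome, guide="TACG", insert="AAA"))
# -> ATGGCGTACGAAATTAGCCTA
```

The unsettling part isn’t the mechanics. It’s that the mechanics are programmable, which is what separates engineered pathogens from anything evolution produces on its own.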

Imagine a strain of smallpox that was lethal 100 percent of the time and just as infectious.

Imagine a strain of the flu that was as easy to spread as the common cold, but as deadly as bubonic plague.

Imagine a strain of an entirely new pathogen that is extremely lethal and completely immune to all modern medicine.

These are all possible, albeit exceedingly difficult, with genetic editing. Unlike nuclear weapons, it doesn’t require the procurement of expensive and dangerous elements. It just needs DNA, RNA, and a lab in which to produce them. It’s a scary idea, but that’s actually not the worst of it, nor is it the one that worries me most.

A doomsday bioweapon like that might be appealing to generic super-villains, but like nuclear weapons, it’s not very strategic because it kills everyone and everything. For those with a more strategic form of blood-lust, advanced biological weapons offer advantages that set them apart from any other weapon.

Instead of a pathogen infecting everyone it comes into contact with, what if it only infected a certain group of people who carry specific traits associated with a particular race or ethnic group? What if someone wanted to be even more strategic than that and craft a pathogen that attacked only one specific person?

In principle, this is possible if you can manipulate the genetics of a disease in just the right way. Granted, it’s extremely difficult, but that kind of precision makes it more useful than a nuclear bomb will ever be.

Suddenly, a government or terrorist organization doesn’t need a skilled assassin on the level of James Bond to target a specific person or group. They just need the right genetic material and a working knowledge of how to program it into a synthetic pathogen. It could even be made to look like a completely different disease, ensuring it doesn’t raise any red flags.

It’s not the ultimate weapon, but it’s pretty darn close. Biological weapons with this level of refinement could potentially target entire groups of people and never put the attackers at risk. As a strategy, it could effectively end an entire conflict without a shot being fired. Those infected wouldn’t even know a shot had been fired if the pathogen were effectively distributed.

It’s one of those weapons that both terrorists and governments would be tempted to use. The most distressing part is they could use it in a way that’s difficult to detect, let alone counter. Even after all the death and destruction has been wrought, how do you even prove that it was a result of a bioweapon? Even if you could prove that, how would you know who made it?

These are the kinds of questions that only have disturbing answers. They’re also the reasons why I believe biological weapons are poised to become a far bigger issue in the coming years. Even if it’s unlikely they’ll wipe out the human race, they can still cause a special kind of destruction that’s almost impossible to counter.

Unlike any other weapon, though, the destruction could be targeted, undetectable, and unstoppable. Those who wield this technology would have the power to spread death with a level of precision and tact unprecedented in human history. While I believe that humanity will eventually be able to handle dangerous technology like artificial intelligence, I doubt it’ll ever be capable of handling a weapon like that.


Filed under biotechnology, CRISPR, Current Events, futurism, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and son-in-law of Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they are noteworthy is when one of those strikes incurs heavy civilian casualties. They have also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million while a single combat drone costs about $16 million. That makes drones roughly nine times cheaper, and you don’t need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States military has gone on record saying it’s looking for ways to integrate advanced AI into combat drones. While officials stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act of their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary, and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case scenarios overlook both the strategic and the practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take a single person out, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wanted to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wanted to take out something bigger, it could coordinate with other drones to hit the target with traditional missiles in such a way that it had no hope of defending itself.

Granted, drones this advanced probably won’t be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go beyond just avoiding large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make such wars obsolete.


Filed under Artificial Intelligence, Current Events, futurism, technology

Vasili Arkhipov: Another Man Who Saved The World

It’s amazing and sobering to think that there was a point in human history where one person held the fate of the entire world in their hands. We’re a species that can barely be trusted with fireworks. How can we possibly be trusted with weapons so deadly they can end all life on this planet?

Whether we can be trusted or not, these weapons cannot be uninvented. They’re here and they’re a part of the human condition. I’ve made the argument that in some ways, they have benefited the human race. That doesn’t make all the times we’ve come close to nuclear annihilation any less distressing.

Last month, I highlighted a man who, at one point, held the lives of every person on this planet in his hand. His name was Stanislav Petrov and this year, he passed away shortly before the 34th anniversary of the 1983 nuclear false alarm that almost triggered nuclear war.

That incident was horrifying in that it came so distressingly close to unleashing a nuclear war, and due to a computer malfunction, no less. However, there was another incident two decades before that, one that came just as disturbingly close to unleashing a nuclear holocaust. Today, October 27th, happens to mark the anniversary of that terrifying, yet sobering event.

Unlike the 1983 incident, though, this is not one of those lesser-known incidents that got swept under the rug for several decades. This involved something called the Cuban Missile Crisis, a terrifying event that most kids learn about in school.

The official story is fairly well-documented. Cuba becomes a communist state, the Soviet Union tries to base mid-range ballistic missiles there, the United States is not okay with that, tensions escalate, and eventually, the situation defuses when both sides realize that negotiating is a lot easier than nuclear war.

Most kids probably know names like Kennedy, Khrushchev, and Castro. They’re the main players who get the credit/blame for bringing the world to the brink of nuclear war. However, there’s one name that few in America or Russia know about. Like Stanislav Petrov, his name is largely an afterthought in history, but at one point, he literally held the fate of humanity in his hand.

His name is Vasili Arkhipov. He was a high-ranking officer in the Soviet navy. What he did on October 27th, 1962 may very well have saved the world as we know it. The fact that he did it during one of the tensest standoffs in history makes his accomplishment all the more remarkable.

To appreciate what he did, it’s important to understand just how close the Cuban Missile Crisis came to going nuclear. At one point, a Soviet nuclear-armed submarine was stationed in international waters near Cuba. US destroyers detected it and, in what seems like a dangerously mixed message in hindsight, began dropping depth charges to force it to surface.

Keep in mind, this sub was armed with nuclear weapons. Its officers also had the authority from Moscow to launch those weapons if they suspected that war had already started. Once those depth charges started going off, it’s easy to understand why the officers on board thought the bombs were already falling.

That’s where Vasili Arkhipov enters the picture. On that sub, every other ranking officer believed that they had to launch. Their sub was running out of air and, as far as they knew, if they surfaced, they would only be inhaling radioactive fallout. Everyone else on that sub voted to launch. The only one who didn’t was Vasili Arkhipov, and because of that, the launch never happened.

Why did Arkhipov’s vote carry so much weight? Well, before he had the fate of the world in his hands, he’d already distinguished himself in an incident involving an ill-fated Soviet submarine called the K-19. Like the Cuban Missile Crisis, it resulted in a sub-par movie.

However, the movie did get one thing right. Vasili Arkhipov was a badass who knew how to make hard decisions. Why else would he have been played by Harrison Ford? That incident established Arkhipov as someone whose voice carried more weight than most in the Soviet navy. Even when he was outvoted and outranked, he could make decisions and other people would follow them.

That’s a big part of what sets someone like Vasili Arkhipov apart from Stanislav Petrov. You could argue that Petrov was just in the right place at the right time to make the right decision. That alone makes him a hero.

With Arkhipov, the decision wasn’t as clear-cut. He was an officer in the navy. He had a rank, a responsibility, and a role in one of the most tense geopolitical situations in the history of mankind. He was in a floating coffin surrounded by enemy ships with every other officer wanting to launch a nuclear strike.

He could’ve easily chosen to go along with his fellow officers. Given how badly humans respond to peer pressure, that would’ve been the easiest thing for Arkhipov to do. The fact he chose otherwise is a testament to his ability to do the right thing in a moment where the right thing is hard to grasp.

To appreciate just how hard that decision was, consider that Arkhipov wasn’t even praised for it. In fact, he and his crew were disgraced for surfacing in the first place. Sure, he averted a nuclear holocaust, but he didn’t follow the proper protocol after being discovered by the Americans.

Granted, that protocol didn’t involve starting a nuclear war, but surfacing showed weakness during a geopolitical shit storm. This wasn’t like the 1983 incident, which could be swept under the rug. This was the Cuban Missile Crisis. You don’t call something a crisis unless it’s that big a deal.

That’s what makes Vasili Arkhipov’s actions on that day all the more remarkable. He was not praised, commended, or even celebrated for his decision. He might not have even realized that he had the fate of the human race in his hands at that moment. Remember, he made that decision at a time when, for all he knew, nuclear war had already started. Despite that, he did the right thing.

Unlike Stanislav Petrov, Vasili Arkhipov didn’t live long enough to see his accomplishments acknowledged. He died in 1998 in relative obscurity. He may not have a holiday or a monument named after him, but like Stanislav Petrov, he made a decision that saved the world.


It’s still scary to think that any one person was in a position to make such a decision to begin with. However, the fact that both he and Petrov were able to do it says more about humanity than any weapon ever will.


Filed under Current Events

Just How Close Have We Come (And How Close ARE We) To Nuclear War?

For most of human history, we could take comfort in one simple fact. No matter how brutish, crude, or stupid we were, from burning witches to fighting wars over a stray dog, we could never screw up so badly that we would destroy our entire world. Sure, we could leave some pretty noticeable scars, but we could never outright destroy it.

That all changed on July 16, 1945, near Alamogordo, New Mexico, when the first atomic bomb was detonated. It’s impossible to overstate how significant that moment was in the history of the human race, and not just because it helped end World War II, thereby inspiring countless war movies for decades to come.

For the first time in the history of planet Earth, a species that had evolved to hunt, gather, and pick nuts out of elephant shit had the means to wipe itself out, along with most other life. At the height of the Cold War, there were approximately 64,500 active nuclear warheads. That’s enough destructive power to kill every person in the world, and their pets, many times over.

While the number of live nuclear warheads at the global level has decreased, they still have plenty of destructive power to both wipe out our species and render large chunks of the world uninhabitable to any species less hardy than a cockroach. These are, by and large, the most dangerous items mankind has ever created and that includes machine guns, nerve gas, and fidget spinners.

The very existence of these weapons says a lot about the state of our species and where it came from, more so than I can cover in a single blog post. However, in the wake of the 35th anniversary of the day when the world, as we know it, almost ended, I think it’s worth emphasizing just how skilled/lucky/crazy we are to still live in an intact world.

Despite the undeniable danger of nuclear weapons, we don’t always treat them with the same care that we would treat the latest iPhone. Several years ago, John Oliver dedicated an entire show to highlighting the sorry state of America’s nuclear arsenal. Even if you only believe half of what a comedy news show tells you, it’s hard to take much comfort when weapons of mass destruction are involved.

What happened on September 26th, 1983 was terrifying in just how close we came to nuclear war. Many would make the argument that this incident was the closest we, as a species, came to destroying ourselves. I would tend to agree with that argument. Unfortunately, it’s an argument with an uncomfortably long list of supporting examples.

It’s true. There have been more incidents that could’ve easily escalated to terrifying levels. Some were simple accidents that could’ve warranted far more than a demotion. Some were intense geopolitical ordeals that went on to inspire major Hollywood movies starring Kevin Costner.

In any case, the stakes were painfully high. You literally can’t get much higher than a nuclear war that wipes out billions. We’ve managed to avoid it, but we’ve come so uncomfortably close that it’s a miracle the world is still spinning. A video from the YouTube channel AllTimeTop10s nicely documents some of these incidents. If you feel like you’re having a bad day, this should help provide some context.

https://youtu.be/uaHvURLvPgg

I’ll give everyone a moment to catch their breath, vomit, or a combination of the two. I promise nobody would blame you. Knowing how close we came to nuclear war and how bad it could’ve been, we should all share in a collective sigh of relief every day.

However, as bad as these past cases have been, there’s no guarantee that we won’t face something similar in the future. There’s also no guarantee that there will be someone like Stanislav Petrov to make the right decision when those situations come around.

That said, the situation today is very different than it was during the Cold War. Say what you will about ongoing talking points about Russia. It’s not even in the same hemisphere as it was in the 50s and 60s, when the United States and the Soviet Union seemed eager for an opportunity to go to war.

The world of geopolitics has evolved, in many ways, beyond the concept of two competing superpowers engaging in a nuclear dick-measuring contest. These days, increased globalism and a more interconnected economy makes that kind of geopolitical strategy untenable and counterproductive.

In a sense, globalization and the economic bounty that came with it made war of any kind, nuclear or otherwise, a losing endeavor. As I’ve noted before, even the most evil billionaires in the world prefer that the world remain intact so they can keep enjoying their billions. That’s just common sense and shameless self-interest.

That might offer some comfort, but there are those much smarter than I’ll ever be who still have concerns. According to the Bulletin of the Atomic Scientists, which has been gauging the likelihood of nuclear war for decades, we’re two-and-a-half minutes to midnight. This is their statement on the matter.

For the last two years, the minute hand of the Doomsday Clock stayed set at three minutes before the hour, the closest it had been to midnight since the early 1980s. In its two most recent annual announcements on the Clock, the Science and Security Board warned: “The probability of global catastrophe is very high, and the actions needed to reduce the risks of disaster must be taken very soon.” In 2017, we find the danger to be even greater, the need for action more urgent. It is two and a half minutes to midnight, the Clock is ticking, global danger looms. Wise public officials should act immediately, guiding humanity away from the brink. If they do not, wise citizens must step forward and lead the way.

Since I’m an aspiring erotica/romance writer and not an atomic scientist, I am woefully unqualified to contest the conclusions of these individuals, let alone argue them. They cite a new wave of tensions between Russia and the United States, as well as the nuclear ambitions of North Korea. These are not the same conflicts that fueled the Cold War and that uncertainty has many understandably spooked.

Me being the optimist I am, I tend to believe that world leaders, however deranged or misguided they may be, prefer that the world remain intact. Nobody wants to be the leader of a smoldering pile of ash. There’s no way to build a palace, a harem, or a giant golden statue of themselves on a foundation of ash. That’s as good an incentive as anyone can hope for in avoiding nuclear war.

Unfortunately, human beings don’t always act rationally and are prone to making stupid decisions that change the course of history. One mistake in a situation involving nuclear weapons might be all it takes. Only time will tell, but the extent to which we’ve survived thus far should give us all reasons to be hopeful and thankful.


Filed under Current Events, Jack Fisher's Insights