
Why Biological Weapons Will Be A (MUCH) Bigger Threat In The Future


It wasn’t too long ago that the biggest existential threat facing humanity was nuclear war. I’ve noted before how distressingly close we’ve come to a nuclear disaster and how the threat of a nuclear holocaust is still present. However, that threat has abated in recent decades, especially as nuclear weapons have gotten so destructive that their use is somewhat redundant.

More recently, people have become more concerned about the threat posed by advanced artificial intelligence. The idea is that at some point, an AI will become so intelligent and capable that we won’t be able to stop it in the event it decides that humanity must go extinct. It’s the basis of every Terminator movie, as well as an Avengers movie.

While I certainly have my concerns about the dangers of advanced artificial intelligence, it’s not the threat that worries me most these days. We still have some measure of control over the development of AI and we’re in a good position to guide that technology down a path that won’t destroy the human race. The same cannot be said for biological weapons.

If there’s one true threat that worries me more with each passing day, it’s that. Biological weapons are one of those major threats that don’t slip under the radar, as evidenced by plenty of movies, books, and TV shows. However, the extent of that threat is often understated, and it has the potential to become more powerful than nuclear weapons.

By powerful, I don’t necessarily mean deadlier. At the end of the day, nuclear weapons are still more capable of rendering the human race extinct and turning the whole planet into a radioactive wasteland. The true power of biological weapons is less about how deadly they can be and more about how useful they could be to governments, tyrants, or extremists.

For most of human history, that power has been limited. There’s no question that disease has shaped the course of human history. Some plagues are so influential that they mark major turning points for entire continents. The same can be said for our ability to treat such diseases. However, all these diseases had one fatal flaw that kept them from wiping out the human race.

Thanks to the fundamental forces of evolution, a deadly pathogen can only be so deadly and still survive. After all, an organism’s ultimate goal isn’t to kill everything it encounters. It’s to survive and reproduce. It can’t do that if it kills a carrier too quickly. If it’s too benign, however, then the carrier’s immune system will wipe it out.

That’s why even diseases as deadly as Ebola and influenza can only be so infectious. If they kill all their hosts, then they die with them. That’s why, much to the chagrin of creationists, evolution doesn’t favor the natural emergence of apocalyptic diseases. They can still devastate the human race, but they can’t necessarily wipe it out. Any pathogen that did would only wipe itself out in the process, and most lifeforms avoid that.

It’s also why the large-scale biological weapons programs of the 20th century could only be so effective. Even if a country manufactured enough doses of an existing disease to infect every person on the planet, it wouldn’t necessarily be deadly enough to kill everyone. Even at their worst, smallpox and bubonic plague never killed more than two-thirds of those they infected.

That’s not even factoring in how difficult it is to distribute these pathogens to everyone without anyone noticing. It’s even harder today because powerful governments invest significant resources into preventing and containing an outbreak. If large numbers of people start getting sick and dropping dead at a rapid rate, then someone will notice and take action.

That’s why, for the most part, biological weapons are both ethically untenable and not very useful as weapons of mass destruction. They’re difficult to control, difficult to distribute, and have unpredictable effects. They also require immense resources, considerable technical know-how, and a keen understanding of science. Thankfully, these are all things that extreme religious zealots tend to lack.

For the most part, these powerful constraints have kept biological weapons from being too great a threat. However, recent advances in biotechnology could change that and it’s here where I really start to worry. With recent advances in gene-editing and the emergence of tools like CRISPR, those limitations that kept biological weapons in check may no longer be insurmountable.

While I’ve done plenty to highlight all the good that tools like CRISPR could do, I don’t deny that there are potential dangers. Like nuclear weapons, this technology is undeniably powerful and powerful technology always carries great risks. With CRISPR, the risks aren’t as overt as fiery mushroom clouds, but they can be every bit as deadly.

In theory, CRISPR makes it possible to cut and paste genetic material with the same ease as arranging scattered puzzle pieces. With the right materials and tools, this technology could be used to create genetic combinations in organisms that could never occur naturally or even through artificial selection.
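To make that "cut and paste" metaphor concrete, here is a deliberately simplified sketch that treats a genome as nothing more than a string of letters. The sequences and function name are entirely made up for illustration; real gene editing involves guide-RNA design, PAM sites, and cellular repair machinery, none of which is modeled here.

```python
# Toy model of "cut and paste" gene editing as string surgery.
# All sequences below are hypothetical and chosen only for demonstration.

def cut_and_paste(genome: str, target: str, insert: str) -> str:
    """Cut the genome just after the target site and splice in a new sequence."""
    site = genome.find(target)
    if site == -1:
        raise ValueError("target site not found in genome")
    cut_point = site + len(target)
    # Everything before the cut, then the insert, then everything after.
    return genome[:cut_point] + insert + genome[cut_point:]

genome = "ATGCGTACCGGA"  # hypothetical host sequence
edited = cut_and_paste(genome, target="CGTA", insert="TTTT")
print(edited)  # ATGCGTATTTTCCGGA
```

The point of the analogy is that, at the level of information, an edit really is that mechanical: find a site, cut, splice. The staggering difficulty lies in everything the toy model leaves out.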

Imagine a strain of smallpox that was lethal 100 percent of the time and just as infectious.

Imagine a strain of the flu that was as easy to spread as the common cold, but as deadly as bubonic plague.

Imagine a strain of an entirely new pathogen that is extremely lethal and completely immune to all modern medicine.

These are all possible, albeit exceedingly difficult, with genetic editing. Unlike nuclear weapons, it doesn’t require the procurement of expensive and dangerous elements. It just needs DNA, RNA, and a lab in which to produce them. It’s a scary idea, but that’s actually not the worst of it, nor is it the one that worries me most.

A doomsday bioweapon like that might be appealing to generic super-villains, but like nuclear weapons, it isn’t very strategic because it kills everyone and everything. For those with a more strategic form of blood-lust, advanced biological weapons offer advantages that set them apart from any other weapon.

Instead of a pathogen infecting everyone it comes into contact with, what if it only infected people who carry specific traits associated with a particular race or ethnic group? What if someone wanted to be even more strategic than that and craft a pathogen that attacked only one specific person?

In principle, this is possible if you can manipulate the genetics of a disease in just the right way. Granted, it’s extremely difficult, but the potential utility makes it more useful than a nuclear bomb will ever be.

Suddenly, a government or terrorist organization doesn’t need a skilled assassin on the level of James Bond to target a specific person or group. They just need the right genetic material and a working knowledge of how to program it into a synthetic pathogen. It could even be made to look like a completely different disease, ensuring it doesn’t raise any red flags.

It’s not the ultimate weapon, but it’s pretty darn close. Biological weapons with this level of refinement could potentially target entire groups of people without ever putting the attackers at risk. As a strategy, it could effectively end an entire conflict without a shot being fired. Those infected wouldn’t even know a shot had been fired if the pathogen were effectively distributed.

It’s one of those weapons that both terrorists and governments would be tempted to use. The most distressing part is they could use it in a way that’s difficult to detect, let alone counter. Even after all the death and destruction has been wrought, how do you even prove that it was a result of a bioweapon? Even if you could prove that, how would you know who made it?

These are the kinds of questions that only have disturbing answers. They’re also the reasons why I believe biological weapons are poised to become a far bigger issue in the coming years. Even if it’s unlikely they’ll wipe out the human race, they can still cause a special kind of destruction that’s almost impossible to counter.

Unlike any other weapon, though, the destruction could be targeted, undetectable, and unstoppable. Those who wield this technology would have the power to spread death with a level of precision and tact unprecedented in human history. While I believe that humanity will eventually be able to handle dangerous technology like artificial intelligence, I doubt it’ll ever be capable of handling a weapon like that.


Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and the son-in-law of Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they are noteworthy is when one of those strikes incurs heavy civilian casualties. They have also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million while a single combat drone costs about $16 million. That makes drones roughly nine times cheaper, and you don’t need to be a combat ace to fly one.
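The cited unit costs can be checked directly. Both figures are approximate public estimates rather than exact procurement numbers, so the ratio is only a rough gauge:

```python
# Rough cost comparison using the approximate unit costs cited above.
f22_cost = 150_000_000    # manned F-22 Raptor, ~$150 million
drone_cost = 16_000_000   # combat drone, ~$16 million

ratio = f22_cost / drone_cost
print(f"One F-22 buys roughly {ratio:.0f} combat drones")  # roughly 9
```

Even at a nine-to-one ratio rather than a larger one, the logistical case for drones holds: an air force can field an entire drone squadron for the price of a single manned fighter.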

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly-simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act on their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case-scenarios overlook both the strategic and practical aspect of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take a single person out, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to strike with traditional missiles in such a way that the target has no hope of defending itself.

Granted, drones this advanced probably won’t be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, then the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go beyond merely preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make such wars obsolete.
