Tag Archives: Weak AI

Self-Driving Cars Are Already Saving Drunk Drivers: The Promise And The Implications


According to the Centers for Disease Control and Prevention, 10,497 people died in drunk-driving crashes in 2016 alone. That accounted for 28 percent of all traffic-related deaths in the United States. A significant share of those deaths were children. Even if you’re not good at math, you know that is not a trivial figure.

There’s also a good chance you know someone who has been hurt or worse because of a drunk driver. This sort of thing is personal for me because one of my cousins was killed by a drunk driver many years ago. He wasn’t even drinking. He was just unlucky enough to be in the back seat of the car at the time.

It’s an issue that has existed for as long as cars. It’s also an issue that policy makers and car manufacturers have tried to address through awareness programs and safety features. However, these measures can only do so much. So long as human beings are drinking and driving cars, this will be an issue.

That dynamic will likely change considerably when self-driving cars enter the picture. To some extent, they’re already making an impact. You can buy a car today that has some measure of self-driving features. They’re still not fully autonomous, but we’ve taken the first critical steps. From here on out, it’s just a matter of refinement.

Even though it might be years before self-driving cars are common, they’re already making an impact and it’s not just in terms of sheer novelty. Very recently, a Tesla Model S, which has an autopilot feature, did something cars of old could never do.

It saved a drunk driver who passed out behind the wheel, likely preventing serious injuries to him and everyone around him. Here are the details, according to the site Teslarati.

Teslarati: Tesla Autopilot prevents drunk driver from making a fatal mistake

As explained by the Eastern Police District on its official Twitter account, a 24-year-old Tesla owner ended up passing out while driving his Model S. Fortunately for the driver, the vehicle’s Autopilot system was activated, which allowed the Model S to stay in its lane without causing trouble to other drivers.

Upon detecting that its driver was unresponsive, the vehicle eventually came to a stop and engaged its hazards. The man was later attended to by emergency services. No one was injured in the incident.

The police noted that the Tesla driver, who was found unconscious in the Model S, was evidently drunk, though he denied that he was driving. Video evidence showing the Tesla owner passed out in the driver’s seat have been shared online, however. The police stated that necessary tests have been taken, and that the Tesla owner’s driver’s license has been temporarily suspended. A case has also been filed against the driver.

Such an incident could have easily been a lot worse. It is very easy for drunk drivers to harm themselves, after all, but what’s even worse is that they could very easily harm other people just as easily. These scenarios would likely not be as prevalent if vehicles are capable of safely stopping on their own once their human drivers are incapacitated.

That last paragraph captures the most relevant details. Without these features, this incident could’ve played out like so many other drunk driving tragedies. A drunk driver passing out behind the wheel would’ve, at the very least, sent the car off the road and into a crash, resulting in significant injury. At worst, the driver could’ve hit another car, compounding the tragedy.

However, thanks to these emerging systems, that didn’t happen. The safeguards in the car worked. The only real harm done involved a hangover and a DUI. Compared to the alternative, that’s far preferable.
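The behavior in this incident, staying in lane, then slowing to a stop and engaging the hazards once the driver stops responding, amounts to a simple escalation sequence. Here is a minimal sketch of that kind of logic in Python. The thresholds and step names are my own invented assumptions for illustration, not Tesla's actual Autopilot implementation.

```python
# Hypothetical sketch of an unresponsive-driver escalation sequence.
# The thresholds and step names are invented assumptions,
# not Tesla's actual Autopilot logic.

def escalation_steps(seconds_without_input: int) -> list[str]:
    """Return the actions a car might take as a driver stays unresponsive."""
    steps = []
    if seconds_without_input >= 10:
        steps.append("visual warning")           # flash a reminder on the display
    if seconds_without_input >= 30:
        steps.append("audible alarm")            # escalate to a chime
    if seconds_without_input >= 60:
        steps.append("slow down in lane")        # begin a controlled deceleration
    if seconds_without_input >= 90:
        steps.append("stop and engage hazards")  # what the Model S reportedly did
    return steps

# A driver who never responds eventually triggers the full sequence:
print(escalation_steps(95))
```

The point of the escalating checks is that a momentary lapse only triggers a warning, while sustained unresponsiveness ends in a safe stop.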

We should not understate the importance of this development. Think back to that 10,497 figure from 2016. Thanks to the autopilot system in that Tesla, the figure for 2021 will be at least one less. It doesn’t eliminate the tragedy of drunk driving all at once, but it’s a start and an important one, at that.

Driving is inherently dangerous, but a lot of that danger comes from the people behind the wheel and not the machines themselves. Anything operated by a human is prone to human error. An autonomous system, even if it isn’t a full-fledged artificial intelligence, can and will mitigate those errors.

That’s not to say those same autopilot systems aren’t prone to error. They certainly are, but remember that this technology is still very new. The first cell phones couldn’t send an email or reliably play streaming video. That took time, energy, and better hardware.

At this very moment, car companies and tech companies are putting in all that work. There is a lot of potential profit in refining this technology. However, I would point out that you can’t put a price on human life and, as it stands, thousands will continue to die every year because of traffic accidents, especially drunk driving. This one incident might not seem like much in the grand scheme of things, but it’s still one tragedy averted, one accident prevented, and at least one life saved. For anyone who knows the pain of losing a loved one to drunk driving, that’s worth celebrating.


Filed under Artificial Intelligence, futurism, robots, technology, Uplifting Stories

Would You Shop At A Store Run Entirely By Robots?


Recall the last time you went to the store. It doesn’t matter if it was your corner grocery store or some big box department store. All that matters is you went there to do some basic shopping, as we all end up having to do at some point. With that in mind, try and remember how many store clerks you saw.

Maybe some were working at cash registers.

Maybe some were stocking shelves.

Maybe some were sweeping floors or cleaning up messes.

The chances are you saw at least several. I remember seeing at least three the last time I went to a grocery store. That’s fairly typical. I know I used to see more before the days of self check-out lines, but I always saw people working at these stores, diligently doing the things necessary to keep it running.

For most of us, that’s a mundane sight. For every store we go to, we expect there to be human beings working there to keep it going. It’s part of the infrastructure that keeps these stores stocked. On top of that, seeing other human beings contributing gives us a sense of comfort in that this place is being run by real people with real faces.

Now, try and imagine a store that has no people working at it. You walk in the door and you never see another human being carrying out the various operations we expect of a functioning store. All that is now done by machines and robots. They’re the ones who stock the shelves, handle your money, and clean the messes.

Does that change the experience?

Does that make you more or less inclined to shop at that store?

These are relevant questions because, as I’ve noted before, robots and artificial intelligence are advancing rapidly. Automation is an ongoing trend that promises to have major economic ramifications. Some of those ramifications are already here. It’s one of the reasons coal mining jobs will never be as prevalent as they once were.

Other ramifications haven’t arrived yet, but they will eventually come. The technology is there. The incentives are there. It’s just a matter of investing, refinement, and scale. Eventually, it will reach retail work, a sector that employs nearly 10 million people. That will have a major economic impact for large swaths of people.

Unlike other forms of automation, though, it’ll be a lot more visible.

Most of us never set foot in a factory where cars are made, much of which is done by robots. Most will never set foot in an Amazon or Walmart warehouse, which already use robots at a significant scale. The sheer amount of work done by robots these days is simply not visible to most ordinary people.

That will not be the case with stores and retail work. Like I said, we all have to get out and shop every now and then. Even though online retail has become more prevalent, people still go to traditional brick and mortar stores. Even as online retail improves, that’s not likely to change.

However, how much will that experience change once robots start doing the jobs that humans have done for centuries?

How will that change the experience?

Will you, as a consumer, shop at a store that has no humans working there most of the time?

If you think this is still far off, think again. Below is a video from an AI channel on YouTube that shows a robot using a bar code scanner for the first time. The process is a bit cumbersome, but the robot is able to handle it. It is able to receive and follow instructions. Given how robots improve and refine their programming, it’s not unreasonable to assume that future robots will be able to carry out retail tasks more efficiently than any human worker.

It may not happen all at once. You probably won’t just walk into a store one day and notice that everyone was replaced by a robot. Like self check-out, it’ll likely happen gradually. Once it gets to a certain point, though, it’ll become mainstream very quickly. The incentives are just too strong.

You don’t need to be an economist to see those incentives. Robots don’t need to be paid. They don’t slack off on the job. They don’t get sick or tired. In theory, they could keep a store open 24/7 without ever paying overtime. For big box retailers like Walmart, the potential profits are just too large to ignore.
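Those incentives are easy to put rough numbers on. Here is a back-of-the-envelope sketch in Python; every figure in it, the robot's price, the wage, the hours, is an invented assumption for illustration, not real retail data.

```python
# Back-of-the-envelope comparison of staffing costs vs. a one-time robot
# purchase. All figures here are illustrative assumptions, not real
# retail-industry data.

HOURS_PER_YEAR = 24 * 365  # a robot could, in theory, work every hour

def years_to_break_even(robot_cost: float, hourly_wage: float,
                        hours_replaced_per_year: float) -> float:
    """Years until the robot's purchase price equals the wages it replaces."""
    return robot_cost / (hourly_wage * hours_replaced_per_year)

# e.g. a hypothetical $250,000 robot replacing $15/hour labor around the clock
print(years_to_break_even(250_000, 15, HOURS_PER_YEAR))
```

Under those made-up numbers, the machine pays for itself in under two years, which is exactly the kind of math a big box retailer would find hard to ignore.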

It won’t stop at stores, either. Restaurants will likely undergo a similar process. There are already working robots that can cook meals from scratch. Once they get refined and scaled, then it’s also likely you’ll one day eat at a restaurant entirely run by robots.

Would you be willing to eat at such a place?

Your answer will probably be similar to the one you gave to the question I asked earlier, about whether you’d shop at a store run entirely by robots. Personally, I don’t think I’m ready to shop at a place that has no humans working in it, if only because robots sometimes break down. However, within my lifetime, it may get to a point where stores and restaurants run by humans become the exception rather than the norm.

Are we ready for that future?

I don’t know, but it’ll come whether we’re ready for it or not.


Filed under futurism, robots, technology

An Artificial Intelligence That Can Debate: The Promise And The Perils

Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.

Yes, I understand it’s hard to say that with a straight face these days.

Yes, I’ve been to comments sections, Reddit, and 4chan.

Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.

In general, people are curious. Even when we cling to cherished beliefs and refuse to change our minds when confronted with evidence, we’re still curious about things that are different. That curiosity isn’t always honest, but it’s a powerful feeling. Even if you despise those on the other side of a debate, a part of you will still wonder why they hold the opinions they do.

That’s why debate is such a powerful tool. Humans are such a social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact we keep doing it is proof, in my opinion, that there’s still value in the process.

Regardless of how you feel about the merit and value of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, just created an artificial intelligence that can debate at the same level as a skilled human debater.

Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.

TechXplore: IBM’s AI debating system able to compete with expert human debaters

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents. Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.

Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence systems I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, no more so than Watson or AlphaGo. This system is very much a narrow AI, meaning it’s built to excel at a specific task.

We have AI’s that can beat world class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications that go beyond simply outclassing the best human debaters in the world at the moment. In fact, this is one form of AI that might not need human-level intelligence to incur a major impact.

Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed you are, it tends to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming more angry and unreasonable, which can have major real-world consequences.

However, what would happen if every debate on any issue included someone who was better than the best debater on the planet? Even if the two people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.

That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.

An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.

For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She was not someone whose mind seemed likely to change.

Then, she got into a discussion with someone on Twitter of all places. That person began a conversation. It started as a nasty debate, but it evolved into something that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.

It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people. Not everyone is equally skilled. In fact, I would argue most people, myself included, are woefully unskilled at debating anything.

Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.

What side would this system be on?

Moreover, who decides what position these AI systems take?

If no one decides, then how does the AI determine which side of a debate it takes?

These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That can be used for good when countering dangerous disinformation, like that spread by the anti-vaxx crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.

I can easily imagine political organizations getting a hold of these systems and trying to use them to benefit their agenda. I can also see some trying to use them to spam message boards, social media, and Reddit threads to derail certain movements or arguments.

That’s a powerful tool, and we can’t be certain that those using it will do so responsibly. That’s the dilemma with all technology. It can be used for good and for evil. With technology like artificial intelligence, the stakes are a lot higher, as are the perils. This system may not be as dangerous as a more advanced AI, but it could be a precursor to just how disruptive this technology can be.


Filed under Artificial Intelligence, futurism, technology

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still narrow. It’s not dumb, but it is limited. It can do certain tasks very well, like play chess or win Jeopardy. It can’t apply that intelligence at a broader level the way a human can.

They’re still useful, though, and they’re still advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comments sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes these AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore ran an article on it and, while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radić’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.
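To see how this kind of false positive can happen, consider a toy moderation filter that flags comments purely by matching phrases on a blocklist. Real moderation systems use trained classifiers rather than a literal blocklist, but the failure mode, words flagged without context, is similar. The phrases and example comment below are my own invention.

```python
# Toy illustration of how a context-blind moderation filter can misfire
# on chess commentary. The blocklist and examples are invented; real
# systems use trained classifiers, but the failure mode is analogous.

FLAGGED_PHRASES = {"black vs. white", "attack on black", "white threatens"}

def naive_flag(comment: str) -> bool:
    """Flag a comment if it contains any phrase on the blocklist."""
    text = comment.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# A harmless chess observation trips the filter:
print(naive_flag("Great game! Black vs. White endgames are always tense."))
```

The filter has no way of knowing the comment is about game pieces, which is essentially what the CMU researchers suspect happened to the chess channel.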

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.


Filed under Artificial Intelligence, technology, YouTube

Artificial Intelligence Is Learning Law: Is Government Next?

It’s inevitable. As technology advances, certain industries become obsolete. That’s why the horse-and-buggy industry is incredibly limited. It’s also why companies don’t make typewriters or LaserDisc movies anymore. Once better tech becomes available, an industry either adapts or disappears. Just ask Blockbuster.

Sometimes, it’s obvious that an industry is becoming obsolete. Again, just ask Blockbuster. As soon as something better, easier, and more convenient comes along, it’s only a matter of time before it takes over. However, it’s when things aren’t quite as obvious where more dramatic changes occur.

In terms of dramatic change, few things have the potential to generate more than artificial intelligence. I’ve highlighted that many times before, but a lot of that potential depends on advances that haven’t happened yet. They’re still likely to happen at some point, which may or may not be in my lifetime. They’re just not there yet.

That said, AI doesn’t have to become advanced on the level of Skynet or HAL 9000 to impact and disrupt major industries. The AI technology we have now is already having an impact. It may only be a narrow form of AI, one that’s focused on performing a specific task, like playing chess, but its potential is still immense and some fields are feeling it more than others.

One industry that might feel it first is law. Now, at the risk of inspiring one too many lawyer jokes, I’m going to try and keep things general here. I’m also going to try and fit in some personal experience. I know some lawyers personally. I’ve been in law offices and I’ve seen how they work. You don’t have to be that much of a visionary to understand how AI could change this industry entirely.

Recently, TechNews did a story on how artificial intelligence is learning basic legal operations and learning it quite well. Given the massive amounts of data and technicalities included in American law, a narrow AI is ideally suited to handle such tasks. However, I don’t think the piece fully grasps the implications.

TechNews: Lawyers Beware: Artificial Intelligence Is Learning Law – And Doing Frighteningly Well

AI or artificial intelligence is starting to find its footing in the legal field. The world is now on the brink of revolution in legal profession spearheaded with the extensive use of AI in the entire industry, specifically by the in-house lawyers.

Just like how email greatly changed the way people conduct their business on a daily basis, AI is also expected to become an ever-present force and an invaluable assistant to almost all lawyers.

But the million-dollar question now is, what does the future look like for AI as far as the legal industry is concerned? A much bigger question is, will AI soon replace real life lawyers?

These are not unreasonable questions. What will happen to the current legal industry if much of the legal grunt-work can be handled by an AI? What will happen to the industry when it’s no longer necessary to have a huge team of overpaid lawyers to conduct competent legal operations?

As someone who has been in his share of law offices, I can make a few educated guesses. I can easily imagine firms shrinking their office space, but expanding their operations. Most of the legal offices I’ve gone to dedicate 80 percent of their office space to storing documents and secure research material. Very little is left or necessary for the actual people doing the work.

The recent pandemic has only confirmed that plenty of this work can be done from home or remotely. Some legal proceedings are even unfolding through Zoom calls, albeit with mixed results. It’s a shift that undermines and disrupts the traditional model for handling the law. It also raises a much larger question that the TechNews article didn’t ask.

Once AI learns the law, then is learning government next?

It’s a natural progression. Governments make and administer laws. An AI that specializes in the law would have to learn government, too. A narrow AI might be able to process the general bureaucracy of a government, but what happens when those systems become more advanced?

I’m not just talking about a scenario where an AI becomes the government, which I’ve already speculated on. An AI that has perfect expertise in both law and government operations could have many less obvious effects. Inefficiencies that often go unnoticed in a bureaucracy are suddenly harder to overlook. Inconsistencies that rarely get fixed, due to that bureaucracy, can finally be remedied.

In theory, a sufficiently advanced AI, which need not be as intelligent as a human, could do more than just document legal and government proceedings. It could formulate new laws and policies on its own. Some may seem outrageous from a basic non-lawyer human perspective, but make perfect sense within a functioning legal system or government.
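As a toy illustration of what spotting those inconsistencies could look like, the sketch below scans a set of rules and reports subjects where two statutes disagree on the same value. The rule format, statute names, and numbers are entirely invented for illustration; a real system would work over actual statutory text, not tidy key-value pairs.

```python
# Hypothetical sketch of how a narrow legal AI might surface
# inconsistencies: group rules by subject and report any subject where
# two statutes set different values. All data here is invented.

def find_conflicts(rules: dict[str, list[tuple[str, int]]]) -> list[str]:
    """Return subjects where at least two statutes disagree on a value."""
    conflicts = []
    for subject, entries in rules.items():
        values = {value for _statute, value in entries}
        if len(values) > 1:          # more than one distinct value = conflict
            conflicts.append(subject)
    return conflicts

rules = {
    "filing deadline (days)": [("Statute A", 30), ("Statute B", 30)],
    "notice period (days)":   [("Statute C", 14), ("Statute D", 21)],
}
print(find_conflicts(rules))
```

Scaled up to thousands of statutes and regulations, this kind of exhaustive cross-checking is precisely the tedious work a bureaucracy rarely gets around to and a machine never tires of.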

It may still seem like just another tool for lawyers to stay organized, but I think it could be more than that. If an AI makes both legal and government systems more efficient, then what will that mean for those in government? Would politicians be better able to implement their agenda if they have tools like AI at their disposal? Would that necessarily be a good thing?

This is where things get both tricky and political. No matter how confident you are in your political persuasions, the party you favor will not always be in power.

It may seem like politics is trending a certain way, but those trends change quickly. People who think their party is strong now can’t imagine a time when they’ll lose that strength. It happens regularly in any democracy.

Like it or not, your party will one day be out of power. When that happens, do you want the other party having a more efficient means of implementing their policies?

I’m sure everyone’s answer to that question will vary. What no one is certain of is how we’ll keep up with ever-improving AI systems, regardless of what industry they’re in. It’s one thing for a system to make it easier to stream movies or keep track of groceries. It’s quite another when it becomes intimately involved with our laws and our government.

The TechNews article expressed some concern, but only with respect to how it affects the current law industry. I believe AI, even if it’s focused only on law, will have a far larger impact. That’s not to say that AI will render law firms and governments obsolete.

If ever there was one domain in which foresight is critical, it’s this. Some industries can and should become obsolete. Others, like how we govern our society, need a more careful approach. We simply cannot afford to let our laws and our government end up like Blockbuster.


Filed under Artificial Intelligence, Current Events, human nature, Neuralink, politics, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and son-in-law of Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that it barely makes the news. The only time they are noteworthy is when one of those strikes incurs heavy civilian casualties. It has also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change in warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren't limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million while a single combat drone costs about $16 million. That makes drones nearly ten times cheaper, and you don't need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn't act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

For all the training these men and women in the military receive, they're still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn't pure speculation. The United States military has gone on record saying it's looking for ways to integrate advanced AI into combat drones. While officials stopped short of confirming they're working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly simplistic way, they basically confirmed they're working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They're programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There's no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act of their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case scenarios overlook both the strategic and practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take a single person out, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to hit the target with traditional missiles in such a way that it has no hope of defending itself.

Granted, drones this advanced probably won't be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it's perfected, it'll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that's a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It's a worst-case scenario, but one we've managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go even further than preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make war itself obsolete.

Filed under Artificial Intelligence, Current Events, futurism, technology