Tag Archives: Skynet

How The Terminator Franchise Has Declined And How To Revive It

This video is a breakdown of the sad but unsurprising decline of the Terminator franchise. After the underperformance of Terminator: Dark Fate, the state of this franchise has never been lower.

However, I do not believe that the Terminator is done completely. I believe there is still a way to revive it for a new audience and a new generation. I make that case, as well. I even offer a potential scenario. Enjoy!

Leave a comment

Filed under Jack's World, movies, YouTube

Putting The Hype Behind ChatGPT Into Perspective

I’ve been meaning to touch on this topic for a while now. For someone who writes a lot about and makes multiple videos on the subject of artificial intelligence, it might be somewhat surprising that I haven’t talked much about ChatGPT. I promise there’s a reason for that. I don’t claim it’s a good reason, but I think it’s relevant because it has to do with perspective.

Now, I’ve been following the sudden surge in interest surrounding ChatGPT since it started making headlines. I actually became aware of it when I saw this video on YouTube from a channel called Cold Fusion. For reference, here’s the video.

From here, I started following numerous newsfeeds about ChatGPT, how it’s being used, and how people are coming to perceive it. It has been amazing to watch. I honestly can’t remember the last time a piece of software got this much hype. And the incredible pace of user growth it’s had in the past few months is nothing short of remarkable.

People have been talking about the potential for artificial intelligence for years, myself included. But we’ve never seen that potential manifest beyond a certain point. ChatGPT has changed that because it’s a real, tangible product that ordinary people can use. For an entire generation, it’s likely to be the first interaction with an artificial intelligence that can do more than your typical virtual assistant.

I don’t think it’s hyperbole to say that this technology could change the world in profound ways. It has the potential to radically alter how we work, learn, create, and do business with one another.

At the same time, it has raised a lot of concerns and not just with respect to how it might displace large segments of the job market. There’s genuine concern it’ll facilitate cheating, help scammers, or add to ongoing issues surrounding misinformation. I think those concerns are plenty warranted.

There’s already some major concern that ChatGPT is somehow the precursor to Skynet and we’re actively creating something that will eventually turn against us. Those concerns aren’t quite as warranted.

Let’s get one thing clear. ChatGPT is not an artificial intelligence on par with Skynet or any other fictional AI we’re familiar with. That’s not how it’s programmed. It can’t become Skynet any more than your cell phone can become a dishwasher. The hardware and software just aren’t there yet.

That being said, ChatGPT is a manifestation of how far artificial intelligence has come. This isn’t something that just uses algorithms to link us to new web pages. This is a system that can actually interact with people in a way that feels familiar. Talking to ChatGPT is less like doing a web search and more like talking to another person. That person just happens to be capable of culling through massive amounts of data and presenting it in a useful, legible form.

I admit that’s not a trivial difference. I also don’t doubt that entire industries and tech companies are rightly concerned about what ChatGPT could lead to, with respect to the future of the internet. But that’s where having a balanced perspective really matters.

For me, personally, I look at ChatGPT the same way I look at the first iteration of the iPhone. If you get a chance, just go back and look at old videos and news stories about the first iPhone. That too was touted as something revolutionary and world-changing. And in the grand scheme of things, it was. But looking at the specs of that first model today, it’s easy to forget how long it took for that impact to take hold.

Granted, that first iPhone was a bit overhyped, and most did not see the potential of the device at first. However, that potential was realized more and more over time as people began refining how they used it. Then, as later models came out that improved on what the first one did, it really began to have an impact.

I wouldn’t expect ChatGPT to follow the exact same path. For one, this program was developed by a non-profit research laboratory and not some multi-billion dollar tech company. The purpose, intentions, and incentives are all very different with this technology compared to that of the iPhone.

But, like all emerging technology, there will be updates and refinements. Another version of ChatGPT is already being teased. Like the second iPhone, it promises to improve and expand on the function of the first. In time, another version will come out and another one after that. Each time, the use and utility will grow. It won’t happen all at once. It might not even be noticeable at the time. But the impact will be felt in the long run.

That’s probably the most balanced perspective I can offer for ChatGPT at the moment. I don’t doubt for a second that this perspective will change with future updates and capabilities. There’s a chance ChatGPT ends up being a popular fad that simply falls out of favor because nobody can figure out how to utilize it beyond a certain point. It could become the AI equivalent of Windows Vista.

But there’s also a chance that ChatGPT could lead to some truly unprecedented growth and change in the world of artificial intelligence. It could completely upend how we interact with technology. And ultimately, it could lead to the development of a functioning artificial general intelligence capable of matching and exceeding an average human. If that does happen and ChatGPT was the catalyst for it, then it might go down as one of humanity’s most important technological developments.

At this point, I honestly don’t know how it’ll play out. And I question anyone who claims to know. Nobody truly knew how the iPhone would change the world until that change became apparent. We probably won’t know the true extent of ChatGPT’s impact until a similar change takes hold.

Who knows what the world will be like when that time comes?

While a part of me is nervous about it, I’m also genuinely curious to see where ChatGPT will lead us.

9 Comments

Filed under Artificial Intelligence, ChatGPT, technology

Jack’s Comic Gems: Terminator Sector War

This video is another entry in my ongoing “Jack’s Comic Gems” collection. And once again, I highlight an underrated mini-series based in the world of the Terminator. While many of the Terminator sequels have faltered since Terminator 2: Judgment Day, there are still some quality stories out there. And one of them manifests in Terminator: Sector War, a true gem of a comic that really gets to the heart of what makes Terminator great. Enjoy!

Leave a comment

Filed under Jack's World, superhero comics, superhero movies, YouTube

New Zealand Pushes For International Bans On Fully Autonomous Weapons (And Why Other Countries Should Follow Suit)


Whenever I discuss or follow issues surrounding artificial intelligence, it’s not long before the topic of killer robots comes up. That’s to be expected, and not just because most of us have seen “Terminator” one too many times. However, it’s no laughing matter.

At this stage in our technological development, killer robots and autonomous weapons aren’t restricted to science fiction. They already exist in many respects. We just call them drones or unmanned combat aerial vehicles. These are real things that operate in real war zones. They have killed people, including innocent civilians.

They may not look like your typical T-800, but make no mistake. They’re every bit as deadly. They don’t need to hide within the body of Arnold Schwarzenegger. They can just fly thousands of feet over your head and kill from a distance. That’s a scary thought for anyone in a war zone, but you can take some comfort in knowing that these machines still have human operators.

That could eventually change. As I’ve noted before, artificial intelligence is advancing to a point where it can match or exceed the skill of an ordinary human in specialized tasks. Even if we’re decades away from an AI that has the same cognitive capacity as an ordinary human, we already have AI systems that specialize in particular tasks, like chess.

It wouldn’t be that great a stretch to develop an advanced artificial intelligence that could specialize in flying combat drones without any human input. In principle, an artificial intelligence wouldn’t be nearly as prone to error or hesitation if human pilots were taken out of the equation.

However, that also raises some serious ethical, as well as strategic questions. If humans are out of the loop in operating these autonomous weapons systems, then what happens to how we conduct warfare? What does this mean for both sides of an armed conflict?

Ideally, an advanced AI will be better at making decisions to limit civilian casualties. That is likely the ultimate goal in researching these systems. The problem is we’re still a long way from that goal, so much so that at least one government is already trying to establish a precedent.

Fittingly, it’s a government from a country that is neither in a war zone nor anywhere near one. New Zealand, a country not known for frequent warmongering, recently pushed for a worldwide ban on autonomous weapons systems. It’s a bold and ambitious effort, but one I believe we should take seriously.

Stuff: Government to push for international ban of autonomous weapons, or killer robots

The Government will push for an international ban on fully autonomous weapons, or killer robots, that use artificial intelligence to target and kill people without any human decision-making.

New Zealand has for decades advocated for disarmament in international forums, after declaring the country a nuclear-free zone in the 1980s. Autonomous weapons are seen as a new frontier in the arms race between major military powers.

Disarmament Minister Phil Twyford on Tuesday said the Government had decided to take a “tough and uncompromising” stance on autonomous weapons, and seek a ban of fully autonomous weapons on the international stage.

Regardless of how you feel about New Zealand’s policies or intentions, this is one rare instance where politics might have to inject itself into science. Like it or not, the pace of artificial intelligence is accelerating. Few agree on how much time it’ll take to reach human level AI, but nobody denies that it’s an advancing field.

At some point in the very near future, we’ll have AI systems specifically tailored to fly combat drones with the same proficiency as a skilled Air Force pilot. That does not require human level AI. It just requires AI that can handle the various tasks associated with operating these systems.

When that time comes, it will be tempting to take flawed humans out of the loop. That means militaries with autonomous weapons systems won’t have to be as hesitant about deploying them.

We can safely assume this because there’s a big difference between pushing a button that fires a gun and ordering another human being to fire that same gun. Even if that other human is trained and obedient, they’re still human. They can still make mistakes. They can still hesitate.

For once, that’s not a bad thing. Sometimes, we need humans to hesitate in the midst of war. Were it not for hesitation, the Cuban Missile Crisis could’ve ended very differently.

If waging war is too easy for those with access to these systems, then more war will follow. More war means more death, destruction, and violence. Too much of that and lasting peace becomes much more difficult. The whole of 20th century history is proof enough of that.

Like nuclear weapons, autonomous weapons systems are a Pandora’s Box that cannot be closed once opened. We’ve already opened it partially thanks to drone warfare. For that reason, I think New Zealand has the right idea in terms of mitigating the potential danger.

Even if autonomous weapons systems become so advanced that they operate better than any human, we still need at least one human behind the decision-making process. We still need a flesh-and-blood person to pull the trigger rather than entrusting that responsibility to a machine.

We, as a species, simply cannot and should not walk this path with our technology. It’s not just about limiting the use of dangerous weapons. Once we start teaching these advanced AIs to kill, we won’t be able to unteach them. If they eventually become too good at it, then that does not bode well for humanity as a whole, regardless of which side they’re on in any war.

Leave a comment

Filed under Current Events, technology

Jack’s Comic Gems: Transformers Vs. Terminator

The following is a video from my YouTube channel, Jack’s World. It’s another entry in my ongoing series/playlist, Jack’s Comic Gems. This time, I highlight a recent gem called “Transformers vs. Terminator.” Enjoy!

Leave a comment

Filed under Jack's Comic Gems, YouTube

Why The “Terminator” Franchise Has Faltered (And How To Revive It)


Some franchises just aren’t built to last. It’s a sad fact of life. Sometimes, the things we love just cannot grow and blossom. Not every franchise can be like the Marvel Cinematic Universe. In fact, every effort to mirror the success of the MCU has either failed or come up short. Some franchises just don’t have the resources to grow to that extent. In some cases, trying to force a franchise into something it’s not will only hurt it even more.

The latest franchise to learn this the hard way is the “Terminator.” Believe me when I say I take no joy in saying that. I’ve always had a special place in my heart for all things “Terminator.” The original 1984 film was one of the first R-rated movies that my parents let me watch. I remember being scared, but thrilled at the same time. As a kid, that was a major step up from traditional Disney movies.

Then, I saw “Terminator 2: Judgment Day” and the highest of bars was set. Like the first movie, it thrilled and amazed me to no end. At the same time, it struck many emotional chords, especially at the end. I still get choked up to this day when I hear the T-800 tell John, “I know now why you cry, but it is something I can never do.” There’s a good reason why many rank this among the greatest movies of all time.

A big part of what made that movie great was how it completed the story. What began with Sarah Connor’s journey in the first film ended beautifully in the second. It was as complete a story as it could’ve been. To make a sequel after that would’ve been like trying to improve on the Mona Lisa. While the prospect of sequels still interested me, I never got the sense that they could improve on what the first two movies did.

That didn’t stop Hollywood from trying multiple times. While some of those movies had their moments, they never came close to improving on the first two. If anything, each sequel did more and more damage to the franchise. It showed in both the critical reception and the box office. Now, with “Terminator: Dark Fate” an outright flop, the state of this franchise is dire.

Some are already saying it’s dead. I don’t agree with that. It’s in critical condition. That’s for certain. However, I don’t think it’s doomed to the archives of cinematic history. I believe it’s worth taking a step back to understand why the franchise has faltered so badly. I also believe that there is a way to revive it for a new generation.

The reasons the franchise declined are many. Ask a dozen people who love the franchise as much as I do and chances are you’ll get a dozen different answers. They usually boil down to reasons like these:

The ending of “Terminator 2: Judgment Day” was too perfect and final to improve upon.

The sequels muddied and messed up the timeline more than it already was.

The sequels focused too much on action and not enough on the horror of the first movie or the drama of the second.

The sequels didn’t utilize enough of the original cast, relying heavily on the star power of Arnold Schwarzenegger.

The sequels undermined or undercut the impact of the first two movies.

The sequels were too focused on setting up a trilogy rather than making one solid movie.

The threats in the sequels were too bland and predictable, relying too much on newer Terminators fighting older Terminators.

Personally, I think every one of these reasons has merit, but some have more than others. When I re-watch “Terminator 2: Judgment Day” and compare it to the sequels, I can clearly see the difference from a cinematic and storytelling standpoint. That movie was made to complete the story that James Cameron started telling with the first. Every other sequel was made to set up more sequels.

From there, every other issue compounded. The focus of the movies was less about having a genuine impact and more about teasing a future movie. That only works if the first movie is successful and that didn’t happen with any of the sequels after “Terminator 2: Judgment Day.” They attempted to set up a larger story, but nobody cared about that story anymore.

Then, “Terminator: Dark Fate” committed the ultimate sin, in my opinion, when it effectively rendered the first story pointless for the sake of a new one. For me, that ensured that this would be the first Terminator sequel I didn’t see in the theaters. I doubt I’ll even see it when it comes out on cable. What this movie did to John Connor and the over-arching narrative of the franchise just cannot be overlooked.

It’s so bad that I won’t even bother with a spoiler warning. “Terminator: Dark Fate” kills John within the first two minutes of the movie. In one cold, callous sequence, this character who fought so hard with his mother to save the future is rendered pointless. The only difference he made is that the name of the future robot overlords changed. Instead of Skynet, they got Legion. That’s it.

Not Pictured: Anything remotely appealing.

It would be akin to having Thanos come back to life, murder the Avengers, and wipe out half the life in the universe all over again in the first movie after “Avengers: Endgame.” Everything and everyone they fought to save is rendered pointless. Then, that same movie tries to tell a story about a new savior who nobody has any attachment to and will always be defined by being John’s replacement.

There’s nothing about that story that has any appeal, either to a fan of the Terminator franchise or any franchise, for that matter. On top of that, “Terminator: Dark Fate” went heavy on mixing gender politics with the movie. That’s not just an indirect interpretation. The director, Tim Miller, flat out admitted it in interviews before the movie came out.

I don’t want to get too caught up in that aspect of the movie, but I do think it was a contributing factor to the movie’s shortcomings. We’ve seen it happen with other movies before. When a movie is too focused on ensuring its female characters pass the Bechdel Test, it rarely puts enough effort into making them likable or endearing. It can also flatten the overall plot, making it predictable.

There are many other flaws to highlight in “Terminator: Dark Fate,” as well as plenty more in the movies that came before it. Rather than belabor those, I want to focus on how this franchise rebuilds itself from here. The failures of the sequels have damaged it significantly. There’s no amount of time travel or retroactive changes that can save the story that “Terminator: Dark Fate” tried to set up.

That said, this franchise does have a few things going for it. It’s a known brand that people recognize. When most people hear the word “Terminator,” they usually understand it as a reference to the movies. Even if it’s not as strong a brand as it used to be, it still carries weight and sometimes, that’s all it needs.

The first step to rebuilding it involves ending the futile efforts to build, expand, or somehow improve on the story of Sarah and John Connor. Their story ended perfectly in “Terminator 2: Judgment Day.” Most Terminator fans agree with that and anything that would somehow undermine their legacy is only going to cause more damage.

The next step is to start a new timeline, but one that doesn’t focus on saving the future leader of the resistance or ensuring that Judgment Day occurs. That story has been done to death. For Terminator to succeed, it needs to show that it can do more. In fact, I believe “Terminator: Dark Fate” actually has one sub-plot that might be the key to the franchise’s renewal and survival.

In that movie, the Terminator that killed John, played by Arnold Schwarzenegger, secretly built a human life for itself after its mission was completed. It walked around as a human, met a woman with a son from a previous marriage, and formed a family. If the movie had any plot worthy of intrigue, it was this. Sadly, it was poorly developed and mostly rendered pointless by the end.

It’s a concept that might resonate more today than it could have in 1984. When the first Terminator movie came out, machines and robots weren’t that smart. They were defined by how inhuman, cold, and calculating they were. In recent years, that has changed. Movies like “Ex Machina” and “Wall-E” have built compelling stories about robots that have human traits, including emotions.

It’s something that the Terminator franchise has flirted with before. Part of what made the ending of “Terminator 2: Judgment Day” so dramatic and impactful was the emotional attachment that John developed for the T-800. Even the T-800 showed signs that he’d formed a bond. It made that final sacrifice feel so powerful.

Even “Terminator Genisys” explored the idea. It had another T-800 form a fatherly bond with a young Sarah Connor, so much so that she called him Pops. While the movie didn’t flesh out the concept as much as it could’ve, there were moments that highlighted the extent of this bond. I strongly believe that if this movie had emphasized this concept over making John Connor evil, it would’ve succeeded.

Rather than hint or imply it, I believe a future Terminator movie should go all in on this idea of a killing machine developing emotional attachments to humans. It’s something that is more relevant today than it was in 1984 or 1991. We already interact more intimately with technology and we’ve even given our technology a personality. I say that’s a story that the Terminator can build upon.

Imagine the following scenario.

It’s the distant future. Machines have taken over. Humanity has been all but enslaved. There are only pockets of resistance. To combat this, the central machine intelligence, Skynet, creates Terminators with the sole purpose of killing the remaining humans.

However, humans prove crafty. They outwit and outsmart the early models. In order to become better killers, new Terminators are created that can mimic, study, and process emotions. Ideally, they could infiltrate human resistance camps, earn their trust, and then terminate them. They would be the ultimate killers.

Unfortunately, there’s not enough data. Humans are too scattered, weak, and desperate. Skynet doesn’t have enough data to give these new Terminators the capabilities they need. It calculates that it would take too long and require too many resources to compile the data in the present. As a result, it decides to send a model back in time before machines took over.

The model’s mission is simple. It must integrate into human society, compile data, and preserve it on disks for Skynet to recover. If its identity as a machine is uncovered by a human, its primary protocol is to terminate that human.

The first model is sent back. It arrives in a bustling city that would one day be reduced to ruin. It finds clothes, establishes an identity, and begins integrating. However, just as it’s starting to establish itself, a human finds out it’s a machine. Its protocols are activated, but then something unexpected happens. It doesn’t terminate the human.

Instead of reacting with fear, the human becomes intrigued. They connect with the Terminator. They start to form a bond. Eventually, the Terminator’s systems for mimicking emotions turn into real emotions. It develops a love for humanity and decides to defy Skynet. That decision ripples into the future, and Skynet tries to send other Terminators back to destroy it.

As a Terminator fan, I would love to see a movie like this. It could work with a male or female Terminator. It could also work with a male or female protagonist. Like the T-800 in “Terminator: Dark Fate,” it could even become part of a family, giving it something to fight for and protect. Instead of fighting to protect a savior, the Terminator fights to change the fate of both itself and humanity.

This is just my idea, though. I’d love to hear what other Terminator fans think. I’d also love to hear how they would revitalize this franchise. I believe there is room for this franchise in the current cultural landscape. As machines and advanced artificial intelligence continue to progress, I suspect it’ll become even more relevant.

Like Sarah Connor once said, there is no fate, but what we make for ourselves. That applies to our future as a species. It also applies to this franchise.

Leave a comment

Filed under Artificial Intelligence, gender issues, media issues, movies, outrage culture, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and son-in-law of Osama bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they are noteworthy is when a strike incurs heavy civilian casualties. They have also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million, while a single combat drone costs about $16 million. That makes drones nearly ten times cheaper, and you don’t need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States military has gone on record saying it’s looking for ways to integrate advanced AI into combat drones. While officials stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly-simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act on their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case-scenarios overlook both the strategic and practical aspect of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take out a single person, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wanted to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every layer of shielding and impacts only a limited area.

If it wanted to take out something bigger, it could coordinate with other drones to hit the target with traditional missiles in such a way that it has no hope of defending itself.

Granted, drones this advanced probably won't be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it's perfected, it'll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that's a big and critical if. Like nuclear weapons, this is a technology that no country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go even further than preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make such wars obsolete.

8 Comments

Filed under Artificial Intelligence, Current Events, futurism, technology

Is The Human Race Ready For Advanced Artificial Intelligence?

In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s already too late. That advancement is either here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact that he gave it after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of people’s genitals. Like the inventor of the ski mask, he never could’ve known how his invention would evolve over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants people already use and the algorithms that generate recommendations for your Netflix queue are not the kind of “advanced” intelligence I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because, so far, we’re only familiar with that level of intelligence when dealing with other humans, and that intelligence is restricted by the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, an intelligence needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest, and that of any intelligence, to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests become exceedingly trivial by comparison.

It’s almost impossible to grasp an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact we did it with nuclear weapons, while achieving unprecedented peace over the course of half a century, gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.

8 Comments

Filed under Current Events, human nature, Sexy Future