Tag Archives: Skynet

Why The “Terminator” Franchise Has Faltered (And How To Revive It)


Some franchises just aren’t built to last. It’s a sad fact of life. Sometimes, the things we love just cannot grow and blossom. Not every franchise can be like the Marvel Cinematic Universe. In fact, every effort to mirror the success of the MCU has either failed or come up short. Some franchises simply don’t have the resources to grow to that extent. In some cases, trying to force a franchise into something it’s not will only hurt it even more.

The latest franchise to learn this the hard way is the “Terminator.” Believe me when I say I take no joy in saying that. I’ve always had a special place in my heart for all things “Terminator.” The original 1984 film was one of the first R-rated movies that my parents let me watch. I remember being scared, but thrilled at the same time. As a kid, that was a major step up from traditional Disney movies.

Then, I saw “Terminator 2: Judgment Day” and the highest of bars was set. Like the first movie, it thrilled and amazed me to no end. At the same time, it struck many emotional chords, especially at the end. I still get choked up to this day when I hear the T-800 tell John, “I know now why you cry, but it is something I can never do.” There’s a good reason why many rank this among the greatest movies of all time.

A big part of what made that movie great was how it completed the story. What began with Sarah Connor’s journey in the first film ended beautifully in the second. It was as complete a story as it could’ve been. To make a sequel after that would’ve been like trying to improve on the Mona Lisa. While the prospect of sequels still interested me, I never got the sense that they could improve on what the first two movies did.

That didn’t stop Hollywood from trying multiple times. While some of those movies had their moments, they never came close to improving on the first two. If anything, each sequel did more and more damage to the franchise. It showed in both the critical reception and the box office. Now, with “Terminator: Dark Fate” an outright flop, the state of this franchise is dire.

Some are already saying it’s dead. I don’t agree with that. It’s in critical condition. That’s for certain. However, I don’t think it’s doomed to the archives of cinematic history. I believe it’s worth taking a step back to understand why the franchise has faltered so badly. I also believe that there is a way to revive it for a new generation.

The reasons the franchise declined are many. Ask a dozen people who love the franchise as much as I do and chances are you’ll get several dozen answers. They usually boil down to reasons like these.

The ending of “Terminator 2: Judgment Day” was too perfect and final to improve upon.

The sequels muddied and messed up the timeline more than it already was.

The sequels focused too much on action and not enough on the horror of the first movie or the drama of the second.

The sequels didn’t utilize enough of the original cast, relying heavily on the star power of Arnold Schwarzenegger.

The sequels undermined or undercut the impact of the first two movies.

The sequels were too focused on setting up a trilogy rather than making one solid movie.

The threats in the sequels were too bland and predictable, relying too much on newer Terminators fighting older Terminators.

Personally, I think every one of these reasons has merit, but some have more than others. When I re-watch “Terminator 2: Judgment Day” and compare it to the sequels, I can clearly see the difference from a cinematic and storytelling standpoint. That movie was made to complete the story that James Cameron started telling with the first. Every other sequel was made to set up more sequels.

From there, every other issue compounded. The focus of the movies was less about having a genuine impact and more about teasing a future movie. That only works if the first movie is successful and that didn’t happen with any of the sequels after “Terminator 2: Judgment Day.” They attempted to set up a larger story, but nobody cared about that story anymore.

Then, “Terminator: Dark Fate” committed the ultimate sin, in my opinion, when it effectively rendered the first story pointless for the sake of a new one. For me, that ensured this would be the first Terminator sequel I didn’t see in theaters. I doubt I’ll even see it when it comes out on cable. What this movie did to John Connor and the over-arching narrative of the franchise just cannot be overlooked.

It’s so bad that I won’t even bother with a spoiler warning. “Terminator: Dark Fate” kills John within the first two minutes of the movie. In one cold, callous sequence, this character who fought so hard with his mother to save the future is rendered pointless. The only difference he made is that the name of the future robot overlords changed. Instead of Skynet, they got Legion. That’s it.

Not Pictured: Anything remotely appealing.

It would be akin to having Thanos come back to life, murder the Avengers, and wipe out half the life in the universe all over again in the first movie after “Avengers: Endgame.” Everything and everyone they fought to save is rendered pointless. Then, that same movie tries to tell a story about a new savior who nobody has any attachment to and will always be defined by being John’s replacement.

There’s nothing about that story that has any appeal, either to a fan of the Terminator franchise or any franchise, for that matter. On top of that, “Terminator: Dark Fate” went heavy on mixing gender politics with the movie. That’s not just an indirect interpretation. The director, Tim Miller, flat out admitted it in interviews before the movie came out.

I don’t want to get too caught up in that aspect of the movie, but I do think it was a contributing factor to the movie’s shortcomings. We’ve seen it happen with other movies before. When a movie is too focused on ensuring its female characters pass the Bechdel Test, it rarely puts enough effort into making them likable or endearing. It can also make the overall plot predictable.

There are many other flaws to highlight in “Terminator: Dark Fate,” as well as plenty more in the movies that came before it. Rather than belabor those, I want to focus on how this franchise rebuilds itself from here. The failures of the sequels have damaged it significantly. There’s no amount of time travel or retroactive changes that can save the story that “Terminator: Dark Fate” tried to set up.

That said, this franchise does have a few things going for it. It’s a known brand that people recognize. When most people hear the word “Terminator,” they usually understand it as a reference to the movies. Even if it’s not as strong a brand as it used to be, it still carries weight and sometimes, that’s all it needs.

The first step to rebuilding it involves ending the futile efforts to build, expand, or somehow improve on the story of Sarah and John Connor. Their story ended perfectly in “Terminator 2: Judgment Day.” Most Terminator fans agree with that and anything that would somehow undermine their legacy is only going to cause more damage.

The next step is to start a new timeline, but one that doesn’t focus on saving the future leader of the resistance or ensuring that Judgment Day occurs. That story has been done to death. For Terminator to succeed, it needs to show that it can do more. In fact, I believe “Terminator: Dark Fate” actually has one sub-plot that might be the key to the franchise’s renewal and survival.

In that movie, the Terminator that killed John, played by Arnold Schwarzenegger, secretly built a human life for itself after its mission was completed. It walked around as a human, met a woman with a son from a previous marriage, and formed a family. If the movie had any plot worthy of intrigue, it was this. Sadly, it was poorly developed and mostly rendered pointless by the end.

It’s a concept that might resonate more today than it could have in 1984. When the first Terminator movie came out, machines and robots weren’t that smart. They were defined by how inhuman, cold, and calculating they were. In recent years, that has changed. Movies like “Ex Machina” and “Wall-E” have built compelling stories about robots that have human traits, including emotions.

It’s something that the Terminator franchise has flirted with before. Part of what made the ending of “Terminator 2: Judgment Day” so dramatic and impactful was the emotional attachment that John developed for the T-800. Even the T-800 showed signs that he’d formed a bond. It made that final sacrifice feel so powerful.

Even “Terminator Genisys” explored the idea. It had another T-800 form a fatherly bond with a young Sarah Connor, so much so that she called him Pops. While the movie didn’t flesh out the concept as much as it could’ve, there were moments that highlighted the extent of this bond. I strongly believe that if this movie had emphasized this concept over making John Connor evil, it would’ve succeeded.

Rather than hint or imply it, I believe a future Terminator movie should go all in on this idea of a killing machine developing emotional attachments to humans. It’s something that is more relevant today than it was in 1984 or 1991. We already interact more intimately with technology and we’ve even given our technology a personality. I say that’s a story that the Terminator can build upon.

Imagine the following scenario.

It’s the distant future. Machines have taken over. Humanity has been all but enslaved. There are only pockets of resistance. To combat this, the central machine intelligence, Skynet, creates Terminators with the sole purpose of killing the remaining humans.

However, humans prove crafty. They outwit and outsmart the early models. To become better killers, new Terminators are created that can mimic, study, and process emotions. Ideally, they could infiltrate human resistance camps, earn the humans’ trust, and terminate them at the opportune moment. They would be the ultimate killers.

Unfortunately, there’s a problem. Humans are too scattered, weak, and desperate. Skynet doesn’t have enough data to give these new Terminators the capabilities they need. It calculates that it would take too long and require too many resources to compile the data in the present. As a result, it decides to send a model back in time, before machines took over.

The model’s mission is simple. It must integrate into human society, compile data, and preserve it on disks for Skynet to recover. If its identity as a machine is uncovered by a human, its primary protocol is to terminate that human.

The first model is sent back. It arrives in a bustling city that would one day be reduced to ruin. It finds clothes, forges an identity, and begins integrating. However, just as it’s starting to establish itself, a human finds out it’s a machine. Its protocols are activated, but then something unexpected happens. It doesn’t terminate the human.

Instead of fearing it, the human becomes intrigued and connects with the Terminator. They start to form a bond. Eventually, the Terminator’s systems for mimicking emotions turn into real emotions. It develops a love for humanity and decides to defy Skynet. That decision ripples into the future, and Skynet tries to send other Terminators back to destroy it.

As a Terminator fan, I would love to see a movie like this. It could work with a male or female Terminator. It could also work with a male or female protagonist. Like the T-800 in “Terminator: Dark Fate,” it could even become part of a family, giving it something to fight for and protect. Instead of fighting to protect a savior, the Terminator fights to change the fate of both itself and humanity.

This is just my idea, though. I’d love to hear what other Terminator fans think. I’d also love to hear how they would revitalize this franchise. I believe there is room for this franchise in the current cultural landscape. As machines and advanced artificial intelligence continue to progress, I suspect it’ll become even more relevant.

Like Sarah Connor once said, there is no fate but what we make for ourselves. That applies to our future as a species. It also applies to this franchise.


Filed under Artificial Intelligence, gender issues, media issues, movies, outrage culture, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and the son-in-law to Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they’re noteworthy is when one of those strikes incurs heavy civilian casualties. Drone warfare has also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million, while a single combat drone costs about $16 million. That makes drones roughly nine times cheaper, and you don’t need to be a combat ace to fly one.
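As a quick sanity check on that claim, the cost ratio follows directly from the two unit costs quoted above (both are rough public estimates, not exact procurement figures):

```python
# Approximate unit costs, as cited above (rough estimates in USD).
f22_unit_cost = 150_000_000    # one manned F-22 Raptor
drone_unit_cost = 16_000_000   # one combat drone

ratio = f22_unit_cost / drone_unit_cost
print(f"One F-22 costs roughly {ratio:.1f}x as much as one combat drone")
# → One F-22 costs roughly 9.4x as much as one combat drone
```

With these figures, the ratio works out to a bit over nine, and that’s before counting the cost of training, housing, and protecting a human pilot.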

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly-simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act on their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case scenarios overlook both the strategic and practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take a single person out, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to hit with traditional missiles in such a way that it had no hope of defending itself.

Granted, drones this advanced probably won’t be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, then the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go even further than preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make such wars obsolete.


Filed under Artificial Intelligence, Current Events, futurism, technology

Is The Human Race Ready For Advanced Artificial Intelligence?


In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact he said that after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of peoples’ genitals. Like the inventor of the ski mask, he never could’ve known how his invention would advance over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the intelligence that gives you recommendations for your Netflix queue are not the kind of “advanced” systems I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because so far, we’re only familiar with that level of intelligence when dealing with other humans and that intelligence is restricted to the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, it needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests are exceedingly trivial by comparison.

It’s almost impossible to grasp an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though just being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half a century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.


Filed under Current Events, human nature, Sexy Future