Tag Archives: The Terminator

Terminator Zero: New Fate And Renewed Hope

This is a video from my YouTube channel, Jack’s World.

It’s been a long time since Terminator fans had a reason to get excited. But with the release of Terminator Zero on Netflix, the fate of this franchise finally has promise. The story of Sarah and John Connor is in the past. Through this series, a new future and a new war against Skynet have begun. And I couldn’t be more thrilled with the possibilities. Enjoy!

Leave a comment

Filed under futurism, television

Happy (Terminator) Judgement Day!

Today is August 29, 2024.

Officially, it’s just an ordinary day. It has no historical or legal significance of any kind.

But if you happen to be a serious movie buff, or a big fan of the Terminator franchise, it is the official canon date of Judgment Day. According to the events documented in “Terminator 2: Judgment Day,” Skynet became self-aware at 2:14 a.m., Eastern Standard Time, on August 29, 1997. Shortly afterwards, it launched a full-scale war on the whole of humanity.

While it is true that this date has fluctuated over the years due to numerous time travel anomalies and sub-par sequels, August 29, 1997, remains the best-known and most widely accepted date. And in recent years, I think that date has become even more noteworthy, and not just because it celebrates the greatest action movie of all time.

In recent years, artificial intelligence has emerged as something far greater than a standard movie trope. It is now a rapidly growing field, as well as a multi-billion-dollar industry. I’ve discussed the possibilities in the past and tried to put more recent advancements like ChatGPT into a proper context. But even I have been astonished at how quickly artificial intelligence has gone from an intriguing possibility to something that’s very real.

At this very moment, anyone with an internet connection can utilize what is essentially a limited AI for various tasks in a manner that would’ve seemed like a futuristic dream less than 10 years ago. It’s very likely that in the next 10 years, AI will advance in plenty of other ways, some of which we can’t possibly predict.

But these AI systems are nothing like Skynet. They’re very narrow in terms of their scope and capabilities. They’re not going to become self-aware or gain human-level intelligence. But make no mistake. At some point, we will have an AI that is as smart as an average human, if not smarter. This is not a technological feat that requires a huge breakthrough that defies or redefines physics. It’s largely an engineering and logistical challenge.

Whether it happens next year or several decades from now, it will happen. We will eventually create an AI that’s as capable and self-aware as Skynet. What happens after that point has long been a matter of contention and speculation.

Unless someone is stupid enough to give this future AI access to a nuclear weapons system, I don’t see it launching a full-scale nuclear war like Skynet did. Even if it wanted to, I doubt it would be given that level of access. That doesn’t mean it’ll be friendly. The first self-aware AI may very well be hostile in that it sees humanity as a threat. If it does choose to pursue conflict with us, that could result in some truly unprecedented dangers.

It’s unlikely it will manifest like the war we saw unfold in multiple Terminator movies. It’s also unlikely it’ll operate in any body that resembles Arnold Schwarzenegger. Whatever form it does ultimately take, it will still likely mark a turning point for technology and human history, as a whole.

Whether we call it Judgement Day, the technological singularity, Doomsday, or just another day that ends in Y, it’s a date we should take seriously. Because with all technology, there are risks and benefits. There are also best-case scenarios to go along with the worst case. “Terminator 2: Judgment Day” offered us a cinematic experience of how bad it can get if AI becomes hostile.

But we also shouldn’t let that obscure how good it could potentially be if utilized properly. I would even argue that, as the world continues to face unprecedented dangers and uncertainty, we need to advance AI. It might very well be our most important tool in solving critical issues having to do with climate change, medicine, ecological destruction, power generation, and surviving the eventual destruction of our planet.

As such, we should proceed carefully and remain cautious as we journey into the future. As Sarah Connor once said, there is no fate but what we make. And when it comes to developing an AI that will eventually become self-aware, let’s make that fate a good one for everyone, be that human or machine.

Leave a comment

Filed under Artificial Intelligence, ChatGPT, technology

How The Terminator Franchise Has Declined And How To Revive It

This video is a breakdown of the sad, but not surprising decline of the Terminator franchise. After the underperformance of Terminator: Dark Fate, the state of this franchise has never been lower.

However, I do not believe that the Terminator is done completely. I believe there is still a way to revive it for a new audience and a new generation. I make that case, as well. I even offer a potential scenario. Enjoy!

Leave a comment

Filed under Jack's World, movies, YouTube

Jack’s Comic Gems: Transformers Vs. Terminator

The following is a video from my YouTube channel, Jack’s World. It’s another entry in my ongoing series/playlist, Jack’s Comic Gems. This time, I highlight a recent gem called “Transformers vs. Terminator.” Enjoy!

Leave a comment

Filed under Jack's Comic Gems, YouTube

Is The Human Race Ready For Advanced Artificial Intelligence?

In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact he gave it after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of people’s genitals. Like the inventor of the ski mask, he never could’ve known how his invention would be used over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the algorithms that generate recommendations for your Netflix queue are not the kind of “advanced” intelligence I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of matching or exceeding an average human at any intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because, so far, we’re only familiar with that level of intelligence when dealing with other humans, and that intelligence is restricted to the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, such an intelligence needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests are exceedingly trivial by comparison.

It’s almost impossible, and somewhat ironic, to grasp an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though just being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half a century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.

8 Comments

Filed under Current Events, human nature, Sexy Future