Tag Archives: Advanced AI

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to harm just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically about how the company handles research ethics. For a company as big and powerful as Google, that’s not a trivial complaint. This is what the report had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

That last sentence is the one worth dwelling on because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she gets fired and starts expressing her concerns publicly. The fact that Google can push out a prominent ethics researcher without facing much in terms of repercussions should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business that is obligated to keep its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI could make that many times worse.
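As a thought experiment, that profit-versus-lives problem can be made concrete with a toy objective function. Everything below is hypothetical (the actions, numbers, and names are invented for illustration); the point is only that an optimizer is exactly as ethical as the objective it is handed.

```python
# Toy sketch, not any real system: an agent is only as ethical as its objective.

def choose_action(actions, objective):
    """Pick whichever action maximizes the given objective function."""
    return max(actions, key=objective)

# Each hypothetical action has an estimated profit and an estimated harm.
actions = [
    {"name": "recall_product", "profit": -5, "harm": 0},
    {"name": "ignore_defect",  "profit": 20, "harm": 9},
    {"name": "quiet_fix",      "profit": 10, "harm": 1},
]

# A purely profit-seeking objective happily accepts the harm.
def profit_only(a):
    return a["profit"]

# A crude "ethical" objective: any harm above a threshold is unacceptable.
def constrained(a, harm_limit=2):
    return a["profit"] if a["harm"] <= harm_limit else float("-inf")

print(choose_action(actions, profit_only)["name"])  # picks "ignore_defect"
print(choose_action(actions, constrained)["name"])  # picks "quiet_fix"
```

The two objectives look almost identical in code, which is the unsettling part: the entire ethical difference lives in one constraint that somebody has to remember to write down.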

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.


Filed under Artificial Intelligence, technology

Deep Fake Technology Can Now Make Tom Cruise Iron Man: Signs And Implications

Certain technology advances slowly and steadily. It’s why we’re still waiting for a cure for the common cold. Other technological breakthroughs advance at such a fast rate it’s hard to keep up with. Anyone who doesn’t regularly upgrade their cell phone understands that.

That brings me to the technology of deep fakes. I’ve talked about them before and the implications this technology has for the entertainment industry. Well, I’m here to report that this technology might be advancing faster than I thought.

Recently, a new deep fake video hit the web. It’s nothing overly nefarious. It’s actually a play on a real story from the mid-2000s. Before Robert Downey Jr. was cast as Tony Stark in the first “Iron Man” movie, Tom Cruise was in the running for that role.

He has since claimed he was never close to getting that role, but it’s still an interesting idea. For most Marvel fans, it’s hard to imagine anyone other than RDJ donning that now-iconic armor. However, there’s no denying that Tom Cruise being Iron Man would’ve changed a franchise, as well as cinematic history.

Well, thanks to deep fake technology, we don’t have to imagine anymore. We can now see for ourselves what it would look like if Tom Cruise had been cast as Iron Man in the Marvel Cinematic Universe. See for yourself.

Watching this, I have to say it was more than a little jarring. It’s not just that seeing someone other than RDJ as Iron Man is strange. I was genuinely impressed by how real it looked.

Yes, it did become a bit obvious at times that there was some digital trickery at work. I’ve seen enough Tom Cruise movies to know what he looks like. I could tell that the body just did not match the iconic face at times.

However, I’m still impressed at just how seamless it appeared, especially when he was in the Iron Man costume. It really did look like Cruise had embraced the role as much as RDJ had. Even though the voice had to come courtesy of a skilled voice actor, the graphics technology is definitely on pace to cross the uncanny valley sooner rather than later.

The implications here are profound. If the technology is already at this point, then it’s a given that Hollywood and propaganda pushers will start embracing it sooner rather than later. Hollywood, which is reeling in the wake of a historic pandemic, may have more incentive to embrace it than most.

Beyond actors and actresses who get “cancelled” for their behavior, it may start as a cost-cutting measure. If it costs too much to put Hugh Jackman or Tom Cruise on a movie set, why not put a cheaper actor in their place and deep fake the more iconic figure over them? If the technology is that good and nobody can tell the difference, it almost makes too much sense.

It may get to a point where nobody outside the studio knows whether the figure we see on screen was actually “there” to give that moment life. They may just be a digital scan mixed with digitally generated audio, which is also advancing.

This has even larger implications with propaganda. If the technology gets to a point where we can make any public figure say or do anything we want, no matter how deplorable, then how can we trust any media image? Would “cancel culture” even be feasible at that point? If people can just claim an embarrassing moment was a deep fake, how would we know?

It’s a distressing thought, but it’s something we’ll have to account for. We may end up having to contemplate it sooner than we thought. This technology can already show us a world in which Tom Cruise was cast as Iron Man. What other worlds will it reveal?

We’ll find out soon enough.


Filed under Artificial Intelligence, futurism, media issues, superhero comics, superhero movies, technology, YouTube

Artificial Intelligence Is Learning Law: Is Government Next?

It’s inevitable. As technology advances, certain industries are going to become obsolete. That’s why the horse-and-buggy industry is incredibly limited. It’s also why companies don’t make typewriters or LaserDisc movies anymore. Once better tech becomes available, an industry either adapts or disappears. Just ask Blockbuster.

Sometimes, it’s obvious that an industry is becoming obsolete. Again, just ask Blockbuster. As soon as something better, easier, and more convenient comes along, it’s only a matter of time before it takes over. However, it’s when things aren’t quite as obvious where more dramatic changes occur.

In terms of dramatic change, few things have the potential to generate more than artificial intelligence. I’ve highlighted that many times before, but a lot of that potential depends on advances that haven’t happened yet. They’re still likely to happen at some point, which may or may not be in my lifetime. They’re just not there yet.

That said, AI doesn’t have to become advanced on the level of Skynet or HAL 9000 to impact and/or disrupt major industries. The AI technology we have now is already having an impact. It may only be a narrow form of AI, which is AI that’s focused on performing a specific task, like playing chess. Its potential is still immense and some fields are feeling it more than others.

One industry that might feel it first is law. Now, at the risk of inspiring one too many lawyer jokes, I’m going to try and keep things general here. I’m also going to try and fit in some personal experience. I know some lawyers personally. I’ve been in law offices and I’ve seen how they work. You don’t have to be that much of a visionary to understand how AI could change this industry entirely.

Recently, TechNews did a story on how artificial intelligence is learning basic legal operations and learning it quite well. Given the massive amounts of data and technicalities included in American law, a narrow AI is ideally suited to handle such tasks. However, I don’t think the piece fully grasps the implications.

TechNews: Lawyers Beware: Artificial Intelligence Is Learning Law – And Doing Frighteningly Well

AI or artificial intelligence is starting to find its footing in the legal field. The world is now on the brink of revolution in legal profession spearheaded with the extensive use of AI in the entire industry, specifically by the in-house lawyers.

Just like how email greatly changed the way people conduct their business on a daily basis, AI is also expected to become an ever-present force and an invaluable assistant to almost all lawyers.

But the million-dollar question now is, what does the future look like for AI as far as the legal industry is concerned? A much bigger question is, will AI soon replace real life lawyers?

These are not unreasonable questions. What will happen to the current legal industry if much of the legal grunt-work can be handled by an AI? What will happen to the industry when it’s no longer necessary to have a huge team of overpaid lawyers to conduct competent legal operations?

As someone who has been in his share of law offices, I can make a few educated guesses. I can easily imagine firms shrinking their office space, but expanding their operations. Most of the legal offices I’ve gone to dedicate 80 percent of their office space to storing documents and secure research material. Very little is left or necessary for the actual people doing the work.

The recent pandemic has only revealed that plenty of this work can be done from home or remotely. Some legal proceedings are even unfolding through Zoom calls, albeit with mixed results. It’s a significant step, in that it undermines and disrupts the traditional model for handling the law. It also raises a much larger question that the TechNews article didn’t ask.

Once AI learns the law, then is learning government next?

It’s a natural progression. Governments make and administer laws. An AI that specializes in the law would also have to learn government, as well. A narrow AI might be able to process the general bureaucracy of a government, but what happens when those systems become more advanced?

I’m not just talking about a scenario where an AI becomes the government, which I’ve already speculated on. An AI that has perfect expertise in both law and government operations could have many less obvious effects. Inefficiencies that often go unnoticed in a bureaucracy are suddenly harder to overlook. Inconsistencies that rarely get fixed, due to that bureaucracy, can finally be remedied.

In theory, a sufficiently advanced AI, which need not be as intelligent as a human, could do more than just document legal and government proceedings. It could formulate new laws and policies on its own. Some may seem outrageous from a basic non-lawyer human perspective, but make perfect sense within a functioning legal system or government.

It may still seem like just another tool for lawyers to stay organized, but I think it could be more than that. If an AI makes both legal and government systems more efficient, then what will that mean for those in government? Would politicians be better able to implement their agenda if they have tools like AI at their disposal? Would that necessarily be a good thing?

This is where things get both tricky and political. No matter how confident you are in your political persuasions, the party you favor will not always be in power.

It may seem like politics is trending a certain way, but those trends change quickly. People who think their party is strong now can’t imagine a time when they’ll lose that strength. It happens regularly in any democracy.

Like it or not, your party will one day be out of power. When that happens, do you want the other party having a more efficient means of implementing their policies?

I’m sure everyone’s answer to that question will vary. What no one is certain of is how we’ll keep up with ever-improving AI systems, regardless of what industry they’re in. It’s one thing for a system to make it easier to stream movies or keep track of groceries. It’s quite another when it becomes intimately involved with our laws and our government.

The TechNews article expressed some concern, but only with respect to how it affects the current law industry. I believe AI, even if it’s focused only on law, will have a far larger impact. That’s not to say that AI will render law firms and governments obsolete.

If ever there was one domain in which foresight is critical, it’s this. Some industries can and should become obsolete. Others, like how we govern our society, need a more careful approach. We simply cannot afford to let our laws and our government end up like Blockbuster.


Filed under Artificial Intelligence, Current Events, human nature, Neuralink, politics, technology

Jack’s World: Why Neuralink Might Be The Most Important Venture Of All Time

The following is a video for my YouTube channel, Jack’s World. You may recognize the title from an article I wrote years ago in the before times when pandemics were still the subject of bad sci-fi movies. I miss those times too.

The news surrounding Neuralink is still of great interest to me. I still think it’s one of the most important technological advancements of the century. This video simply offers another general overview of why this technology is so important. Enjoy!


Filed under biotechnology, futurism, health, human nature, Jack's World, Neuralink, technology, YouTube

Jack’s World: A Balanced Outlook On Artificial Intelligence

The following is a video I posted on my YouTube channel, Jack’s World. It’s my attempt to offer some perspective on artificial intelligence, a topic I’ve covered many times before. I hope you find it informative and engaging. Enjoy!


Filed under Artificial Intelligence, futurism, Jack's World, technology, YouTube

How Do We Regulate Artificial Intelligence? Seriously, How?

In general, I don’t pay much attention to doomsayers who claim the end is near and we should all cower, tremble, and give them our credit card number. Don’t get me wrong. I still believe there are serious existential threats facing humanity today. Some are worth taking seriously and dedicating resources to addressing. Others are not. Some simply require a more balanced perspective.

There’s a long list of failed apocalyptic predictions. The fact we’re surviving and thriving by most measures shows just how resilient, adaptable, and capable humanity is. There are some threats that I believe humanity will eventually resolve, thanks largely to our accelerating progress in science, technology, and logistics.

Others, however, have me more concerned. While some are more immediate than others, one in particular continues to confound me, as well as some of the smartest people in the world. It involves artificial intelligence, an emerging technology that’s as promising as it is unpredictable. Given the complexity of this technology, it’s difficult to explain in totality, but it can be best summed up by one simple question.

How do you regulate artificial intelligence?

That’s not a rhetorical question. It’s not a thought experiment either. It’s a serious, honest question that people far smarter and far more capable than me are just starting to ask.

Elon Musk is one of them. Very recently, he called for more regulation of artificial intelligence. That, alone, should be both telling and worrying. This man is a billionaire. Usually, billionaires are more inclined to advocate removing regulations. Whenever they make an exception, that’s a sign they know it’s serious.

Even though Musk is one of the top advocates for solving big problems with technology, he still has concerns about the problems associated with artificial intelligence. In AI circles, it’s often called the control problem. It’s not a very creative name, but it gets the point across.

How do you control something that is potentially as smart, if not smarter than a human?

How do you manage something that thinks, adapts, and evolves faster than any machine or living thing?

How do you regulate an artificial intelligence that was built by humans, but isn’t at all human?

These are all difficult questions to contemplate, let alone legislate. Even Musk doesn’t provide specifics. Chances are he doesn’t know any more than the rest of the non-billionaire population. That’s a problem because if we’re going to try and regulate this technology, we need to understand it. On top of that, politicians and lawmakers have a long and embarrassing history of failing to understand technology.

However, this isn’t just about writing laws that protect citizens from being exploited by tech companies. Artificial intelligence, especially the kind that exceeds human intelligence, has capabilities that go beyond sending text messages from bathroom stalls. If handled improperly, it wouldn’t just be an existential threat. It could destroy humanity in ways we literally cannot contemplate.

Now, I try to be an optimist in most things involving emerging technology. Humanity has found a way to manage dangerous technology before, namely with nuclear weapons. However, artificial intelligence is a different beast entirely. Regulating it isn’t as easy as simply controlling the materials that make it. The very concept of regulating this technology lacks precedent.

The closest we have to date is Isaac Asimov’s famous three laws of robotics, which were introduced in 1942. Asimov was a brilliant writer and very ahead of his time on some concepts, but this is one issue where we need more than just three simple tenets. We need to think bigger and bolder. If we don’t, then an advanced artificial intelligence will quickly leave us behind.

After that, it won’t matter what kind of regulations we try to pass. It’ll be smart enough to circumvent them. That doesn’t mean humanity is doomed at that point, but we’ll be distressingly vulnerable. I know it’s in our nature to procrastinate on things we don’t see as vital, but if ever there was an issue to make an exception, this is it.
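For illustration only, here is a toy sketch of what Asimov’s three laws look like when coded as a strict priority filter. Every field name here is hypothetical, and real alignment work is nothing this simple; the point is how quickly three blanket rules run out of nuance.

```python
# Toy sketch only: Asimov's Three Laws as an ordered list of vetoes.
# All field names are hypothetical; no real system works this way.

def allowed(action):
    """Apply the Three Laws as a strict priority order of vetoes."""
    if action["harms_human"]:      # First Law: never injure a human
        return False
    if action["disobeys_human"]:   # Second Law: obey orders (subordinate to First)
        return False
    if action["harms_self"]:       # Third Law: self-preservation (lowest priority)
        return False
    return True

# A dilemma the rules cannot resolve: every available action harms
# someone, so the filter vetoes them all and the robot simply freezes.
dilemma = [
    {"name": "swerve_left",  "harms_human": True, "disobeys_human": False, "harms_self": False},
    {"name": "swerve_right", "harms_human": True, "disobeys_human": False, "harms_self": False},
]
print([a["name"] for a in dilemma if allowed(a)])  # prints []
```

Twenty lines of code and the framework already fails on the first trolley-style dilemma you throw at it, which is exactly why three simple tenets won’t be enough for a system smarter than us.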


Filed under Artificial Intelligence, biotechnology, futurism, philosophy, technology, Thought Experiment

Guest Post: 5 Highly Recommended Books on Machine Learning (ML) and Artificial Intelligence (AI)


The following is an article submitted by Harsh Arora, a freelance blogger and writer who shares a similar interest in artificial intelligence, futurism, and the future of the human race. To Mr. Arora, I sincerely thank you for this submission and your continued support.


We would first of all like to welcome all types of readers to our blog – newbies who are simply curious about these buzzwords, as well as experts on the subjects who would like to extend their existing knowledge on the matter. Having established that, it is imperative not only to define these two concepts (AI and ML) but also to differentiate between them. Although the terms are often used synonymously, they are in fact different from one another. AI is the broader concept, in which we feed a machine data and then expect it to make decisions based on that data. ML, on the other hand, is a subset and application of AI in which we feed machines data and allow them to learn from it on their own.
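To make that distinction concrete, here is a minimal sketch in Python. The housing numbers and function names are invented for illustration: the first function is decision logic a human wrote by hand, while the second learns the same kind of rule from data on its own.

```python
# Hedged sketch of the AI-vs-ML distinction; all figures are made up.

# Broad "AI" sense: a human hands the machine an explicit rule.
def rule_based_price(square_feet):
    # A human wrote this pricing logic directly.
    return 100 * square_feet + 50_000

# "ML" sense: we hand the machine data and let it find the rule itself.
def learn_price_model(data):
    """Ordinary least-squares fit of price = w * sqft + b (pure stdlib)."""
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in data) / \
        sum((x - mean_x) ** 2 for x, _ in data)
    b = mean_y - w * mean_x
    return lambda sqft: w * sqft + b

# Three toy observations that happen to follow the same linear pattern.
data = [(1000, 150_000), (1500, 200_000), (2000, 250_000)]
learned = learn_price_model(data)
print(round(learned(1750)))  # prints 225000 on this toy data
```

Both functions end up making the same prediction here, but only one of them had its rule written by a person; that is the whole difference the paragraph above is describing.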

Following are the books we recommend for you to learn more about them:

Machine Learning for Absolute Beginners: A Plain English Introduction – Oliver Theobald

It’s easy to see which part of our reader base this particular book is targeted towards. You may be a complete outsider to the world of ML and still be able to understand the granular technical aspects of it through this book. Oliver Theobald assumes no background in programming or coding on the part of the reader and allows you to learn right from scratch. It is not only the perfect book for newbies but also for experts in the field, because it explains the basic concepts in a very coherent and distinct manner. This book not only helps you learn the concepts of ML but also allows you to unlearn and then relearn them, something that is really important for such a subject matter.

The Hundred-Page Machine Learning Book – Andriy Burkov

This is once again a book that will interest not only beginners but also experts in the field. Andriy Burkov has really been able to simplify the concepts of ML into a basic and easily comprehensible set of notes. With just 100 pages at his disposal, he has captured the over-arching essence of ML. Of course, it is not a deep dive into the subject matter like some of our other recommendations, but it is a wonderful summary of it. It is perfect for people who want to understand this technology and its implementations and implications in the real world.

Artificial Intelligence: A Modern Approach (3rd Edition) – Stuart Russell & Peter Norvig

Stuart Russell is one of the sharpest minds working in the field of AI and a faculty member at the University of California, Berkeley. Additionally, he is an Oxford graduate and holds a PhD from Stanford. He wrote the book in collaboration with Peter Norvig, currently the Director of Research at Google. Together, they have created a well-researched and well-defined approach to understanding modern AI. This book is perfect for students in undergraduate or graduate level courses, or even for laymen with a basic understanding of the fundamentals of AI. This long-anticipated edition of its best-selling predecessors offers the most up-to-date and comprehensive treatment of the theory and practice of artificial intelligence.

Machine Learning – Tom M. Mitchell

This is a classic book in which the author covers techniques and concepts from numerous fields and unifies them to provide an in-depth view of ML. Some of the subjects covered include reinforcement learning, inductive logic programming, and genetic algorithms. Tom Mitchell has tried to simplify these complicated concepts through a clear and explanatory style of writing. At the same time, he uses tools such as case studies and relevant examples to provide a comprehensive overview. Lastly, he assumes no prior knowledge of these complex ideas on the part of the reader.

Superintelligence – Nick Bostrom

If you are familiar with the work of Mr. Nick Bostrom, you know you are in for a treat with this book. He takes a different approach, explaining not only artificial intelligence itself but also the effects it could have on our existence. Bostrom believes that self-aware machines are potentially a bigger threat to humanity than climate change. He has authored over 200 publications, and his writing forces you to take him seriously in this seemingly sci-fi piece of literature. He helps us understand how the most intelligent form of life, currently humans, has governed the fate of existence since the dawn of our species. However, with a species (of sorts) that has the potential to be smarter than us, what chance is there that it won’t dominate us?

Artificial Intelligence for Humans (Fundamental Algorithms: 1) – Jeff Heaton

If you are planning to build a career in artificial intelligence, this should be your starting point, and you should read it from cover to cover. Jeff Heaton covers several topics in depth, such as clustering, distance metrics, dimensionality, linear regression, error calculation, and hill climbing. The book takes you through the actual mathematical calculations, which you can compute yourself and then see the real-world applications of. However, to build a career in this industry, you must understand not only the basic principles of AI but also those of algebra and computer programming. This book builds on those concepts through various computer languages such as C, Java, C#, R, and Python.

These books are some of the best on the market and will be perfect for people at all knowledge levels of AI and ML. Given that the fourth industrial revolution is upon us and almost all technology is slowly being integrated with it, it is worth learning more about. However, it is completely up to you to form opinions about whether or not this technology will be harmful to humans in the long run. Additionally, we also suggest you read up on a few other technologies that are prevalent in this era, such as IoT, blockchain, and cloud computing.

About me: Harsh Arora is a proud father of four rescued dogs and a leopard gecko. Besides being a full-time dog father, he is a freelance content writer/blogger and a massage expert who is skilled in using the best massage gun.


Filed under Artificial Intelligence, biotechnology, futurism, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and son-in-law of Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that it barely makes the news. The only time they are noteworthy is when one of those strikes incurs heavy civilian casualties. It has also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change in warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million while a single combat drone costs about $16 million. That makes drones nearly ten times cheaper, and you don’t need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly-simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act of their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots that kill without remorse or concern. However, those worst-case scenarios overlook both the strategic and practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wants to take out a single person, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to strike with traditional missiles in such a way that the target has no hope of defending itself.

Granted, drones this advanced probably won’t be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, then the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go beyond merely preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make war itself obsolete.


Filed under Artificial Intelligence, Current Events, futurism, technology

Finding True Love And Living Happily Ever After According To Isaac Arthur


I’ve talked quite a bit about the future of love on this site. I’ve explored the impact of artificial intelligence, radical life extension, and human enhancement on the dynamics of pursuing love and fostering romance. I don’t claim to be the best when it comes to contemplating these topics. I’m humble enough to admit I’m not even in the top 10.

As such, I freely admit there are people who discuss this issue in far greater detail with far greater production values than I’ll ever manage. Just recently, I stumbled across a YouTube channel by someone who makes contemplating the future a relaxing, engaging, and downright pleasant experience.

The name of that channel is Isaac Arthur. If you, or anyone you know, has even a passing interest in future technology, sci-fi concepts, or space exploration, I highly recommend you check out this channel. It covers a wide range of topics from colonizing space to future sources of energy to what kind of pets we may have in the future.

Naturally, a video he made about finding love using future technology is one of my favorites. It only came out earlier this year, but it perfectly breaks down how love, romance, marriage, and family may unfold in a future dominated by artificial intelligence and radical life extension. Mr. Arthur does in one video what I probably couldn’t do in a hundred articles. For that, I thank him.


Filed under Artificial Intelligence, futurism, romance, sex in society, sexuality, Sexy Future, technology

Neuralink Event: Updates, Insights, And (Big) Implications


It’s been a while since I’ve talked about Neuralink. There’s a good reason for that. Even though Elon Musk announced the formation of the company back in 2017, there hasn’t been much news, aside from a few cryptic teases. That didn’t stop me from proclaiming it to be one of the most important ventures of all time and I still stand by that proclamation.

Finally, on July 16, 2019, we got some news from Neuralink that attracted major press coverage. Musk, in an event that was live-streamed, offered some tantalizing details on the state of the company’s research into advanced brain implants. He didn’t announce that they’d perfected it. However, he did make clear that this is not some crazy idea he just scribbled on a cocktail napkin.

The presentation was lengthy and I encourage everyone to check it out. I’m nowhere near as smart, articulate, or successful as Elon Musk. Very few individuals on this planet are. If for no other reason, watch this event to affirm that he’s serious about merging human brains and machines.

If you don’t have time to watch the whole thing, here are a few key points:

  • The first use of this technology will be to treat brain disorders
  • The company has over $150 million and almost 100 employees
  • It has made significant strides in crafting advanced electrodes that can be implanted in a human brain
  • The current prototype utilizes chips that can process brain signals
  • The prototypes have proven functional with rats and monkeys

These insights are intriguing, but they’re not going to change the world overnight. It’s not like we’ll be able to order our own brain implants from Amazon by the end of the year. This technology is still in its infancy. We’ve only recently developed computer hardware that can communicate with brain cells. We’re a long way off from functioning on the level of the Borg.

What Musk laid out wasn’t just a vision. He offered specifics, citing where we are with the technology and who will initially benefit. It makes sense for those with brain disorders to act as the first beneficiaries. Aside from the difficulty of convincing most people to have something put into their brains, these implants could be invaluable at helping quadriplegics improve their lives.

It’s not an impossible feat, having brains interact directly with machines. It has been successfully done with monkeys. It’s just a matter of testing, refinement, and improvement. Like cell phones and Lasik surgery, which I’ve had done, the technology will improve once it has a foundation to build on.

Now, we got a glimpse of that foundation and there’s plenty of reason for excitement. While nobody can predict the future, not even someone like Elon Musk, there are some major implications for the near and distant future.

Just controlling a computer with your brain is not the endgame of this technology. Musk stated clearly in the event that the ultimate goal is to create an intimate, symbiotic relationship between humans and advanced artificial intelligence. He sees it as a necessary relationship if we’re to minimize the existential threat posed by AI.

Before we get to that goal, though, it’s almost a given that this technology will find other uses and markets. One market that wasn’t mentioned in the presentation, but could prove very lucrative, is gaming.

As much as video game technology has advanced since the early days of Nintendo, the controls haven’t evolved much. We still need a keyboard or a controller to interact with the system. As someone whose reflexes were rarely fast enough while playing Mike Tyson’s Punch Out, I can appreciate those limitations more than most.

Imagine an interface where moving a character or a cursor required only your thoughts. Suddenly, you’re no longer restricted to button sequences and analog sticks. The controls are limited only by your brain’s ability to give the necessary commands. Whether you’re playing an old Mario game or Grand Theft Auto V, you guide everything along with your thoughts.

Considering the gaming industry is a multi-billion dollar industry, the incentives for innovation are strong. If a brain interface offers novelty or advantages for gaming, then Neuralink is in a position to reap the benefits.

Those same benefits extend beyond the gaming industry. While it may take a while for an implant to process the complex controls of a video game, it might not take as long to create one that gives users more direct control of their smartphones. Some may recoil at the thought of being that connected with their smartphone, but the benefits may be too good to pass up.

I can easily imagine an interface that not only helps people type messages faster, but provides better security than passwords, fingerprints, or facial recognition. Hackers might be able to crack a password, but brain signals would pose a far more daunting challenge.

This kind of interface also opens the door to more intimate forms of communication. It’s one thing to just send texts and emails with our phones. What if we could send codes through brain implants that actually convey feelings and emotions? Instead of emojis, we could just send a coded message from one implant to another that conveys anything from happiness to shock to sarcasm.

That level of communication wouldn’t just be groundbreaking. It would change the way human beings interact. Again, it would be somewhat rudimentary at first, but with refinement, it could open entirely new channels for those who take advantage of this technology.

These are just some of the possibilities. The implications for the distant future are big, but the possibilities for the near future are just as tantalizing. Right now, the near-term goal revolves around helping brains interact with computers. At the moment, those computers are not advanced AIs. When that time comes, though, we’ll at least have a channel for that interaction.

These are exciting times, indeed. This is not just some eccentric billionaire’s crazy vision. This is really happening. There have been many technological advances that have changed our lives, but this technology may ultimately change much more than that.


Filed under Artificial Intelligence, futurism, Neuralink, Sexy Future