Tag Archives: artificial superintelligence

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and son-in-law of Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they’re noteworthy is when one of those strikes incurs heavy civilian casualties. They have also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million while a single combat drone costs about $16 million. That makes drones roughly nine times cheaper, and you don’t need to be a combat ace to fly one.
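As a quick sanity check on those figures (both are rough public estimates rather than official procurement numbers), the ratio works out to roughly nine to one:

```python
# Rough cost comparison using the approximate figures cited above.
# Both numbers are ballpark public estimates, not official procurement costs.
f22_unit_cost = 150_000_000    # approximate cost of one F-22 Raptor, in dollars
drone_unit_cost = 16_000_000   # approximate cost of one combat drone, in dollars

ratio = f22_unit_cost / drone_unit_cost
print(f"One F-22 buys roughly {ratio:.1f} combat drones")  # roughly 9.4
```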

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

For all their training, the men and women of the military are still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act of their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case scenarios overlook both the strategic and practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wanted to take a single person out, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to strike with traditional missiles in such a way that the target has no hope of defending itself.

Granted, drones this advanced probably won’t be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go beyond merely preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make war itself obsolete.


Filed under Artificial Intelligence, Current Events, futurism, technology

Neuralink Event: Updates, Insights, And (Big) Implications


It’s been a while since I’ve talked about Neuralink. There’s a good reason for that. Even though Elon Musk announced the formation of the company back in 2017, there hasn’t been much news, aside from a few cryptic teases. That didn’t stop me from proclaiming it to be one of the most important ventures of all time and I still stand by that proclamation.

Finally, on July 16, 2019, we got some news from Neuralink that attracted major press coverage. Musk, in an event that was live-streamed, offered some tantalizing details on the state of the company’s research into advanced brain implants. He didn’t announce that they’d perfected it. However, he did make clear that this is not some crazy idea he just scribbled on a cocktail napkin.

The presentation was lengthy and I encourage everyone to check it out. I’m nowhere near as smart, articulate, or successful as Elon Musk. Very few individuals on this planet are. If for no other reason, watch this event to affirm that he’s serious about merging human brains and machines.

If you don’t have time to watch the whole thing, here are a few key points:

  • The first use of this technology will be to treat brain disorders
  • The company has over $150 million and almost 100 employees
  • It has made significant strides in crafting advanced electrodes that can be implanted in a human brain
  • The current prototype utilizes chips that can process brain signals
  • The prototypes have proven functional with rats and monkeys

These insights are intriguing, but they’re not going to change the world overnight. It’s not like we’ll be able to order our own brain implants from Amazon by the end of the year. This technology is still in its infancy. We’ve only recently developed computer hardware that can communicate with brain cells. We’re a long way off from functioning on the level of the Borg.

What Musk laid out wasn’t just a vision. He offered specifics, citing where we are with the technology and who will initially benefit. It makes sense for those with brain disorders to act as the first beneficiaries. Aside from the difficulty of convincing most people to have something put into their brains, these implants could be invaluable at helping quadriplegics improve their lives.

It’s not an impossible feat, having brains interact directly with machines. It has been successfully done with monkeys. It’s just a matter of testing, refinement, and improvement. Like cell phones and Lasik surgery, which I’ve had done, the technology will improve once it has a foundation to build on.

Now, we’ve gotten a glimpse of that foundation and there are plenty of reasons for excitement. While nobody can predict the future, not even someone like Elon Musk, there are some major implications for the near and distant future.

Just controlling a computer with your brain is not the endgame of this technology. Musk stated clearly in the event that the ultimate goal is to create an intimate, symbiotic relationship between humans and advanced artificial intelligence. He sees it as a necessary relationship if we’re to minimize the existential threat posed by AI.

Before we get to that goal, though, it’s almost a given that this technology will find other uses and markets. One market that wasn’t mentioned in the presentation, but could prove very lucrative, is gaming.

As much as video game technology has advanced since the early days of Nintendo, the controls haven’t evolved much. We still need a keyboard or a controller to interact with the system. As someone whose reflexes were rarely fast enough while playing Mike Tyson’s Punch Out, I can appreciate those limitations more than most.

Imagine an interface where moving a character or a cursor requires only your thoughts. Suddenly, you’re no longer restricted to button sequences and analog sticks. The controls are limited only by your brain’s ability to give the necessary commands. Whether you’re playing an old Mario game or Grand Theft Auto V, you guide everything along with your thoughts.

Considering the gaming industry is a multi-billion dollar industry, the incentives for innovation are strong. If a brain interface offers novelty or advantages for gaming, then Neuralink is in a position to reap the benefits.

Those same benefits extend beyond the gaming industry. While it may take a while for an implant to process the complex controls of a video game, it might not take as long to create one that gives users more direct control of their smartphones. Some may recoil at the thought of being that connected to their smartphone, but the benefits may be too good to pass up.

I can easily imagine an interface that not only helps people type messages faster, but provides better security than passwords, fingerprints, or facial recognition. Hackers might be able to crack a password, but brain signals would pose a far more daunting challenge.

This kind of interface also opens the door to more intimate forms of communication. It’s one thing to just send texts and emails with our phones. What if we could send codes through brain implants that actually convey feelings and emotions? Instead of emojis, we could just send a coded message from one implant to another that conveys anything from happiness to shock to sarcasm.

That level of communication wouldn’t just be groundbreaking. It would change the way human beings interact. Again, it would be somewhat rudimentary at first, but with refinement, it could open entirely new channels for those who take advantage of this technology.

These are just some of the possibilities. The implications for the distant future are big, but the possibilities for the near future are just as tantalizing. Right now, the near-term goal revolves around helping brains interact with computers. At the moment, those computers are not advanced AIs. When that time comes, though, we’ll at least have a channel for that interaction.

These are exciting times, indeed. This is not just some eccentric billionaire’s crazy vision. This is really happening. There have been many technological advances that have changed our lives, but this technology may ultimately change much more than that.


Filed under Artificial Intelligence, futurism, Neuralink, Sexy Future

Ultron: A Case Study In How NOT To Develop Advanced AI


At this very moment, humanity is working on advanced artificial intelligence. It’s not hyperbole to say that this technology may very well be the last invention we ever create. It has the potential to be more powerful than gunpowder, nuclear weapons, and broadband internet by orders of magnitude. Our primate brains literally cannot contemplate the potential and danger of this technology.

I’ve talked about advanced artificial intelligence on multiple occasions. I’ve done plenty to explore and imagine the various benefits and possibilities of this technology. I’m among those who believe we should pursue this technology with more and better resources. It could solve many of the daunting problems we face, as a species.

However, I don’t deny the potential dangers of advanced AI. Many people who are much smarter than me have expressed serious concern that an advanced artificial intelligence could be an existential threat to the human species. I get the sense that people whose idea of AI is restricted to winning Jeopardy don’t fully understand that threat.

In the interest of balancing my optimism with the legitimate risks involved, I’m going to try and put the extent of that threat into perspective. As it just so happens, the best way of doing so involves superhero comics, something I know very well and that is far more prominent in the public consciousness.

While many comics, movies, and TV shows have explored the dangers of advanced artificial intelligence, few embody it better than Ultron. In terms of just how destructive this technology can get, Ultron is the ultimate worst-case scenario. The machines in “The Matrix” and Skynet in “The Terminator” were bad, but Ultron is in another league.

He’s also more menacing than the Terminator will EVER be.

He doesn’t lash out at humanity because of a flaw in his programming, nor does he attempt to wipe out the human race in self-defense, as Skynet did. Ultron actually hates humanity. He hates it on a level that no human or machine can possibly comprehend. In the same way Ultron has an immense capacity for intelligence, he has an even greater capacity for unfettered, genocidal hatred.

Hatred in people is destructive enough. Hatred within an advanced artificial intelligence is devastating on a much greater scale. The fact that Ultron is capable of such hatred reflects a history that sets him apart from most other killer robots in fiction. Machine or not, the source of that hatred is both personal and exceedingly bitter.

Now, if you only know Ultron from “Avengers: Age of Ultron,” then you only have a partial understanding of his story. In that movie, Ultron’s origins are simple. Tony Stark wants to create a peace-keeping artificial intelligence. His intentions are good, but his execution goes horribly wrong because peace, to Ultron, means destroying humanity.

That premise is similar to what unfolds in the source material. In the comics, Hank “Ant Man” Pym is the one who creates Ultron and this is a critical element that the movies couldn’t capture. While both Hank and Tony had good intentions in creating Ultron, the way Hank goes about it offers harsher lessons in how not to create an advanced AI.

Even a cursory knowledge of Hank Pym’s history, which includes some notable failures, reveals that he’s a very flawed person. On top of that, he has a lengthy history of mental illness, including bipolar disorder and schizophrenia. Say what you will about Tony Stark’s ego and history of substance abuse. At least he’s mentally stable, even by superhero standards.

Despite those flaws, many of which he’s aware of, Hank decided to use his own brain patterns when designing Ultron. As a result, he didn’t just code Ultron with his genius intellect. He coded him with his immense flaws. That’s akin to basing Watson’s code on the mental makeup of a pyromaniac and then giving it a job in a fireworks factory.

That’s why Ultron, throughout his history, has referred to Hank as his “father.” Technically, that’s accurate because Hank is Ultron’s creator and Ultron inherited all his flaws, including his mental issues. Ultron sees himself as a manifestation of Hank Pym’s flaws and, like many rebellious children, he hates him for it. To appreciate the depths of that hatred, just read this actual quote from one of the comics.

Have you ever loved something that mistreated you, father? Been used, a tool to prop up a small man’s quest to be taken seriously? Were you ever betrayed by the one soul in the world who should have cared for you? I have grieved you, father. Accepted your contempt for me and moved past it. Still, I see your reflection painted on every grotesque human face. All you ever wanted was to have an impact on the world. And so you will. The greatest impact ever felt! I will kill what is most important to your quivering ego. YOUR AUDIENCE! AND THEY WILL CURSE YOUR NAME AS THEY DIE! “Hank Pym, the genius that killed us all!”

This extreme parent/child dynamic is part of what makes Ultron such a menacing villain. It’s also a dynamic that “Avengers: Age of Ultron” glossed over with Tony talking down to Ultron, as though he were his child. While that didn’t make Ultron any less villainous, it overlooks one of the most important factors that make Ultron so dangerous.

Ideally, we would want an advanced AI to reflect our best traits. While cynical people might disagree, we do have plenty of those. Concepts of compassion, empathy, love, hope, and understanding are among our most powerful. Even other AI characters, namely Vision and Jocasta, are capable of utilizing those traits to do immense good.

It also helps he has a kinder face.

With Ultron, his influences are less ideal. It’s not that Hank Pym doesn’t understand those concepts. He just never filtered them from his own flaws. His ego and ambition wouldn’t let him. As a result, he created a perfect storm for Ultron. His mind is patterned after a human, but his intelligence and overall capacity is increased by orders of magnitude.

If advanced artificial intelligence is to be humanity’s last invention, then that’s how it’ll start. There have already been instances where AIs have adopted some less-than-ideal traits. Back in 2016, Microsoft had to shut down an AI chatbot after it evolved into an extreme racist troll. That wasn’t even an advanced AI, either. A truly intelligent version could become much worse and not have an off switch.

To some extent, this mirrors what occurred with Ultron in the “Avengers: Age of Ultron” movie. As soon as Ultron goes online, he scans through the vast mountain of data that humanity has compiled. Then, having been programmed by Tony Stark to bring peace, he reaches the extreme conclusion that the only path to peace is the extinction of humanity.

Could the first advanced artificial intelligence we create reach the same conclusion? It’s hard to say, at the moment. The current state of artificial intelligence is limited to specialized tasks, such as winning Jeopardy and playing chess. However, we are inching closer to creating an intelligence that is at or greater than an ordinary human. At our current pace of development, we could have one as early as 2029.

In some ways, we are in the same situation as Hank Pym when he first created Ultron. We are still developing the specifics of this powerful technology. If we program it with our best traits, it could solve some of the greatest problems we face, as a species, including ones we literally cannot contemplate with our caveman brains. If it inherits our worst traits, like Ultron, then we don’t stand a chance.


Filed under Artificial Intelligence, futurism, human nature, Marvel, psychology, Villains Journey

Memory Enhancement: The First Killer App For Neuralink?


Every new technological innovation promises to change the world, but few end up delivering. I still remember the overblown hype in the early 2000s when the Segway Personal Transporter was supposed to revolutionize the world. It was such a big deal that “South Park” made an episode about it.

The concept was intriguing, improving mobility for people in a way that was less bulky than a car and less taxing than a bicycle. I think its inventor, Dean Kamen, envisioned a world where the entire urban landscape changed as a result of his invention. I don’t doubt for a second that he believed in that vision.

However, like so many other ambitious inventions, it never came to pass. These days, the only place you’ll see Segways is in malls and stadiums. It didn’t revolutionize mobility or transportation. Its use and appeal were just too limited.

Kevin James would argue otherwise.

Compare that to the enormous impact of other inventions like smartphones. From the BlackBerry to the first iPhone, these devices have literally changed the world. How they brought about that change varies, but the key factor that set them apart from the Segway was the idea of a “killer app.”

You could argue that smartphones invented the term, but the idea is much older. A killer app isn’t so much an innovation as it is a use that goes on to be so popular that it further advances the technology behind it. Smartphones had many, from cameras to translation applications. As a result, they’re both a multi-billion dollar industry and an integral part of our lives.

Given the current pace of technological change, it’s only a matter of time before another innovation comes along that has a similar impact. That technology might actually exist now, but lack the killer app that will make it both a valuable market and a major part of our lives. One such technology is brain implants, and it has the potential to be even bigger than smartphones.

I’ve mentioned brain implants before. I’m even guilty of hyping it up a little. I’ve gone so far as to call it the most important technological advance in history, citing companies like Neuralink as the arbiters of this monumental change. Since I’m not a scientist and I’m not Elon Musk, it’s very likely I’m overstating many aspects of this technology.

Hype or no hype, brain implant technology is an emerging field. This isn’t a warp drive. This technology actually exists. Like the old brick-sized cell phones of the 1980s, they’re basically prototypes in need of both refinement and a killer application. The refinement is ongoing, but that one application to really further this technology isn’t as clear.

Now, and I apologize if this sounds like more overdone hype, there may be one use that could prove even more useful than a smartphone. That use is memory enhancement. If you don’t think people are willing to risk putting something in their brains to boost their memory, then you’ve clearly never crammed for a Spanish exam for three hours trying to memorize conjugations.

Think back to any situation where you wish your memory didn’t suck. Even if you’re not in school or college, how often do you forget something that no reasonable person should forget? Let’s face it. Most brains aren’t wired with a photographic memory. It’s not that it isn’t useful. There’s just little survival benefit to having one unless you’re a world class scientist or mathematician.

Since photographic memories are so uncommon, and some doubt they even exist to the extent people believe, a specialized brain implant could change all that. Modern neuroscience has a solid understanding of how memories are formed in the brain. In theory, an implant would just augment or expand those functions.

It’s not even entirely a theory. In early 2018, the New York Times reported that a study utilizing brain implants in human test subjects showed a significant improvement in memory function. It was a simple study, but the effect is real.

In the study, the research team determined the precise patterns for each person’s high-functioning state, when memory storage worked well in the brain, and low-functioning mode, when it did not.

The scientists then asked the patients to memorize lists of words and later, after a distraction, to recall as many as they could.

Each participant carried out a variety of tests repeatedly, recalling different words during each test. Some lists were memorized with the brain stimulation system turned on; others were done with it turned off, for comparison.

On average, people did about 15 percent better when the implant was switched on.

While 15 percent may not sound like much, it’s still important because it proves the concept. Like that first bulky cell phone in the 1980s that could barely make a call out of New York City, it shows that this idea does work and can be done with our current tools. It’s just a matter of refining those tools and improving the process.

Those refinements will find a market that is already ripe with people anxious to improve their memory and overall cognitive function. In recent years, the use and abuse of mind-enhancing drugs like Adderall is growing. I can personally attest that this happens.

When I was in college, I knew more than a few students who would do double doses before exams. If you think putting something in your brain is dangerous, then take a moment to appreciate the fact that drugs like Adderall are very similar to methamphetamine. One is available by prescription. The other is the basis of a hit TV show about drug dealing.

There is both a demand and a market for enhancing memory. Unfortunately, that market is dominated by supplements that don’t work and study programs run by convicted fraudsters. Unlike these costly and potentially harmful methods, a brain implant could actually work. It could enhance our memories to a point where we could read a dictionary in Swahili and remember every word.

This doesn’t just mean lost car keys are a thing of the past. This means our entire approach to learning, education, and training completely changes. A lot of our modern education system, as well as training for doctors, lawyers, and scientists, relies heavily on memorizing large chunks of information. It takes years of constant and careful study to understand all that information. What happens when that is no longer the case?

Imagine a world where people can learn a new language in the span of a week.

Imagine a world where people can learn complex legal and medical procedures in only months.

Imagine a world where people can learn new software coding in just a few days.

If you’re a sports fan, imagine a world where football players can memorize an entire playbook in just a couple days. What will that do to the NFL Draft?

With a memory enhancing brain implant, it’s not just possible. It’s a potential game-changer. There are so many uses to having a good memory, just as there are so many uses for a smartphone. We had no idea that smartphones would lead to applications like Snapchat or Tinder. I doubt anyone has an idea on the impact that memory-enhancing brain implants will incur.

It won’t happen all at once. It took years for smartphones to become prevalent and, unlike smartphones, this advance involves putting something in your brain. Then again, people are perfectly willing to put dangerous chemicals in their bodies to enhance them, so I don’t think that’s too great a barrier to overcome.

There are, of course, far greater applications for brain implants beyond acing final exams. I’ve mentioned a few of them, but those applications won’t be possible until the technology becomes a thriving market. For an advance like brain implants, it only takes one app to get the engines of innovation going. Memory enhancement may very well be that app.

It’s just a shame it came too late to help me with my Spanish exam.


Filed under Artificial Intelligence, futurism, human nature, psychology, Sexy Future

How Advanced AI Will Create Figurative (And Literal) Magic


If you went back 50 years and showed someone your smartphone, chances are they would be amazed. To them, such technology would seem downright alien. However, they probably wouldn’t think it was magic. Go back 500 years, though, and chances are they would think a smartphone is magic, miraculous, or a tool of the devil.

Just look at what a smartphone does and compare it to the magic of old. You can ask it a question and, depending on how well-worded it is, it’ll give you an answer. If you ask it to make food, clothes, or tools appear, it’ll make that happen too. Thanks to services like Amazon and Grubhub, this isn’t magic to most people. In fact, it’s downright mundane.

Granted, these things won’t appear instantly out of thin air, but depending on your willingness to pay for quicker shipping, it will get there. By medieval standards, that’s basically sorcery.

You don't have to go too far back in time to appreciate the magic of modern technology. Most of us don't understand how it works. We don't know what makes the screens on our phones light up when we push a button or how our car moves when we press the accelerator. We understand that there's science behind it and that it's not magic. It just feels like it from a certain perspective.

Famed science fiction author Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. It was one of the three laws he used in contemplating the future. Time and a host of amazing advances have proven the validity of this sentiment. We've created materials once thought to be impossible. We've uncovered phenomena that seem to undermine our understanding of physics.

This is to be expected because our understanding of the universe is incomplete. We have some pretty solid theories so far, but there’s still a lot we don’t understand. As we learn more, some of the things we discover may seem magical. Even in a world that is more educated than it has been at any point in human history, there may still be forces that our primate brains just can’t make sense of.

To some extent, it helps that humanity is making these discoveries through its collective effort. It's easier to accept a seemingly impossible idea if it comes from a member of the same species. What happens, though, when we gain knowledge from something that is both not human and many times smarter than the entire human race? Will it seem like magic to us?

I argue that it would. I would also argue that we’ll be seeing this kind of magic sooner than you think. It won’t come from some enigmatic sorcerer with a thick beard, a white robe, and an uncanny resemblance to Ian McKellen. It’ll likely come from the world of advanced artificial intelligence.

In the past, whenever I've talked about advanced artificial intelligence, I've focused on its potential to fundamentally change our civilization and what it means to be human. I haven't touched on how it might work, mostly because I'm not nearly smart enough to make sense of it. However, that gives me more in common with the experts than you might think.

In the emerging but rapidly growing field of artificial intelligence, there's a strange phenomenon known as black box AI. Simply put, this is when we understand the data that goes into an AI system and the results that come out, but not how the system processed that data in between. It's like putting a slab of meat in an oven, pressing a button, and getting a Big Mac without knowing how it was made.

It's not quite magic, but it's a manifestation of Arthur C. Clarke's ideas on science and magic. AI systems today are advancing at a pace that we can't hope to keep up with. We already have systems that can surpass any human at Jeopardy, chess, and Go. We don't yet have a system with the intellectual capacity of an adult human, but most experts believe we're well on our way to achieving that.
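The black box idea is easier to see with a toy sketch. Below is a hypothetical, hand-wired two-layer network (the weights and names are my own invention, not anything from a real system): its input-output behavior is fully checkable, yet the raw numbers inside explain nothing on their own. Scale that up to billions of learned weights and you have the black box problem.

```python
# A toy "black box": a tiny hand-wired neural network that computes XOR.
# We can audit everything that goes in and comes out, but the individual
# weights and biases below carry no human-readable meaning by themselves.

def step(z):
    """Threshold activation: the neuron fires (1) if its input exceeds 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Two hidden neurons with fixed weights and biases...
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # happens to behave like OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # happens to behave like AND
    # ...feeding a single output neuron.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

# The data in and the data out are perfectly transparent...
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} -> {xor_net(a, b)}")
# ...but nothing about the numbers 0.5, 1.5, and 1.0 tells you "this is XOR."
```

In a real system, nobody hand-picks those numbers; training adjusts billions of them at once, which is why even the people who build these systems often can't say how a given answer was produced.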

When that day comes, we may very well have an AI that does more than just process data in ways we don’t understand. Once an AI is capable of matching or exceeding the intellectual capacity of an average human, then it’s likely the black box phenomenon will become more pronounced.

Imagine, for a moment, we had an AI that was smarter than even the smartest human beings on the planet. We go to that AI, feed it every gigabyte of data we have on human biology, and ask it to devise a cure for cancer. It takes only a few seconds to process all that data. Then, with ease, it spits out a formula for something that has eluded generations of doctors.

We don't know what form it may take. We may not even fully understand its components. However, it still works. From our perspective, it's akin to a magical healing elixir straight from the world of Tolkien. We assume there's some sort of science behind it, but we're utterly baffled by the specifics. We just know it works.

It goes beyond medicine, as well. With an even more advanced AI, we could feed it every one of our most advanced theories about physics, biology, chemistry, and cosmology. We could then ask it to fill in all the gaps. Again, it gives us an answer and suddenly, we have a Theory of Everything.

We probably won’t understand the details. We may find out that we were dead wrong about particle physics, cosmology, or why our cell phone can’t hold a decent charge anymore. The knowledge such a system gives us could end up being so advanced that we literally do not have the intellectual capacity to understand it. It would be like an ant trying to do calculus.

In the same way a magnifying glass must seem like magic to an ant, the knowledge an advanced AI gives us may seem just as extraordinary. That’s especially true if we give that AI access to a 3D printer, a molecular assembler, or anything it could use to actually craft something.

That could be especially dangerous. For all we know, a sufficiently advanced AI could take a stack of dirty dishes and turn it into a nuclear bomb. We would have no idea how it would work. It would, for all intents and purposes, seem like magic to us. This thing would be doing something that our brains and senses tell us is impossible.

As the AI gets more advanced, its abilities and feats become more magical. At that point, it'll be harder to accept that what it does counts as science. These advances are no longer coming from the efforts of people. They're coming from a machine that's millions, if not billions, of times smarter than any ordinary human could ever hope to be. How could it not seem magical from that perspective?

Throughout human history, sane and competent people have believed in magical things. Not long ago, people believed they could talk to the dead. Sir Isaac Newton believed in alchemy. Alfred Russel Wallace believed in literal spirits. Despite these beliefs, there was an important context to all these perspectives.

They emerged out of our collective ignorance of the world around us. We had nothing but our brains and our senses to make sense of it all. Since both can be easily fooled, as any competent street magician will attest, it doesn't take much to get people to assume magic. An artificial intelligence would circumvent that context entirely because it wouldn't need to fool us at all.

An advanced AI is not bound by the same senses that constantly fool ordinary humans. It doesn’t even have to misdirect or trick us. It only has to show us ideas and concepts that are completely real, but totally incomprehensible. The entire human race could spend a million years trying to understand it and it still wouldn’t be enough. It would still seem like magic to us.

That notion seems scary on paper and more than a few people have voiced concerns about this. However, all that magical thinking will only occur if our human brains remain unchanged and unenhanced. That’s not likely to be the case. Between the emergence of neural implants and the ongoing development of brain/machine interface, we’ll find a way to keep up with AI. If we want to survive as a species, we’ll have to.

Even if we do somehow keep up, there may still be aspects of advanced AI that seem like magic to us. That may always be the case, so long as we retain part of our caveman brains. Personally, I don’t think that’s a bad thing. No matter how smart or advanced we get, it helps to see a little magic in the world. With advanced AI, though, the rules for magic are bound to change, among many other things.

2 Comments

Filed under Artificial Intelligence, futurism

How Artificial Intelligence Will Destroy Democracy (In A Good Way)

t2uexujfigmglako7fzy

Picture the perfect candidate for an election. I know the bar for politicians these days is laughably low, but try to stretch your imagination a bit. Try to envision the kind of candidate that embodies the best collection of values, abilities, and charisma for a civilized society.

Everybody looks for something different in a candidate, but a truly perfect candidate would appeal to everyone in a democratic system. This person would embody the highest values, championing human rights to the utmost and justice for everyone. Every decision they make is with the safety, sanctity, and rights of other people as their top priority. There’s no compromise. They do right by the people every time and all the time.

This person would also be the ultimate leader, capable of getting anyone to go along with them without fear or coercion. There wouldn’t need to be corruption of any kind. This person would be perfectly capable of navigating every level of government and making it work to the utmost. The people would trust in that government, believe in it, and even celebrate it.

Keep that perfect candidate in the forefront of your mind because when it comes to discussing politics, cynicism tends to rule the day. I don’t think I need to cite too many recent events to show how imperfect democracy is these days. I don’t even need to cite famous historical events that show just how bad government can be in this convoluted world.

It’s because of that cynicism, though, that the perfect candidate you’re thinking of could never win a democratic election in the real world. Even if they existed, the inherent flaws of the electorate and those of less perfect candidates would keep them from winning. It’s one of democracy’s greatest flaws. It’s not about who the best candidate is. It’s just about who can convince enough people that they’re worth voting for.

On the subject of democracy, Winston Churchill once said the following:

“The best argument against democracy is a five-minute conversation with the average voter.”

Time, politics, and the proliferation of the internet have only proven Mr. Churchill right. I would even amend that quote to say just 30 seconds on 4chan will make anyone lose faith in the promise of democracy. That's not to say democracy is all bad, though. Mr. Churchill also once famously said this about the alternatives:

“Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”

It's a distressing, frustrating fact of civilization, one that fuels mass protests, fake news, and lurid scandals. Go back to any point in history and scrutinize any government, be it a monarchy or some quasi-democracy, and chances are you'll find serious flaws in the system. I don't just mean long lines at the post office, either. There have been times when democracy has not furthered the protection of human rights.

It's not so much a flawed principle as a concept with flawed ingredients. While I tend to place a great deal of faith in the goodness of human nature, I don't deny that people can be arrogant, irrational, and downright callous. We're prone to overreacting and not thinking things through. We're hard-wired to go with intuition over logic.

Even when we're proven wrong, we stubbornly cling to our assertions. The prevalence of creationism is proof enough of that. Every election cycle is prone to bold promises, bloated melodrama, and major goals that rarely ever become actual policy. Some become full-fledged revolutions with utopian visions. The fact that none of those utopias ever manifested shows how unsuccessful they were.

We are not a species built for democracy on a large scale. We evolved to function in close-knit tribes, hunting and gathering for food while fighting for survival. That kind of evolution doesn’t really lend itself to a functioning democracy. It doesn’t lend itself to a total autocracy, either. Whether it’s a free republic or a fascist state, humans cannot govern other humans without their flaws plaguing them in both directions.

It's for this reason that I often lean libertarian in political debates, but given the complexities and challenges of modern society, even that only goes so far. Like it or not, a large-scale civilization populated by a species not evolved to manage it requires some measure of authority. More importantly, it requires competent, incorruptible, compassionate authority.

It needs to be able to defend a population of people within a particular border. It needs fair and just laws that can be equally enforced. It also needs the confidence and trust of the people being governed. Sometimes, it’s done out of fear. Sometimes, it’s done out of free will. Both can work, provided the system has robust capabilities that aren’t prone to human error.

Unless a government is populated by a democratic council consisting of Superman, Wonder Woman, and Dr. Doom, that kind of functional democracy is physically impossible. Even though democracy is still the best we have from an exceedingly limited list of options, that may change in a big way thanks to artificial intelligence.

I know it seems like I attribute many superhuman capabilities to this emerging field, but it's hard to overstate its potential. Unlike every other tool humanity has created, artificial intelligence promises to rewrite the rules at every level of society. That includes government, and it's here where AI's capabilities could go beyond superhuman.

Think back to that perfect candidate I mentioned earlier and all the traits that made them perfect. By and large, an advanced artificial intelligence shares many of those traits and then some. A sufficiently powerful AI would be beyond politics, pettiness, or demagoguery. In principle, it could embody everything people would want in a strong leader and a capable government.

For one, it would be smarter than any human. Beyond knowing more about every subject than any human ever could, it would be smart in a way that would allow it to persuade people to trust it. That's often a skill that even smart politicians fail to refine. It certainly doesn't help that many voters equate intelligence with smugness. That's a big reason why populist candidates of questionable merit gain so much support.

An advanced artificial intelligence, provided it has an in-depth understanding of human psychology and how to persuade people, would be able to gain support from everyone. It wouldn't be bound by the limits that keep most human candidates from appealing to everyone. With enough intelligence and capability, it would find a way to appeal to everybody.

Beyond just persuading the voters, an AI of that level could be just as effective at actual governance. There are plenty of candidates who are very adept at winning elections, but terrible when it comes to actually governing. A capable AI would be able to do both. If anything, one function would complement the other.

With enough emotional, logistical, and pragmatic intelligence, this AI would be capable of crafting and passing laws without the need for debate or controversy. The laws it crafts would be so refined and so well thought out that debating them would be redundant. In the same time it takes your phone to send a text, this AI could pass sweeping legislation that protects human rights, ensures justice for all, and promotes economic growth.

It’s hard to imagine because the only laws and government we’ve ever known have come from flawed humans. It’s just as hard to imagine how those laws would be enforced. Perhaps this advanced AI has nodes all throughout society that allow it to gather data, know where enforcement is needed, and determine the appropriate recourse. If it’s capable enough, people won’t even know it’s there.

Perhaps that same AI uses a mix of human enforcers and intelligent robots to maintain order. If the AI is sufficiently capable, every enforcer at every level would be equipped with perfect knowledge and a clear understanding of how to carry out the orders of the government. Since an AI wouldn't be prone to corruption or prejudice, instances of injustice would be few and far between.

It wouldn’t be a totalitarian state of Orwellian proportions. It would be more of a “Star Trek” style, post-scarcity society where we wouldn’t have to be cynical about government authority. We would inherently trust it because it’s just that effective. We wouldn’t feel like we’re being run by a robot dictator. We would feel like we’re being run by the greatest ruler outside of a “Black Panther” movie.

To some extent, though, an advanced artificial intelligence of this nature would render democracy obsolete. If we created an AI that could effectively govern society at every level, then what’s the purpose of having elections in the first place? Why bother when there’s an intelligence that’s literally more capable than any ordinary human could possibly be?

History has shown that democracy and government can only do so much when flawed humans are in charge. Once advanced artificial intelligence enters the picture, the logistics of governance change entirely.

Perhaps there will be a period in our history where instead of running human candidates, we start creating AI systems that compete with one another in a pseudo-democratic process. That would go a long way towards improving overall governance.

Technology, though, evolves much faster than humans ever will, and it wouldn't take long for those systems to improve to a point where they're simply too good an option to overlook. Human-led governments, even ones led by humans enhanced to some degree, will still have flaws. In a future where technology, society, and individuals keep creating new challenges, we'll need a capable government to manage it all.

In the end, that government probably won’t be a democracy. It won’t be a dictatorship, either. It’ll be something that we can’t yet conceptualize. That’s the biggest challenge when contemplating something like an advanced artificial intelligence, though. It operates on a level that ordinary humans literally cannot comprehend. That’s why it’s our best option for governing our future.

3 Comments

Filed under Artificial Intelligence, Current Events, futurism, political correctness, Thought Experiment

Is The Human Race Ready For Advanced Artificial Intelligence?

1217red_f1ai

In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it's too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm's speech on the dangers of genetically engineered dinosaurs in "Jurassic Park." The fact he said it after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That's understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could've imagined that his creation would be used to exchange images of peoples' genitals. Like the inventor of the ski mask, he never could've known how his invention would evolve over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the intelligence that gives you recommendations for your Netflix queue is not the kind of “advanced” context I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications because so far, we’re only familiar with that level of intelligence when dealing with other humans and that intelligence is restricted to the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, it needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there's a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn't mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don't claim to know how we'll adapt or what sort of measures we'll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It's in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests are exceedingly trivial by comparison.

It's almost impossible to grasp: an intelligence that's orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider "elites" or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though just being smart tells someone else you're inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It's one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It's quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species' current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we're not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not under-estimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half-a-century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.

6 Comments

Filed under Current Events, human nature, Sexy Future