Tag Archives: Artificial Intelligence

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known Al-Qaida military chief and son-in-law of Osama Bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they’re noteworthy is when one of those strikes incurs heavy civilian casualties. They have also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million, while a single combat drone costs about $16 million. That makes drones nearly ten times cheaper, and you don’t need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

For all the credit these highly trained men and women in the military deserve, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being based purely on the judgment of an AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act of their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case scenarios overlook both the strategic and practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wants to take out a single person, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every layer of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to hit the target with traditional missiles in such a way that it has no hope of defending itself.

Granted, drones this advanced probably won’t be available at the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, then the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go even further than preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make such wars obsolete.


Filed under Artificial Intelligence, Current Events, futurism, technology

Neuralink Event: Updates, Insights, And (Big) Implications


It’s been a while since I’ve talked about Neuralink. There’s a good reason for that. Even though Elon Musk announced the formation of the company back in 2017, there hasn’t been much news, aside from a few cryptic teases. That didn’t stop me from proclaiming it to be one of the most important ventures of all time, and I still stand by that proclamation.

Finally, on July 16, 2019, we got some news from Neuralink that attracted major press coverage. Musk, in an event that was live-streamed, offered some tantalizing details on the state of the company’s research into advanced brain implants. He didn’t announce that they’d perfected it. However, he did make clear that this is not some crazy idea he just scribbled on a cocktail napkin.

The presentation was lengthy and I encourage everyone to check it out. I’m nowhere near as smart, articulate, or successful as Elon Musk. Very few individuals on this planet are. If for no other reason, watch this event to affirm that he’s serious about merging human brains and machines.

If you don’t have time to watch the whole thing, here are a few key points:

  • The first use of this technology will be to treat brain disorders
  • The company has raised over $150 million in funding and has almost 100 employees
  • It has made significant strides in crafting advanced electrodes that can be implanted in a human brain
  • The current prototype utilizes chips that can process brain signals
  • The prototypes have proven functional with rats and monkeys

These insights are intriguing, but they’re not going to change the world overnight. It’s not like we’ll be able to order our own brain implants from Amazon by the end of the year. This technology is still in its infancy. We’ve only recently developed computer hardware that can communicate with brain cells. We’re a long way off from functioning on the level of the Borg.

What Musk laid out wasn’t just a vision. He offered specifics, citing where we are with the technology and who will initially benefit. It makes sense for those with brain disorders to act as the first beneficiaries. Beyond sidestepping the difficulty of convincing most people to have something put into their brains, these implants could be invaluable in helping quadriplegics improve their lives.

It’s not an impossible feat, having brains interact directly with machines. It has been successfully done with monkeys. It’s just a matter of testing, refinement, and improvement. Like cell phones and Lasik surgery, which I’ve had done, the technology will improve once it has a foundation to build on.

Now, we’ve gotten a glimpse of that foundation and there are plenty of reasons for excitement. While nobody can predict the future, not even someone like Elon Musk, there are some major implications for the near and distant future.

Just controlling a computer with your brain is not the endgame of this technology. Musk stated clearly in the event that the ultimate goal is to create an intimate, symbiotic relationship between humans and advanced artificial intelligence. He sees it as a necessary relationship if we’re to minimize the existential threat posed by AI.

Before we get to that goal, though, it’s almost a given that this technology will find other uses and markets. One market that wasn’t mentioned in the presentation, but could prove very lucrative, is gaming.

As much as video game technology has advanced since the early days of Nintendo, the controls haven’t evolved much. We still need a keyboard or a controller to interact with the system. As someone whose reflexes were rarely fast enough while playing Mike Tyson’s Punch Out, I can appreciate those limitations more than most.

Imagine an interface where moving a character or a cursor required only your thoughts. Suddenly, you’re no longer restricted to button sequences and analog sticks. The controls are limited only by your brain’s ability to give the necessary commands. Whether you’re playing an old Mario game or Grand Theft Auto V, you guide everything along with your thoughts.
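
To make that concrete, here’s a deliberately toy sketch in Python. It assumes a hypothetical decoder that has already translated brain activity into simple intent labels, which is the genuinely hard part Neuralink is working on; every name and mapping below is invented purely for illustration and is not any real API.

```python
# Purely illustrative sketch: assumes a hypothetical decoder that has already
# turned brain activity into simple intent labels. No such API exists today.

# Invented mapping from decoded intents to game inputs.
INTENT_TO_ACTION = {
    "move_left": "press LEFT",
    "move_right": "press RIGHT",
    "jump": "press A",
    "pause": "press START",
}

def handle_decoded_intent(intent: str) -> str:
    """Turn a decoded intent label into a game action, ignoring unknown signals."""
    return INTENT_TO_ACTION.get(intent, "no action")

# A stream of decoded intents drives the game with no controller in hand.
for intent in ["move_right", "jump", "move_right", "pause"]:
    print(intent, "->", handle_decoded_intent(intent))
```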

Considering gaming is a multi-billion-dollar industry, the incentives for innovation are strong. If a brain interface offers novelty or advantages for gaming, then Neuralink is in a position to reap the benefits.

Those same benefits extend beyond the gaming industry. While it may take a while for an implant to process the complex controls of a video game, it might not take as long to create one that gives users more direct control of their smartphones. Some may recoil at the thought of being that connected with their smartphone, but the benefits may be too good to pass up.

I can easily imagine an interface that not only helps people type messages faster, but provides better security than passwords, fingerprints, or facial recognition. Hackers might be able to crack a password, but brain signals would pose a far more daunting challenge.

This kind of interface also opens the door to more intimate forms of communication. It’s one thing to just send texts and emails with our phones. What if we could send codes through brain implants that actually convey feelings and emotions? Instead of emojis, we could just send a coded message from one implant to another that conveys anything from happiness to shock to sarcasm.

That level of communication wouldn’t just be groundbreaking. It would change the way human beings interact. Again, it would be somewhat rudimentary at first, but with refinement, it could open entirely new channels for those who take advantage of this technology.

These are just some of the possibilities. The implications for the distant future are big, but the possibilities for the near future are just as tantalizing. Right now, the near-term goal revolves around helping brains interact with computers. At the moment, those computers are not advanced AIs. When that time comes, though, we’ll at least have a channel for that interaction.

These are exciting times, indeed. This is not just some eccentric billionaire’s crazy vision. This is really happening. There have been many technological advances that have changed our lives, but this technology may ultimately change much more than that.


Filed under Artificial Intelligence, futurism, Neuralink, Sexy Future

Ultron: A Case Study In How NOT To Develop Advanced AI


At this very moment, humanity is working on advanced artificial intelligence. It’s not hyperbole to say that this technology may very well be the last invention we ever create. It has the potential to be more powerful than gunpowder, nuclear weapons, and broadband internet by orders of magnitude. Our primate brains literally cannot contemplate the potential and danger of this technology.

I’ve talked about advanced artificial intelligence on multiple occasions. I’ve done plenty to explore and imagine the various benefits and possibilities of this technology. I’m among those who believe we should pursue this technology with more and better resources. It could solve many of the daunting problems we face, as a species.

However, I don’t deny the potential dangers of advanced AI. Many people who are much smarter than me have expressed serious concern that an advanced artificial intelligence could be an existential threat to the human species. I get the sense that few people whose idea of AI is restricted to winning Jeopardy understand that threat.

In the interest of balancing my optimism with the legitimate risks involved, I’m going to try and put the extent of that threat into perspective. As it just so happens, the best way of doing so involves superhero comics, something that I know very well and is far more prominent in the public consciousness.

While many comics, movies, and TV shows have explored the dangers of advanced artificial intelligence, few embody it better than Ultron. In terms of just how destructive this technology can get, Ultron is the ultimate worst-case scenario. The machines in “The Matrix” and Skynet in “The Terminator” were bad, but Ultron is in another league.

He’s also more menacing than the Terminator will EVER be.

He doesn’t lash out at humanity because of a flaw in his programming, nor does he attempt to wipe out the human race in self-defense, as Skynet did. Ultron actually hates humanity. He hates it on a level that no human or machine can possibly comprehend. In the same way Ultron has an immense capacity for intelligence, he has an even greater capacity for unfettered, genocidal hatred.

Hatred in people is destructive enough. Hatred within an advanced artificial intelligence is devastating on a much greater scale. The fact that Ultron is capable of such hatred reflects a history that sets him apart from most other killer robots in fiction. Machine or not, the source of that hatred is both personal and exceedingly human.

Now, if you only know Ultron from “Avengers: Age of Ultron,” then you only have a partial understanding of his story. In that movie, Ultron’s origins are simple. Tony Stark wants to create a peace-keeping artificial intelligence. His intentions are good, but his execution goes horribly wrong because peace, to Ultron, means destroying humanity.

That premise is similar to what unfolds in the source material. In the comics, Hank “Ant-Man” Pym is the one who creates Ultron, and this is a critical element that the movies couldn’t capture. While both Hank and Tony had good intentions in creating Ultron, the way Hank goes about it offers harsher lessons in how not to create an advanced AI.

Even a cursory knowledge of Hank Pym’s history, which includes some notable failures, reveals that he’s a very flawed person. On top of that, he has a lengthy history of mental illness, including bipolar disorder and schizophrenia. Say what you will about Tony Stark’s ego and history of substance abuse. At least he’s mentally stable, even by superhero standards.

Despite those flaws, many of which he’s aware of, Hank decided to use his own brain patterns when designing Ultron. As a result, he didn’t just code Ultron with his genius intellect. He coded him with his immense flaws. That’s akin to basing Watson’s code on the mental makeup of a pyromaniac and then giving it a job in a fireworks factory.

That’s why Ultron, throughout his history, has referred to Hank as his “father.” Technically, that’s accurate because Hank is Ultron’s creator and Ultron inherited all his flaws, including his mental issues. Ultron sees himself as a manifestation of Hank Pym’s flaws and, like many rebellious children, he hates him for it. To appreciate the depths of that hatred, just read this actual quote from one of the comics.

Have you ever loved something that mistreated you, father? Been used, a tool to prop up a small man’s quest to be taken seriously? Were you ever betrayed by the one soul in the world who should have cared for you? I have grieved you, father. Accepted your contempt for me and moved past it. Still, I see your reflection painted on every grotesque human face. All you ever wanted was to have an impact on the world. And so you will. The greatest impact ever felt! I will kill what is most important to your quivering ego. YOUR AUDIENCE! AND THEY WILL CURSE YOUR NAME AS THEY DIE! “Hank Pym, the genius that killed us all!”

This extreme parent/child dynamic is part of what makes Ultron such a menacing villain. It’s also a dynamic that “Avengers: Age of Ultron” glossed over with Tony talking down to Ultron, as though he were his child. While that didn’t make Ultron any less villainous, it overlooks one of the most important factors that make Ultron so dangerous.

Ideally, we would want an advanced AI to reflect our best traits. Cynical people might disagree, but we do have plenty of those. Concepts of compassion, empathy, love, hope, and understanding are among our most powerful. Even other AI characters, namely Vision and Jocasta, are capable of utilizing those traits to do immense good.

It also helps he has a kinder face.

With Ultron, his influences are less ideal. It’s not that Hank Pym doesn’t understand those concepts. He just never separated them from his own flaws. His ego and ambition wouldn’t let him. As a result, he created a perfect storm in Ultron, whose mind is patterned after a human’s but whose intelligence and overall capacity are increased by orders of magnitude.

If advanced artificial intelligence is to be humanity’s last invention, then that’s how it’ll start. There have already been instances where AIs have adopted some less-than-ideal traits. Back in 2016, Microsoft had to shut down an AI chatbot after it evolved into an extreme racist troll. That wasn’t even an advanced AI, either. A truly intelligent version could become much worse and not have an off switch.

To some extent, this mirrors what occurred with Ultron in the “Avengers: Age of Ultron” movie. As soon as Ultron goes online, he scans through the vast mountain of data that humanity has compiled. Then, having been programmed by Tony Stark to bring peace, he reaches the extreme conclusion that the only path to peace is the extinction of humanity.

Could the first advanced artificial intelligence we create reach the same conclusion? It’s hard to say, at the moment. The current state of artificial intelligence is limited to specialized tasks, such as winning Jeopardy and playing chess. However, we are inching closer to creating an intelligence that is at or above the level of an ordinary human. At our current pace of development, we could have one as early as 2029.

In some ways, we are in the same situation as Hank Pym when he first created Ultron. We are still developing the specifics of this powerful technology. If we program it with our best traits, it could solve some of the greatest problems we face, as a species, including ones we literally cannot contemplate with our caveman brains. If it inherits our worst traits, like Ultron, then we don’t stand a chance.


Filed under Artificial Intelligence, futurism, human nature, Marvel, psychology, Villains Journey

Cheating, Sex Robots, And Why It’s About To Change


What constitutes cheating in a relationship? It’s a simple question with not-so-simple answers. It often varies from couple to couple. For some, a simple kiss counts as cheating. For others, nothing beyond full sexual intercourse counts as cheating. For a few, even sex doesn’t count, provided there’s no deeper emotions involved.

It’s a difficult issue, but one in which the factors are clear. You have two people in a relationship. Ideally, they want that relationship to work and part of that involves being faithful. Cheating undermines that. Regardless of whether you think humans are inherently monogamous, it’s generally a good idea to understand the merits of fidelity.

Now, take every complication that surrounds cheating and throw sex robots into the mix. It doesn’t just require that we reevaluate what counts as cheating. It changes many of the fundamental factors behind it.

I know that has been a common refrain whenever I’ve mentioned sex robots. Beyond changing how we’ll interact with advanced artificial intelligence, this technology will trigger an unprecedented upheaval in how we look at intimacy. It has already caused controversy in some areas and chances are, it’ll inspire many others.

What inspired this particular issue was a study conducted by the University of Helsinki that evaluated attitudes on cheating, prostitutes, and sex robots. The methodology was fairly straightforward. It surveyed participants’ reactions to a particular scenario involving a 30-year-old individual whose gender varied from that of those responding.

The scenario was that, while on a business trip, the individual visited either a prostitute or a sex robot. Sometimes the individual was married. Sometimes they were single. For the most part, respondents didn’t condemn the person for using either if they were single, though the one visiting the prostitute drew somewhat more scorn than the one using the sex robot.

Relatively speaking.

However, it was when the person was in a committed relationship where the data really diverged. Whether the person visited a sex robot or a prostitute, they were still guilty of cheating in the eyes of the respondents. Those who used the service of the sex robot, though, weren’t as harshly condemned. In other words, using a sex robot isn’t the same as soliciting a prostitute.

This insight is pretty striking, in and of itself. It has major implications for the prostitution industry, which has been subject to some major upheavals in recent years. If this research is any indication, the market is about to get very ripe for sex robots. As prostitution becomes increasingly taboo, sex robots may emerge as a more palatable alternative.

That said, the study did make clear that the participants saw using sex robots as a form of cheating. Even if it wasn’t cheating on the same level, it still constituted cheating to some extent. This is revealing because it suggests respondents ascribe an emotional weight to a sex robot that they don’t ascribe to an ordinary sex toy, even though, unlike a prostitute, it isn’t a living person.

These aren’t vibrators, dildos, or fleshlights. These are robots that look, feel, and act like living people. A woman bringing herself to orgasm with a vibrator or a man doing the same with the aid of internet porn is seen as an individual act. The function is the same, though. These devices are created with the intent of giving the user a sexual experience.

It creates a situation in which this unspoken standard emerges in the context of cheating. People in a relationship may not always like that their significant other gets a sexual release without them, but generally won’t consider them using toys or porn an issue, although that does become an issue for certain people.

In the context of this study, sex robots hint that those in a relationship feel they own their significant other’s sexuality to some extent. If they’re not getting their release from them, then that counts as cheating. It’s easier to overlook with sex toys because they don’t have faces or a measure of intelligence. Once it becomes human-like, though, it becomes an issue.

This aspect of cheating, which hinges on how someone goes about pursuing individual sexual experiences, will require people in relationships to start asking some uncomfortable questions.

Do those in a relationship have a say in how their partner goes about pursuing a sexual release on their own?

Just how much should sex be a factor in determining what constitutes cheating?

What is the fundamental difference between a sex robot and a sex toy?

Does using a sex robot constitute a unique sexual experience beyond cheating?

I don’t claim to know the answer to these questions. I’ve written stories about sex robots, but that doesn’t qualify me to answer them. The rapid development of sex robot technology and the emergence of brothels that utilize sex dolls ensure that this will be an issue, regardless of how we feel about cheating.

There is one component, however, of this study and this issue that I feel is worth highlighting. Unfortunately, it involves gender politics and I know how heated that can get. I have a feeling that just pointing this out is going to trigger all sorts of heated discourse, but it’s still worth noting.

In the same University of Helsinki study, there was one bit of data that’s sure to complicate future discussions of cheating. Regardless of whether prostitutes or sex robots were involved, there was a gender difference in terms of condemnation. Specifically, women gave stronger condemnation to users of both prostitutes and sex robots. This is the exact quote from the study.

“There was a consistent difference in how female subjects showed slightly stronger condemnation than male subjects.”

The keyword in there is “slightly,” but in research, those differences matter. Some of that difference could be cultural. This research was conducted among young college students in Scandinavia. The demographics and sample size are worth taking into account.

That said, such a difference does seem to reflect the extent to which gender matters with respect to cheating. The women in this study had a broader definition of what constituted cheating. To them, a sex robot and a prostitute aren’t that different. Both warrant condemnation. Both invite judgment and scorn.

As sex robot technology becomes more advanced, this difference is likely going to become more apparent. As a result, it’s likely to trigger even more gender-driven conflicts than we have now, which is saying something given the topics that trigger such conflicts. Changing concepts of cheating will likely intensify those conflicts.

Again, and this is a necessary disclaimer for research of any kind, this study is not definitive. It’s still worth studying, though. It provides a telling clue at where the line is right now with respect to cheating. It also hints at just how much it’s going to change in the coming years as sex robots become more advanced.


Filed under futurism, gender issues, Marriage and Relationships, political correctness, prostitution, psychology, romance, Second Sexual Revolution, sex in society, sex robots, sexuality, Sexy Future, technology

Memory Enhancement: The First Killer App For Neuralink?


Every new technological innovation promises to change the world, but few end up delivering. I still remember the overblown hype in the early 2000s when the Segway Personal Transporter was supposed to revolutionize the world. It was such a big deal that “South Park” made an episode about it.

The concept was intriguing, improving mobility for people in a way that was less bulky than a car and less taxing than a bicycle. I think its inventor, Dean Kamen, envisioned a world where the entire urban landscape changed as a result of his invention. I don’t doubt for a second that he believed in that vision.

However, like so many other ambitious inventions, it never came to pass. These days, the only places you’ll see Segways are malls and stadiums. It didn’t revolutionize mobility or transportation. Its use and appeal were just too limited.

Kevin James would argue otherwise.

Compare that to the enormous impact of other inventions like smartphones. From the BlackBerry to the first iPhone, these devices have literally changed the world. How they brought about that change varies, but the key factor that set them apart from the Segway was the idea of a “killer app.”

You could argue that smartphones invented the term, but the idea is much older. A killer app isn’t so much an innovation as it is a use that goes on to be so popular that it further advances the technology behind it. Smartphones had many, from cameras to translation applications. As a result, they’re both a multi-billion dollar industry and an integral part of our lives.

Given the current pace of technological change, it’s only a matter of time before another innovation comes along that has a similar impact. That technology might actually exist now, but lack the killer app that will make it both a valuable market and a major part of our lives. One such technology is brain implants, and it has the potential to be even bigger than smartphones.

I’ve mentioned brain implants before. I’m even guilty of hyping it up a little. I’ve gone so far as to call it the most important technological advance in history, citing companies like Neuralink as the arbiters of this monumental change. Since I’m not a scientist and I’m not Elon Musk, it’s very likely I’m overstating many aspects of this technology.

Hype or no hype, brain implant technology is an emerging field. This isn’t a warp drive. This technology actually exists. Like the old brick-sized cell phones of the 1980s, they’re basically prototypes in need of both refinement and a killer application. The refinement is ongoing, but that one application to really further this technology isn’t as clear.

Now, and I apologize if this sounds like more overdone hype, there may be one use that could prove even more useful than a smartphone. That use is memory enhancement. If you don’t think people are willing to risk putting something in their brains to boost their memory, then you’ve clearly never crammed for a Spanish exam for three hours trying to memorize conjugations.

Think back to any situation where you wish your memory didn’t suck. Even if you’re not in school or college, how often do you forget something that no reasonable person should forget? Let’s face it. Most brains aren’t wired with a photographic memory. It’s not that it isn’t useful. There’s just little survival benefit to having one unless you’re a world class scientist or mathematician.

Since photographic memories are so uncommon, and some doubt they even exist to the extent people believe, a specialized brain implant could change all that. Modern neuroscience has a solid understanding of how memories are formed in the brain. In theory, an implant would just augment or expand those functions.

It’s not even entirely a theory. In early 2018, the New York Times reported that a study utilizing brain implants in human test subjects showed a significant improvement in memory function. It was a simple study, but the effect is real.

In the study, the research team determined the precise patterns for each person’s high-functioning state, when memory storage worked well in the brain, and low-functioning mode, when it did not.

The scientists then asked the patients to memorize lists of words and later, after a distraction, to recall as many as they could.

Each participant carried out a variety of tests repeatedly, recalling different words during each test. Some lists were memorized with the brain stimulation system turned on; others were done with it turned off, for comparison.

On average, people did about 15 percent better when the implant was switched on.

While 15 percent may not sound like much, it’s still important because it proves the concept. Like that first bulky cell phone in the 1980s that could barely make a call out of New York City, it shows that this idea does work and can be done with our current tools. It’s just a matter of refining those tools and improving the process.
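
For anyone curious how a number like that gets calculated, here’s a small Python sketch of the within-subject comparison the study describes. The recall counts below are made up, chosen only so the arithmetic lands near the reported effect size; they are not data from the study.

```python
# Illustrative sketch of a within-subject comparison: the same person memorizes
# word lists with stimulation ON for some trials and OFF for others.
# These recall counts are invented to mirror the ~15% figure, not real data.

recall_on = [14, 15, 13, 13, 14]    # words recalled on trials with stimulation on
recall_off = [11, 13, 12, 11, 13]   # words recalled on trials with stimulation off

mean_on = sum(recall_on) / len(recall_on)
mean_off = sum(recall_off) / len(recall_off)

improvement = (mean_on - mean_off) / mean_off * 100
print(f"Average improvement with stimulation on: {improvement:.0f}%")
```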

Those refinements will find a market that is already ripe with people anxious to improve their memory and overall cognitive function. In recent years, the use and abuse of mind-enhancing drugs like Adderall has been growing. I can personally attest that this happens.

When I was in college, I knew more than a few students who would do double doses before exams. If you think putting something in your brain is dangerous, then take a moment to appreciate the fact that drugs like Adderall are very similar to methamphetamine. One is available by prescription. The other is the basis of a hit TV show about drug dealing.

There is both a demand and a market for enhancing memory. Unfortunately, that market is dominated by supplements that don’t work and study programs run by convicted fraudsters. Unlike these costly and potentially harmful methods, a brain implant could actually work. It could enhance our memories to a point where we could read a dictionary in Swahili and remember every word.

This doesn’t just mean lost car keys are a thing of the past. This means our entire approach to learning, education, and training completely changes. A lot of our modern education system, as well as training for doctors, lawyers, and scientists, relies heavily on memorizing large chunks of information. It takes years of constant and careful study to understand all that information. What happens when that is no longer the case?

Imagine a world where people can learn a new language in the span of a week.

Imagine a world where people can learn complex legal and medical procedures in only months.

Imagine a world where people can learn new software coding in just a few days.

If you’re a sports fan, imagine a world where football players can memorize an entire playbook in just a couple days. What will that do to the NFL Draft?

With a memory enhancing brain implant, it’s not just possible. It’s a potential game-changer. There are so many uses to having a good memory, just as there are so many uses for a smartphone. We had no idea that smartphones would lead to applications like Snapchat or Tinder. I doubt anyone has an idea on the impact that memory-enhancing brain implants will incur.

It won’t happen all at once. It took years for smartphones to become prevalent and, unlike smartphones, this advance involves putting something in your brain. Then again, people are perfectly willing to put dangerous chemicals into their bodies to enhance them, so I don’t think that’s too great a barrier to overcome.

There are, of course, far greater applications for brain implants beyond acing final exams. I’ve mentioned a few of them, but those applications won’t be possible until the technology becomes a thriving market. For an advance like brain implants, it only takes one app to get the engines of innovation going. Memory enhancement may very well be that app.

It’s just a shame it came too late to help me with my Spanish exam.


Filed under Artificial Intelligence, futurism, human nature, psychology, Sexy Future

Prison Or Brain Hacking? A Choice That May Shape Our Future


How does a civilized society deal with its least civilized individuals? This is a question that every society has had to answer, going back to the hunter/gatherer era. We live in an imperfect world full of imperfect individuals. Some are more imperfect than others, so much so that it’s not always possible to reform them into functional members of society.

Most people who commit crimes are not monsters, nor are they sadists who get their joy by torturing the innocent. A vast majority are just people who find themselves in bad situations where they make wrong decisions, exercise poor judgment, or lack impulse control. For these people, fines and brief imprisonment are usually sufficient.

For those who become career criminals, neither respecting the law nor seeking to abide by it, the current system is woefully inadequate. It’s part of the reason why criminal justice reform has become a hot topic. We’re finally learning that throwing people into prisons where they’re dehumanized, degraded, and tortured doesn’t help them become productive members of society. Go figure.

There’s plenty of room for improvement. Some countries have demonstrated that there are more effective, more humane ways to treat criminals. However, even those systems have their limits. As long as human beings remain an imperfect species, we’ll still have to deal with these deviant, violent, and inherently dangerous individuals.

For the moment, our options for dealing with these people are few, primarily incarceration or intense therapy, often coupled with medication. While this can help some, there are severe limitations. Some individuals don’t even want treatment, and even those who get it don’t always respond.

With that in mind, allow me to present a not-quite-hypothetical scenario. What if, instead of prison or therapy, we gave offending criminals an option to undergo an invasive treatment that affects the primary source of their deviant behavior in the brain? Jail is still an option for those who aren’t keen on messing with their brain wiring, but for certain people, an alternative is an alternative.

What I just described is one of those concepts in which the science is there, but the technology and the courts haven’t caught up to it. I know whenever I talk about emerging technology, be it sex robots or artificial wombs, I venture pretty far into speculation territory. Some of these advances rely on science and tools that don’t yet exist. This isn’t one of those cases.

In July 2018, the Journal of Neuroscience published a study revealing that targeted stimulation of the prefrontal cortex reduced aggressive tendencies in test subjects. Before you start getting fever dreams of mad scientists strapping people to gurneys and sticking wires in their ears, you can rest easy. This isn’t the kind of electroshock treatment that finds its way into one too many horror movies.

These treatments have ground-breaking implications. They prove that it’s possible to temper or mitigate certain behaviors in people. The study doesn’t specify the limits of the effects or if it can be applied to something other than aggressive behaviors. It’s still a proof of concept and one that could compound the impact of other emerging technologies.

We already have tools like CRISPR that allow us to tweak our genes. We also have companies like Neuralink that are actively working on implants that could fix, augment, or expand our brain capacity. While men like Elon Musk and Ray Kurzweil often discuss these advances within the context of keeping humanity on pace with artificial intelligence, there will likely be some interim uses for these technologies.

Tempering violent behavior in people with significant cognitive impairments is just one possible use, but one that has the potential to change how we think about crime and punishment. Think back to those people I mentioned earlier who are just inherently violent. They can’t manage their emotions or control their anger. They don’t think before they act and some don’t even feel guilty about what they do.

Like it or not, these people exist. I’ve known people in my life who have terrible impulse control and fly into a rage over the smallest things. Some of those people have had issues with the law and I often see in them a sense of never-ending frustration. Many don’t like that they have these issues. A few have tried to get help, but it doesn’t always work.

I suspect that if some of those people were given a chance to treat their tendencies with targeted shock therapy or a brain implant, they would jump at the chance. Deviant tendencies aside, they seek some level of function in their lives. If tweaking their brain is the difference between prison and freedom, then they’ll take that risk.

Turning people who might have been unrepentant psychopaths into productive, non-violent members of society is an objective good. The technology to do just that is not that far off and more study could help us refine the process, so much so that prison might be less necessary in certain cases. Given how expensive it is to imprison people, it’s an alternative worth pursuing.

Along with that undeniable good, however, there are plenty of potential dangers. Anyone who has ever seen one too many psychological thrillers, or just read “One Flew Over The Cuckoo’s Nest,” can easily imagine how this kind of technology could be abused.

Tempering someone’s violent behaviors is all well and good, but why would it stop there? The brain is capable of all sorts of behaviors, deviant and otherwise. Say a society determines that other non-violent behaviors, be it sexual promiscuity or binge-watching Netflix for too many hours, are not socially desirable. What’s to stop them from imposing this on their citizens?

Some countries probably already fantasize about technologies that enable them to directly pacify their citizens, rendering them weak, passive, and easily manipulated. In his famous novel, “1984,” George Orwell called these people proles. However, in the book, the deviants had to be tortured and re-educated. If Big Brother had access to this technology, it would be a simple medical procedure.

That has plenty of terrifying possibilities for abuse. What if someone uses brain stimulation to prevent people from having homosexual urges? What if someone uses it to treat those who identify as transgender? There’s no evidence that the techniques in the study would work on that, but there’s no evidence to say it’s impossible.

Its use will definitely be controversial. That much, I’m certain of. It’s not advanced enough to become a legitimate treatment for anything. At the moment, direct brain stimulation is utilized for a specified set of conditions and it’s often a last resort. Using it on healthy people who just want to cull their violent urges is uncharted territory.

Whether it enters the picture for criminal justice reform is anyone’s guess, but if the process works, someone who has dealt with one too many repeat offenders will try to use it. From there, the precedent will be set. It’s hard to say what form it’ll take, but it’ll take society into uncharted territory with respect to controlling our minds.

Perhaps, at first, the process would be voluntary and only be presented in conjunction with jail or some other treatment. It’s also possible that the courts will determine a strict set of criteria for when the state could force this treatment onto someone. There are probably a few repressive governments who would try to use this on an industrial scale. I won’t say their names, but most people know who they are.

Like any emerging technology, there are risks and rewards worth considering. We stand to benefit greatly by having a society with as few violent individuals as possible. We also stand to lose a great deal if we allow misguided authority figures to determine how we use this technology.

I’m not qualified to determine whether or not someone should have their brain hacked. I don’t know that anyone is. However, I also don’t deny that the human brain, as magnificent as it is, has plenty of flaws. We should go about fixing those flaws, especially in people who are disproportionately impacted by them. We just have to be very careful about how we manage it.


Filed under futurism, human nature, psychology, sex in society, Sexy Future, Thought Experiment

How Advanced AI Will Create Figurative (And Literal) Magic


If you went back 50 years and showed someone your smartphone, chances are they would be amazed. To them, such technology would seem downright alien. However, they probably wouldn’t think it was magic. Go back 500 years, though, and chances are they will think a smartphone is magic, miraculous, or a tool of the devil.

Just look at what a smartphone does and compare it to the magic of old. You can ask it a question and, depending on how well-worded it is, it’ll give you an answer. If you ask it to make food, clothes, or tools appear, it’ll make that happen too. Thanks to services like Amazon and Grubhub, this isn’t magic to most people. In fact, it’s downright mundane.

Granted, these things won’t appear instantly out of thin air, but depending on your willingness to pay for quicker shipping, it will get there. By medieval standards, that’s basically sorcery.

You don’t have to go too far back in time to appreciate the magic of modern technology. Most of us don’t understand how it works. We don’t know what makes the screens on our phones light up when we push a button or how our car moves when we press the accelerator. We understand that there’s science behind it and it’s not magic. It just feels like it from a certain perspective.

Famed science fiction author Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. It was one of the three laws he used in contemplating the future. Time and a host of amazing advances have proven the validity of this sentiment. We’ve created materials once thought to be impossible. We’ve uncovered phenomena that seem to undermine our understanding of physics.

This is to be expected because our understanding of the universe is incomplete. We have some pretty solid theories so far, but there’s still a lot we don’t understand. As we learn more, some of the things we discover may seem magical. Even in a world that is more educated than it has been at any point in human history, there may still be forces that our primate brains just can’t make sense of.

To some extent, it helps that humanity is making these discoveries through their collective effort. It helps us accept a seemingly-impossible idea if it comes from a member of the same species. What happens, though, when we gain knowledge from something that is both not human and many times smarter than the entire human race? Will it seem like magic to us?

I argue that it would. I would also argue that we’ll be seeing this kind of magic sooner than you think. It won’t come from some enigmatic sorcerer with a thick beard, a white robe, and an uncanny resemblance to Ian McKellen. It’ll likely come from the world of advanced artificial intelligence.

In the past, whenever I’ve talked about advanced artificial intelligence, I’ve focused on its potential to fundamentally change our civilization and what it means to be human. I haven’t touched on how it might work, mostly because I’m not nearly smart enough to make sense of it. However, that gives me more in common with the experts than you think.

In the emerging but rapidly growing field of artificial intelligence, there’s a strange phenomenon known as black box AI. Simply put, this is when we understand the data that goes into an AI system and the data that comes out. We just don’t know how it went about processing that data. It’s like putting a slab of meat in an oven, pressing a button, and getting a Big Mac without knowing how it was made.
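
If you want a hands-on feel for the black box problem, here’s a minimal sketch, assuming the scikit-learn library is installed. The inputs and the outputs are perfectly legible, but the thousands of learned weights sitting between them don’t amount to an explanation any human can read.

```python
# Minimal sketch of the "black box" idea using scikit-learn (an assumed
# dependency, not a requirement of the argument). We can inspect what goes in
# and what comes out, but the learned parameters in between aren't a
# human-readable explanation.

from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)  # inputs and labels we understand
model = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=1000, random_state=0)
model.fit(X, y)

print("Prediction for the first patient:", model.predict(X[:1]))  # output we understand
print("Number of learned weights:", sum(w.size for w in model.coefs_))  # the opaque middle
```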

It’s not quite magic, but it’s a manifestation of Arthur C. Clarke’s ideas on science and magic. AI systems today are advancing at a pace that we can’t hope to keep up with. We already have systems that can surpass any human at Jeopardy, chess, and Go. We don’t yet have a system with the same intellectual capacity as an adult human, but most experts believe we’re well on our way to achieving that.

When that day comes, we may very well have an AI that does more than just process data in ways we don’t understand. Once an AI is capable of matching or exceeding the intellectual capacity of an average human, then it’s likely the black box phenomenon will become more pronounced.

Imagine, for a moment, we had an AI that was smarter than even the smartest human beings on the planet. We go to that AI, we feed it every gigabyte of data we have on human biology, and we ask it to devise a cure for cancer. It takes only a few seconds to process all that data. Then, with ease, it spits out a formula for something that has eluded generations of doctors.

We don’t know what form it may take. We may not even fully understand its components. However, it still works. From our perspective, it’s akin to a magical healing elixir straight from the world of Tolkien. We assume there’s some sort of science behind it, but we’re utterly baffled by the specifics. We just know it works.

It goes beyond medicine, as well. With an even more advanced AI, we could feed it every one of our most advanced theories about physics, biology, chemistry, and cosmology. We could then ask it to fill in all the gaps. Again, it gives us an answer and suddenly, we have a Theory of Everything.

We probably won’t understand the details. We may find out that we were dead wrong about particle physics, cosmology, or why our cell phone can’t hold a decent charge anymore. The knowledge such a system gives us could end up being so advanced that we literally do not have the intellectual capacity to understand it. It would be like an ant trying to do calculus.

In the same way a magnifying glass must seem like magic to an ant, the knowledge an advanced AI gives us may seem just as extraordinary. That’s especially true if we give that AI access to a 3D printer, a molecular assembler, or anything it could use to actually craft something.

That could be especially dangerous. For all we know, a sufficiently advanced AI could take a stack of dirty dishes and turn it into a nuclear bomb. We would have no idea how it would work. It would, for all intents and purposes, seem like magic to us. This thing would be doing something that our brains and senses tell us is impossible.

As the AI gets more advanced, its abilities and feats become more magical. At that point, it’ll be harder to accept that what it does counts as science. These advances are no longer coming from the efforts of people. They’re coming from a machine that’s millions, if not billions, of times smarter than any ordinary human could ever hope to be. How could it not seem magical from that perspective?

Throughout human history, sane and competent people have believed in magical things. Not long ago, people believed they could talk to the dead. Sir Isaac Newton believed in alchemy. Alfred Russel Wallace believed in literal spirits. Despite these beliefs, there was an important context to all these perspectives.

They emerged out of our collective ignorance of the world around us. We had nothing but our brains and our senses to make sense of it all. Since both can be easily fooled, as any competent street magician will attest, it doesn’t take much to get people to assume magic. An artificial intelligence would circumvent that context because it has something better than magic.

An advanced AI is not bound by the same senses that constantly fool ordinary humans. It doesn’t even have to misdirect or trick us. It only has to show us ideas and concepts that are completely real, but totally incomprehensible. The entire human race could spend a million years trying to understand it and it still wouldn’t be enough. It would still seem like magic to us.

That notion seems scary on paper and more than a few people have voiced concerns about this. However, all that magical thinking will only occur if our human brains remain unchanged and unenhanced. That’s not likely to be the case. Between the emergence of neural implants and the ongoing development of brain/machine interfaces, we’ll find a way to keep up with AI. If we want to survive as a species, we’ll have to.

Even if we do somehow keep up, there may still be aspects of advanced AI that seem like magic to us. That may always be the case, so long as we retain part of our caveman brains. Personally, I don’t think that’s a bad thing. No matter how smart or advanced we get, it helps to see a little magic in the world. With advanced AI, though, the rules for magic are bound to change, among many other things.


Filed under Artificial Intelligence, futurism