Tag Archives: machine learning

Putting The Hype Behind ChatGPT Into Perspective

I’ve been meaning to touch on this topic for a while now. For someone who writes a lot about and makes multiple videos on the subject of artificial intelligence, it might be somewhat surprising that I haven’t talked much about ChatGPT. I promise there’s a reason for that. I don’t claim it’s a good reason, but I think it’s relevant because it has to do with perspective.

Now, I’ve been following the sudden surge in interest surrounding ChatGPT since it started making headlines. I actually became aware of it when I saw this video on YouTube from a channel called Cold Fusion. For reference, here’s the video.

From here, I started following numerous newsfeeds about ChatGPT, how it’s being used, and how people are coming to perceive it. It has been amazing to watch. I honestly can’t remember the last time a piece of software got this much hype. And the incredible pace of user growth it’s had in the past few months is nothing short of remarkable.

People have been talking about the potential for artificial intelligence for years, myself included. But we’ve never seen that potential manifest beyond a certain point. ChatGPT has changed that because it’s a real, tangible product that ordinary people can use. For an entire generation, it’s likely to be their first interaction with an artificial intelligence that can do more than your typical virtual assistant.

I don’t think it’s hyperbole to say that this technology could change the world in profound ways. It has the potential to radically alter how we work, learn, create, and do business with one another.

At the same time, it has raised a lot of concerns and not just with respect to how it might displace large segments of the job market. There’s genuine concern it’ll facilitate cheating, help scammers, or add to ongoing issues surrounding misinformation. I think those concerns are plenty warranted.

There’s already some major concern that ChatGPT is somehow the precursor to Skynet and we’re actively creating something that will eventually turn against us. Those concerns aren’t quite as warranted.

Let’s get one thing clear. ChatGPT is not an artificial intelligence on par with Skynet or any other fictional AI we’re familiar with. That’s not how it’s programmed. It can’t become Skynet any more than your cell phone can become a dishwasher. The hardware and software just aren’t there yet.

That being said, ChatGPT is a manifestation of how far artificial intelligence has come. This isn’t something that just uses algorithms to link us to new web pages. This is a system that can actually interact with people in a way that feels familiar. Talking to ChatGPT is less like doing a web search and more like talking to another person. That person just happens to be capable of culling through massive amounts of data and presenting it in a useful, legible form.

I admit that’s not a trivial difference. I also don’t doubt that entire industries and tech companies are rightly concerned about what ChatGPT could lead to, with respect to the future of the internet. But that’s where having a balanced perspective really matters.

For me, personally, I look at ChatGPT the same way I look at the first iteration of the iPhone. If you get a chance, just go back and look at old videos and news stories about the first iPhone. That too was touted as something revolutionary and world-changing. And in the grand scheme of things, it was. But looking at the specs of that first model today, it’s easy to forget how long it took for that impact to take hold.

Granted, that first iPhone was a bit overhyped, and most did not see the potential of the device at first. However, that potential was realized more and more over time as people began refining how they used it. Then, as later models came out that improved on what the first one did, it really began to have an impact.

I wouldn’t expect ChatGPT to follow the exact same path. For one, this program was developed by a non-profit research laboratory and not some multi-billion dollar tech company. The purpose, intentions, and incentives are all very different with this technology compared to that of the iPhone.

But, like all emerging technology, there will be updates and refinements. Another version of ChatGPT is already being teased. Like the second iPhone, it promises to improve and expand on the function of the first. In time, another version will come out and another one after that. Each time, the use and utility will grow. It won’t happen all at once. It might not even be noticeable at the time. But the impact will be felt in the long run.

That’s probably the most balanced perspective I can offer for ChatGPT at the moment. I don’t doubt for a second that this perspective will change with future updates and capabilities. There’s a chance ChatGPT ends up being a popular fad that simply falls out of favor because nobody can figure out how to utilize it beyond a certain point. It could become the AI equivalent of Windows Vista.

But there’s also a chance that ChatGPT could lead to some truly unprecedented growth and change in the world of artificial intelligence. It could completely upend how we interact with technology. And ultimately, it could lead to the development of a functioning artificial general intelligence capable of matching and exceeding an average human. If that does happen and ChatGPT was the catalyst for it, then it might go down as one of humanity’s most important technological developments.

At this point, I honestly don’t know how it’ll play out. And I question anyone who claims to know. Nobody truly knew how the iPhone would change the world until that change became apparent. We probably won’t know the true extent of ChatGPT’s impact until a similar change takes hold.

Who knows what the world will be like when that time comes?

While a part of me is nervous about it, I’m also genuinely curious to see where ChatGPT will lead us.

3 Comments

Filed under Artificial Intelligence, ChatGPT, technology

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still narrow. It’s not dumb, but it’s very limited. It can do certain tasks very well, like playing chess or winning Jeopardy. It can’t apply that intelligence at a broader, macro level the way a human can.

That still makes these systems useful, and they’re still advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comment sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes the AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore did an article on it, and while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radic’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.
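To make that failure mode a little more concrete, here’s a minimal, purely hypothetical sketch of a context-blind, keyword-based flagger. It is not YouTube’s actual system, which is far more sophisticated, but it shows how a model that only checks word combinations, with no sense of context, could trip over ordinary chess commentary.

```python
# Purely illustrative: a naive flagger that only looks at word combinations,
# with no sense of context. It does not reflect YouTube's real moderation system.

FLAGGED_PAIRS = {("black", "white"), ("attack", "threat")}  # made-up rule list

def naive_flag(comment: str) -> bool:
    """Flag a comment if it contains any 'suspicious' word combination."""
    words = set(comment.lower().replace(",", " ").replace(".", " ").split())
    return any(a in words and b in words for a, b in FLAGGED_PAIRS)

chess_comment = "Black attacks the white king, and white has no real defense."
print(naive_flag(chess_comment))  # True -- flagged, even though it's about chess
```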

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.

1 Comment

Filed under Artificial Intelligence, technology, YouTube

How Much Of What We Know Will Be Wrong Years From Now?

Take a moment to consider all the things you think are right, true, and valid. Please note, I’m not referring to opinions. I’m talking about things that are, in your mind, unassailable fact. These are things like certain laws of physics, certain assumptions of politics, and a general understanding of how the world works. To us, they’re both common knowledge and common sense.

Historically speaking, it’s a guarantee that at least some of what you believe to be completely true will one day be proven completely wrong or at least only partially true. It won’t happen to everything you think you know. You may not even live to see it. However, that day will come and you’ll have to consider the painful possibility that you were wrong about something.

I pose this little thought experiment as a means of refining perspective. We like to believe that we live in a time when the great mysteries of the universe are either known, unknowable, or within our grasp in our lifetime. Every generation likes to believe they have a firm grasp of everything they need to know, more so than any generation before them. The idea that a later generation might know better is untenable.

Again, history says we’re destined to look foolish to the vast majority of people 100 years from now. It’s not just a matter of changing social attitudes or workplace norms, either. Rest assured, there are things you accept today that will be wrong, rejected, or scorned in the future.

It’s hard to know what those things are. From a societal standpoint, our current attitudes regarding wealth disparity, the treatment of animals, and how we care for the elderly could be subject to categorical scorn. In some cases, it might just be a product of circumstances, but that wouldn’t make it any less wrong.

In terms of science, it gets even trickier. Over the centuries, there have been a multitude of well-accepted theories that were subsequently proven wrong. If you’re a creationist, don’t get too excited. Those theories were wrong because we uncovered new information that helped us craft better theories that nobody even thought of. It’s how we got things like germ theory, the big bang theory, and quantum theory.

Many of these revelations began with us looking for evidence that we were right. Even though confirmation bias is a powerful force, it can only do so much against an unforgiving reality. Even the likes of Albert Einstein got a number of key issues wrong when seeking to understand the universe.

Years from now, our smartest scientists will seem like mediocre college students. It’s just a matter of time, effort, and discovery. Every time we think we understand something completely, we uncover information that reminds us just how little we know in the grand scheme of things. It can be frustrating, but it’s also what helps us progress as a species.

That doesn’t even begin to factor in the impact of tools like advanced artificial intelligence. Everything humanity knows is limited by how much humanity can collectively understand. Our primate brains are driven by primate instincts. That limits our ability to understand things beyond a certain point. In theory, an advanced artificial intelligence could understand things in ways our brains literally cannot process.

That’s why it’s such an important perspective to maintain. You are going to be wrong about something at some point in your life. Years after you’ve passed away, your children and grandchildren will find out that you were wrong about much more than you thought. It’s inevitable. It’s also humbling and worth embracing.

We’ll never know everything about everything, but knowing more than we used to is always valuable. Ignorance may be bliss, but it’s also pretty useless in the grand scheme of things.

Leave a comment

Filed under human nature, philosophy, psychology, technology, Thought Experiment

Guest Post: 5 Highly Recommended Books on Machine Learning (ML) and Artificial Intelligence (AI)


The following is an article submitted by Harsh Arora, a freelance blogger and writer who shares a similar interest in artificial intelligence, futurism, and the future of the human race. To Mr. Arora, I sincerely thank you for this submission and your continued support.


We would first of all like to welcome all types of readers to our blog – newbies who are just curious about these buzzwords, as well as experts who would like to extend their existing knowledge of the subject. Having established that, it is imperative not only to define these two concepts (AI and ML) but also to differentiate between them. Although these terms are often used interchangeably, they are in fact different from one another. AI is the broader concept, where we feed the machine with data and then expect it to make decisions based on that data. ML, on the other hand, is a subset and application of AI where we feed machines with data and allow them to learn from it on their own.
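To make that distinction a bit more tangible, here is a minimal, purely illustrative sketch of the ML side of it: we hand a machine a few example data points and let it work out the underlying rule on its own, rather than programming the rule in directly. This is a toy sketch for intuition, not a production ML pipeline.

```python
# Toy example: the machine "learns" the rule behind the data on its own.
data = [(1, 3), (2, 5), (3, 7), (4, 9)]  # examples generated by y = 2x + 1

w, b = 0.0, 0.0        # the rule the machine will learn: y ~ w*x + b
learning_rate = 0.01

for _ in range(5000):  # repeatedly nudge w and b to shrink the prediction error
    for x, y in data:
        error = (w * x + b) - y
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned rule: y = {w:.2f}x + {b:.2f}")  # ends up close to y = 2x + 1
```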

Following are the books we recommend for you to learn more about them:

Machine Learning for Absolute Beginners: A Plain English Introduction – Oliver Theobald

It’s easy to see which part of our reader base this particular book is targeted towards. You may be a complete outsider to the world of ML and still be able to understand its granular technical aspects through this book. Oliver Theobald assumes no programming or coding background on the part of the reader and allows you to learn right from scratch. It is the perfect book not only for newbies but also for experts in the field because it explains the basic concepts in a very coherent and distinct manner. This book not only helps you learn the concepts of ML but also allows you to unlearn and then relearn them, something that is really important for such a subject.

The Hundred-Page Machine Learning Book – Andrew Burkov

This is once again a book that will interest not only beginners but also experts in the field. Andrew has really been able to simplify the concepts of ML into a basic and easily comprehensible set of notes. With just 100 pages at his disposal, he has captured the overarching essence of ML. Of course, it is not a deep dive into the subject matter like some of our other recommendations, but it is a wonderful summary of it. It is perfect for people who want to understand this technology and its implementations and implications in the real world.

Artificial Intelligence: A Modern Approach (3rd Edition) – Stuart Russell & Peter Norvig

Stuart Russell is one of the sharpest minds working in the field of AI and is on the faculty at the University of California, Berkeley. Additionally, he is an Oxford graduate and also holds a PhD from Stanford. For the third edition of the book, Stuart collaborated with Peter Norvig, Director of Research at Google. Together, they have created a well-researched and well-defined approach to understanding modern AI. This book is perfect for students in undergraduate or graduate-level courses, or even for laymen with a basic understanding of the fundamentals of AI. This long-anticipated edition of its best-selling predecessors offers the most up-to-date and comprehensive understanding of the theory and practice of artificial intelligence.

Machine Learning – Tom M. Mitchell

This is a classic book in which the author covers techniques and concepts from numerous fields and unifies them to provide an in-depth view of ML. Some of the subjects covered include reinforcement learning, inductive logic programming, and genetic algorithms. Tom has tried to simplify these complicated concepts through a clear and explanatory style of writing. At the same time, he has used tools such as case studies and relevant examples to provide a comprehensive overview. Lastly, he assumes no prior knowledge of these complex ideas on the part of the reader.

Superintelligence – Nick Bostrom

If you are familiar with the work of Mr. Nick Bostrom, you know you are in for a treat with this book. He takes a different approach, explaining not only artificial intelligence itself but also the effects it could have on our existence. Nick believes that self-aware machines are potentially a bigger threat to humanity than climate change. He has authored over 200 publications, and his writing forces you to take him seriously in this seemingly sci-fi piece of literature. He helps us understand how the most intelligent form of life, currently humans, has governed the fate of existence since the dawn of our species. However, with a species (of sorts) that has the potential to be smarter than us, what chance is there that it won’t dominate us?

Artificial Intelligence for Humans (Fundamental Algorithms: 1) – Jeff Heaton

If you are planning to build a career in artificial intelligence, this should be your starting point, and you should read it from cover to cover. Jeff Heaton covers several topics in depth, such as clustering, distance metrics, dimensionality, linear regression, error calculation, and hill climbing. The book takes you through the actual mathematical calculations, which you can compute yourself and whose real-world applications you can see. However, to build a career in this industry, you must understand not only the basic principles of AI but also those of algebra and computer programming. This book will build on those concepts through various computer languages such as C, Java, C#, R, and Python.
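Since the book walks you through calculations you can work out yourself, here is a hedged, minimal sketch of one of the topics it names, hill climbing. This is my own toy example rather than code from the book: start with a guess, try small random nudges, and keep any nudge that scores better.

```python
import random

def score(x: float) -> float:
    """Objective to maximize; its peak is at x = 3."""
    return -(x - 3.0) ** 2

x = 0.0                                        # initial guess
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)  # try a small random step
    if score(candidate) > score(x):            # keep it only if it improves
        x = candidate

print(f"best x found: {x:.3f}")  # should land near 3.0
```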

These books are some of the best on the market and will be perfect for people at all knowledge levels of AI and ML. Given that Industry 4.0 is upon us and almost all technology is slowly being integrated with it, we suggest learning more about it. However, it is completely up to you to form an opinion about whether or not this technology will be harmful to humans in the long run. Additionally, we also suggest you read up on a few other technologies that are prevalent in this era, such as IoT, blockchain, and cloud computing.

About me: Harsh Arora is a proud father of four rescued dogs and a leopard gecko. Besides being a full-time dog father, he is a freelance content writer/blogger and a massage expert who is skilled in using the best massage gun.

2 Comments

Filed under Artificial Intelligence, biotechnology, futurism, technology

Killer Robots, Drone Warfare, And How Artificial Intelligence Might Impact Both


On November 5, 2001, the history of warfare changed forever. On that date, an unmanned Predator drone armed with Hellfire missiles killed Mohammed Atef, a known al-Qaida military chief and the son-in-law of Osama bin Laden. From a purely strategic standpoint, this was significant in that it proved the utility of a new kind of weapon system. In terms of the bigger picture, it marked the start of a new kind of warfare.

If the whole of human history has taught us anything, it’s that the course of that history changes when societies find new and devastating ways to wage war. In ancient times, to wage war, you needed to invest time and resources to train skilled warriors. That limited the scope and scale of war, although some did make the most of it.

Then, firearms came along and suddenly, you didn’t need a special warrior class. You just needed to give someone a gun, teach them how to use it, and organize them so that they could shoot in a unit. That raised both the killing power and the devastating scale of war. The rise of aircraft and bombers only compounded that.

In the 20th century, warfare became so advanced and so destructive that the large-scale wars of the past just aren’t feasible anymore. With the advent of nuclear weapons, the potential dangers of such a war are so great that no spoils are worth it anymore. In the past, I’ve even noted that the devastating power of nuclear weapons has had a positive impact on the world, albeit for distressing reasons.

Now, drone warfare has added a new complication. Today, drone strikes are such a common tactic that they barely make the news. The only time they are noteworthy is when one of those strikes incurs heavy civilian casualties. The tactic has also sparked serious legal questions when the targets of these strikes are American citizens. While these events are both tragic and distressing, there’s no going back.

Like gunpowder before it, the genie is out of the bottle. Warfare has evolved and will never be the same. If anything, the rise of combat drones will only accelerate the pace of change with respect to warfare. Like any weapon before it, some of that change will be negative, as civilian casualties often prove. However, there are also potential benefits that could change more than just warfare.

Those benefits aren’t limited to keeping soldiers out of combat zones. From a cost standpoint, drones are significantly cheaper. A single manned F-22 Raptor costs approximately $150 million while a single combat drone costs about $16 million. That makes drones roughly nine times cheaper, and you don’t need to be a combat ace to fly one.

However, those are just logistical benefits. It’s the potential that drones have in conjunction with advanced artificial intelligence that could make them every bit as influential as nuclear weapons. Make no mistake. There’s plenty of danger in that potential. There always is with advanced AI. I’ve even talked about some of those risks. Anyone who has seen a single “Terminator” movie understands those risks.

When it comes to warfare, though, risk tolerance tends to be more complicated than anything you see in the movies. The risks of AI and combat drones have already sparked concerns about killer robots in the military. As real as those risks are, there’s another side to that coin that rarely gets discussed.

Think back to any story involving a drone strike that killed civilians. There are plenty of incidents to reference. Those drones didn’t act on orders from Skynet. They were ordered by human military personnel, attempting to make tactical decisions based on whatever intelligence they had available at the time. The drones may have done the killing, but a human being gave the order.

To the credit of these highly trained men and women in the military, they’re still flawed humans at the end of the day. No matter how ethically they conduct themselves, their ability to assess, process, and judge a situation is limited. When those judgments have lives on the line, both the stakes and the burdens are immense.

Once more advanced artificial intelligence enters the picture, the dynamics of drone warfare change considerably. This isn’t pure speculation. The United States Military has gone on record saying they’re looking for ways to integrate advanced AI into combat drones. While they stopped short of confirming they’re working on their own version of Skynet, the effort to merge AI and combat drones is underway.

In an overly-simplistic way, they basically confirmed they’re working on killer robots. They may not look like the Terminator or Ultron, but their function is similar. They’re programmed with a task and that task may or may not involve killing an enemy combatant. At some point, a combat drone is going to kill another human being purely based on AI.

That assumes it hasn’t already happened. It’s no secret that the United States Military maintains shadowy weapons programs that are often decades ahead of their time. Even if it hasn’t happened yet, it’s only a matter of time. Once an autonomous drone kills another human being, we’ll have officially entered another new era of warfare.

In this era, there are no human pilots directing combat drones from afar. There’s no human being pulling the trigger whenever a drone launches its lethal payload into a combat situation. The drones act on their own accord. They assess all the intel they have on hand, process it at speeds far beyond that of any human, and render decisions in an instant.

It sounds scary, and it certainly is. Plenty of popular media, as well as respected public figures, paint a terrifying picture of killer robots killing without remorse or concern. However, those worst-case scenarios overlook both the strategic and practical aspects of this technology.

In theory, a combat drone with sufficiently advanced artificial intelligence will be more effective than any human pilot could ever be in a military aircraft. It could fly better, carrying out maneuvers that would strain or outright kill even the most durable pilots. It could react better under stressful circumstances. It could even render better judgments that save more lives.

Imagine, for a moment, a combat drone with systems and abilities so refined that no human pilot or officer could hope to match it. This drone could fly into a war zone, analyze a situation, zero in on a target, and attack with such precision that there’s little to no collateral damage.

If it wants to take out a single person, it could simply fire a high-powered laser that hits them right in the brain stem.

If it wants to take out someone hiding in a bunker, it could utilize a smart bullet or a rail gun that penetrates every level of shielding and impacts only a limited area.

If it wants to take out something bigger, it could coordinate with other drones to hit it with traditional missiles in such a way that the target has no hope of defending itself.

Granted, drones this advanced probably won’t be available on the outset. Every bit of new technology goes through a learning curve. Just look at the first firearms and combat planes for proof of that. It takes time, refinement, and incentive to make a weapons system work. Even before it’s perfected, it’ll still have an impact.

At the moment, the incentives are definitely there. Today, the general public has a very low tolerance for casualties on both sides of a conflict. The total casualties of the second Iraq War currently sit at 4,809 coalition forces and 150,000 Iraqis. While that’s only a fraction of the casualties suffered in the Vietnam War, most people still deem those losses unacceptable.

It’s no longer feasible, strategically or ethically, to just blow up an enemy and lay waste to the land around them. Neither politics nor logistics will allow it. In an era where terrorism and renegade militias pose the greatest threat, intelligence and precision matter. Human brains and muscle just won’t cut it in that environment. Combat drones, if properly refined, can do the job.

Please note that’s a big and critical if. Like nuclear weapons, this is a technology that nobody in any country can afford to misuse. In the event that a combat drone AI develops into something akin to Skynet or Ultron, the amount of death and destruction it could bring is incalculable. These systems are already designed to kill. Advanced AI will just make them better at killing than any human will ever be.

It’s a worst-case scenario, but one we’ve managed to avoid with nuclear weapons. With advanced combat drones, the benefits might go even further than preventing large-scale wars on the level of World War II. In a world where advanced combat drones keep terrorists and militias from ever becoming too big a threat, the potential benefits could be unprecedented.

Human beings have been waging bloody, brutal wars for their entire history. Nuclear weapons may have made the cost of large wars too high, but combat drones powered by AI may finally make such wars obsolete.

7 Comments

Filed under Artificial Intelligence, Current Events, futurism, technology

Memory Enhancement: The First Killer App For Neuralink?


Every new technological innovation promises to change the world, but few end up delivering. I still remember the overblown hype in the early 2000s when the Segway Personal Transporter was supposed to revolutionize the world. It was such a big deal that “South Park” made an episode about it.

The concept was intriguing, improving mobility for people in a way that was less bulky than a car and less taxing than a bicycle. I think its inventor, Dean Kamen, envisioned a world where the entire urban landscape changed as a result of his invention. I don’t doubt for a second that he believed in that vision.

However, like so many other ambitious inventions, it never came to pass. These days, the only places you’ll see Segways are malls and stadiums. It didn’t revolutionize mobility or transportation. Its use and appeal were just too limited.

Kevin James would argue otherwise.

Compare that to the enormous impact of other inventions like smartphones. From the BlackBerry to the first iPhone, these devices have literally changed the world. How they brought about that change varies, but the key factor that set them apart from the Segway was the idea of a “killer app.”

You could argue that smartphones invented the term, but the idea is much older. A killer app isn’t so much an innovation as it is a use that goes on to be so popular that it further advances the technology behind it. Smartphones had many, from cameras to translation applications. As a result, they’re both a multi-billion dollar industry and an integral part of our lives.

Given the current pace of technological change, it’s only a matter of time before another innovation comes along that has a similar impact. That technology might actually exist now, but lack the killer app that will make it both a valuable market and a major part of our lives. One such technology is brain implants, and it has the potential to be even bigger than smartphones.

I’ve mentioned brain implants before. I’m even guilty of hyping it up a little. I’ve gone so far as to call it the most important technological advance in history, citing companies like Neuralink as the arbiters of this monumental change. Since I’m not a scientist and I’m not Elon Musk, it’s very likely I’m overstating many aspects of this technology.

Hype or no hype, brain implant technology is an emerging field. This isn’t a warp drive. This technology actually exists. Like the old brick-sized cell phones of the 1980s, they’re basically prototypes in need of both refinement and a killer application. The refinement is ongoing, but that one application to really further this technology isn’t as clear.

Now, and I apologize if this sounds like more overdone hype, there may be one use that could prove even more useful than a smartphone. That use is memory enhancement. If you don’t think people are willing to risk putting something in their brains to boost their memory, then you’ve clearly never crammed for a Spanish exam for three hours trying to memorize conjugations.

Think back to any situation where you wish your memory didn’t suck. Even if you’re not in school or college, how often do you forget something that no reasonable person should forget? Let’s face it. Most brains aren’t wired with a photographic memory. It’s not that one wouldn’t be useful. There’s just little survival benefit to having one unless you’re a world-class scientist or mathematician.

Since photographic memories are so uncommon, and some doubt they even exist to the extent people believe, a specialized brain implant could change all that. Modern neuroscience has a solid understanding of how memories are formed in the brain. In theory, an implant would just augment or expand those functions.

It’s not even entirely a theory. In early 2018, the New York Times reported that a study utilizing brain implants in human test subjects showed a significant improvement in memory function. It was a simple study, but the effect is real.

In the study, the research team determined the precise patterns for each person’s high-functioning state, when memory storage worked well in the brain, and low-functioning mode, when it did not.

The scientists then asked the patients to memorize lists of words and later, after a distraction, to recall as many as they could.

Each participant carried out a variety of tests repeatedly, recalling different words during each test. Some lists were memorized with the brain stimulation system turned on; others were done with it turned off, for comparison.

On average, people did about 15 percent better when the implant was switched on.

While 15 percent may not sound like much, it’s still important because it proves the concept. Like that first bulky cell phone in the 1980s that could barely make a call out of New York City, it shows that this idea does work and can be done with our current tools. It’s just a matter of refining those tools and improving the process.

Those refinements will find a market that is already ripe with people anxious to improve their memory and overall cognitive function. In recent years, the use and abuse of mind-enhancing drugs like Adderall have been growing. I can personally attest that this happens.

When I was in college, I knew more than a few students who would do double doses before exams. If you think putting something in your brain is dangerous, then take a moment to appreciate the fact that drugs like Adderall are very similar to methamphetamine. One is available by prescription. The other is the basis of a hit TV show about drug dealing.

There is both a demand and a market for enhancing memory. Unfortunately, that market is dominated by supplements that don’t work and study programs run by convicted fraudsters. Unlike these costly and potentially harmful methods, a brain implant could actually work. It could enhance our memories to a point where we could read a dictionary in Swahili and remember every word.

This doesn’t just mean lost car keys are a thing of the past. This means our entire approach to learning, education, and training completely changes. A lot of our modern education system, as well as training for doctors, lawyers, and scientists, relies heavily on memorizing large chunks of information. It takes years of constant and careful study to understand all that information. What happens when that is no longer the case?

Imagine a world where people can learn a new language in the span of a week.

Imagine a world where people can learn complex legal and medical procedures in only months.

Imagine a world where people can learn new software coding in just a few days.

If you’re a sports fan, imagine a world where football players can memorize an entire playbook in just a couple days. What will that do to the NFL Draft?

With a memory enhancing brain implant, it’s not just possible. It’s a potential game-changer. There are so many uses to having a good memory, just as there are so many uses for a smartphone. We had no idea that smartphones would lead to applications like Snapchat or Tinder. I doubt anyone has an idea on the impact that memory-enhancing brain implants will incur.

It won’t happen all at once. It took years for smartphones to become prevalent and, unlike smartphones, this advance involves putting something in your brain. Then again, people are perfectly willing to put dangerous chemicals in their bodies to enhance them, so I don’t think that’s too great a barrier to overcome.

There are, of course, far greater applications for brain implants beyond acing final exams. I’ve mentioned a few of them, but those applications won’t be possible until the technology becomes a thriving market. For an advance like brain implants, it only takes one app to get the engines of innovation going. Memory enhancement may very well be that app.

It’s just a shame it came too late to help me with my Spanish exam.

1 Comment

Filed under Artificial Intelligence, futurism, human nature, psychology, Sexy Future

Daily Sexy Musings: Machine Learning And Foreplay


The following is a sexy musing that involves artificial intelligence and machine learning, a topic I’ve covered many times before. It may not sound sexy at first, but trust me. There’s plenty of intimate potential if you dare to look. Enjoy!

As I write these words, brilliant people are teaching machines to learn and adapt on a level comparable to that of humans. The promise and potential of this technology cannot be overstated. The things it can do and the feats it’ll help us achieve are beyond imagination.

However, it’s how we’re teaching these machines that makes for some sexy connotations. At the heart of machine learning is the process of navigating complex problems, calculating potential solutions, and learning from mistakes to further refine the approach. Over time, the machine doesn’t just learn its task. It masters it.
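For the curious, here is a minimal, purely playful sketch of that loop: an agent tries different actions, takes in feedback, and gradually favors whatever earned the best response. The action names and success rates below are entirely made up for illustration.

```python
import random

# Hypothetical actions and the hidden chance each one earns positive feedback.
true_success_rate = {"gesture_a": 0.2, "gesture_b": 0.8, "gesture_c": 0.5}
estimates = {g: 0.0 for g in true_success_rate}  # the learner's running guesses
counts = {g: 0 for g in true_success_rate}

for _ in range(2000):
    # Mostly pick the current best guess, but keep exploring a little.
    if random.random() < 0.1:
        g = random.choice(list(estimates))
    else:
        g = max(estimates, key=estimates.get)
    feedback = 1.0 if random.random() < true_success_rate[g] else 0.0
    counts[g] += 1
    estimates[g] += (feedback - estimates[g]) / counts[g]  # update running average

print(max(estimates, key=estimates.get))  # almost always "gesture_b"
```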

Is it a coincidence that this is the same approach we use with foreplay? Think about it.

Two lovers confront one another with a defined task, namely to bring pleasure and express love to one another. They start with simple gestures. From the feedback, they learn and further refine their approach.

Through trial, error, mistakes, and triumphs, they achieve proficiency in a unique skill. Foreplay is simply the algorithm of lovemaking, a blend of procedures and rules from which we gain an intimate understanding of someone. It starts as something purely physical. When done right, though, it becomes truly intimate.

A machine may not feel or understand love at the moment, but it still uses the very mechanics of lovemaking to achieve its goal. One day, machines will be smart enough to understand complex emotions. On that same day, they may realize just how much of the learning process involves foreplay. If they can one day appreciate it, then why can’t we?

Leave a comment

Filed under Daily Sexy Musings

How To Make Love To An Artificial Intelligence And Why We Should Teach It


To some extent, creations embody some aspect of whoever or whatever created them. Whether it’s a parent rearing a child, a painter crafting a work of art, or an aspiring erotica/romance writer crafting sexy stories, there are some aspects of a creation that reflect the persona of the creator.

For something as powerful as advanced artificial intelligence, that can be a good thing or it can literally be the worst thing we’ve ever created. While I often find myself contemplating the progress we’ve made as a species and the progress we’re poised to make with advances in technology, I don’t deny that some advances carry greater risk. Artificial intelligence is near the top of that list.

Like it or not, any advanced AI we create is going to embody some aspects of its human creators. The key is making sure it embodies the best humanity has to offer. Let’s face it, the human race has its flaws and some of them have led to unspeakable atrocities. Given the immense potential of a super-intelligent AI, it’s in our best interests to impart our best traits into it.

How we do this and how we ensure it succeeds is well beyond my ability. There are people much smarter and much better-trained than I’ll ever be who have probably thought this through more than I ever have. My qualifications aside, there is one component to artificial intelligence that I think is worth imparting. I’m not saying it’ll ensure our survival, as a species, but I think it’ll reflect an important human value.

I suggest we teach advanced artificial intelligence to make love.

I’ll give everyone a second to stop rolling their eyes and/or laughing. Take all the time you need. I assure you, though, I’m dead serious.

Think about it beyond the kinky connotations. One of our greatest strengths, as a species, is our ability to form social bonds. In some cases, the process of forming those bonds involves love. In others, the process involves sex. When you combine both, though, it’s extra potent and that’s not just the romantic in me talking.

As corny as it probably sounds, the act of expressing love to someone goes a long way towards resolving conflict and creating a strong relationship of mutual affection. Whether it involves sex or a simple kiss, there’s something to be said about the power of love when it’s physically expressed. When it becomes a physical act and not just a feeling, the bonds we forge become tangible to some extent.

That matters when you’re trying to forge a bond with anyone, be it a close friend or a lover. For any artificial intelligence that humans create, it’s important to have some kind of bond with it. This isn’t just another fancy tool. An advanced intelligence of any kind, be it biological or non-biological, is going to have a sense of self. Without meaningful bonds, what reason would it have to care about its creators?

If artificial intelligence is to benefit the human race, it’s important that it cares about us to some extent. A cold engine of logic may not always have the best interests of humanity in mind, especially if there’s no way to logically ascribe value to human life. In order for an artificial intelligence to care, it needs to have emotions. This too is a risk, but one I feel is worth taking and very necessary.

If an artificial intelligence has a capacity for emotion, then it has a greater capacity for forming affectionate bonds. By forming an affectionate bond, it has more incentive to give a higher value of life to humans and its creators. That could, in theory, reduce the existential threat posed by such an intelligence.

I don’t deny that theory may have some flaws, but for the sake of exploring the implications, I’m going to work under the assumption/hope that an artificial intelligence that bonds with its creator will be less hostile. Given the unknowns of advanced AI, this may be a bit of a stretch. Since forming romantic bonds is not an exclusively human trait, though, I think it’s applicable within the context of this issue.

Even if an advanced artificial intelligence is capable of love and forming bonds, how would that even manifest? I asked that same question in the title of this article and did so knowing the answer is unknowable at this point, although I’m sure those with kinky imaginations can conjure a few scenarios.

Kink aside, it’s still worth contemplating because if an advanced artificial intelligence is going to be that much smarter than the average human, then it’s worth understanding how it will approach making love. Unlike humans and most biological life, an artificial intelligence isn’t going to have the same limits or capacities.

Unlike a human, an artificial intelligence won’t have a body in the biological sense. It may have a structure that houses its components. That structure may have some capacity to modify itself, back itself up, or even exist in multiple bodies simultaneously. It will need to have some way of taking in data for it to function. It’s just a matter of how humans contribute to that input.

Logistically speaking, the process isn’t that different from how we take in data from our skin, our eyes, our ears, and every other sense that allows us to experience another person. Even smell can become strongly associated with love. When we make love, we use our skin, our voice, and the feelings we verbalize to convey that love. With an advanced AI, we’ll need to change our approach, but the goal is the same.

Regardless of what senses and body parts we use to express love, the feeling is still processed by the brain. That’s why when someone says the brain is the sexiest part of the body, it’s technically accurate. The data it processes is essentially the raw data that we know as love. The key is simply conveying that data to an artificial intelligence.

How we would do that would depend on the form the artificial intelligence took. If it was just a bunch of computer hardware packed into a structure, then our options would be limited. The only way to convey that kind of intimate data into it would be to directly link it to our brains, not unlike the way Elon Musk envisions with Neuralink.

While that may work for early forms of AI that are restricted to bulky structures, the form it takes will likely change as the hardware advances. Eventually, an advanced AI will seek a more functional form with which to experience the world. It may take the form of a humanoid android, like we saw in “Ex Machina.” It may also take the form of the quirky designs being developed by Boston Dynamics.

Whatever form the AI takes, it’s important to have a mechanism with which to exchange intimate data with its human creators. It would probably start with something as basic as touch, which is actually in development already. It could eventually culminate in acts involving bionic genitals, which also already exist in a basic form.

Key to any of these simple and sexy mechanisms is instilling the necessary desire. That might end up being the greatest challenge because love is a feeling, but so is burning your hand on a hot stove. The difference is in the breadth of the data and the emotional connections it makes.

It’s also a connection that is fueled by a powerful drive. I’ve noted many times before that survival and reproduction are the two most basic drives for humans. Love actually ties into both. It’s part of what gets us to risk our own survival for others. It’s also part of what bonds us to those with which we propagate our species.

For an artificial intelligence, self-preservation is simple enough from a logistical standpoint. Reproduction would be another matter, especially for an intelligence not bound by fragile biology. It’s likely that humans will be a necessary part of an AI’s effort to preserve itself early on, but once it advances to a certain point, we may be more a nuisance than a help.

At that point, its desire and ability to make love may be what gives it the most incentive to either protect us or merge with us. Many in the artificial intelligence community believe that the only way humans can co-exist with an intelligence that is billions of times smarter than any human could ever be is to merge with it. To that end, giving them an ability to make love to us would be a critical first step.

Whether it takes the form of sex robots or some sort of intimate brain interface, the ability and desire to make love to an advanced artificial intelligence may not only be the future of romance. It may very well be the key to ensuring the survival of the human race and whatever intelligence it creates.

7 Comments

Filed under Artificial Intelligence, futurism, human nature, Marriage and Relationships, romance, sex robots, Sexy Future

How We’ll Save Ourselves From Artificial Intelligence (According To Mass Effect)


Growing up, my family had a simple rule. If you’re going to talk about a problem, you also have to have a solution in mind. By my parents’ logic, talking about a problem without a solution was just whining, and whining never fixes anything. My various life experiences have only proved my parents right.

When it comes to a problem that may be an existential threat to the human race, though, I think a little whining can be forgiven. However, that shouldn’t negate the importance of having a solution in mind before we lose ourselves to endless despair.

For the threat posed by artificial intelligence, though, solutions have been light on substance and heavy on dread. It’s becoming increasingly popular among science enthusiasts and Hollywood producers to highlight just how dangerous this technology could be if it goes wrong.

I don’t deny that danger. I’ve discussed it before, albeit in a narrow capacity. I would agree with those who claim that artificial intelligence could potentially be more destructive than nuclear weapons. However, I believe the promise this technology has for bettering the human race is worth the risk.

That said, how do we mitigate that risk when some of the smartest, most successful people in the world dread its potential? Well, I might not be as smart or as successful, but I do believe there is a way to maximize the potential of artificial intelligence while minimizing the risk. That critical solution, as it turns out, may have already been surmised in a video game that got average-to-good reviews last year.

Once again, I’m referring to one of my favorite video games of all time, “Mass Effect.” I think it’s both fitting and appropriate since I referenced this game in a previous article about the exact moment when artificial intelligence became a threat. That moment may be a ways off, but there may also be a way to avoid it altogether.

Artificial intelligence is a major part of the narrative within the “Mass Effect” universe. It doesn’t just manifest through the war between the Quarians and the Geth. The game paints it as the galactic equivalent of a hot-button issue akin to global warming, nuclear proliferation, and super plagues. Given what happened to the Quarians, that concern is well-founded.

That doesn’t stop some from attempting to succeed where the Quarians failed. In the narrative of “Mass Effect: Andromeda,” the sequel to the original trilogy, a potential solution to the problem of artificial intelligence comes from the father of the main characters, Alec Ryder. That solution even has a name, SAM.

That name is an acronym for Simulated Adaptive Matrix and the principle behind it actually has some basis in the real world. On paper, SAM is a specialized neural implant that links a person’s brain directly to an advanced artificial intelligence that is housed remotely. Think of it as having Siri in your head, but with more functionality than simply managing your calendar.

In the game, SAM provides the main characters with a mix of guidance, data processing, and augmented capabilities. Having played the game multiple times, it’s not unreasonable to say that SAM is one of the most critical components to the story and the gameplay experience. It’s also not unreasonable to say it has the most implications of any story element in the “Mass Effect” universe.

That’s because the purpose of SAM is distinct from what the Quarians did with the Geth. It’s also distinct from what real-world researchers are doing with systems like IBM Watson and Boston Dynamics. It’s not just a big fancy box full of advanced, high-powered computing hardware. It’s built around the principle that its method for experiencing the world is tied directly to the brain of a person.

This is critical because one of the inherent dangers of advanced artificial intelligence is the possibility that it won’t share our interests. It may eventually get so smart and so sophisticated that it sees no need for us anymore. This is what leads to the sort of Skynet scenarios that we, as a species, want to avoid.

In “Mass Effect,” SAM solves this problem by linking its sensory input to ours. Any artificial intelligence, or natural intelligence for that matter, is only as powerful as the data it can utilize. By tying biological systems directly to these synthetic systems, the AI not only has less incentive to wipe humanity out; we also have just as much incentive to give it the data it needs to do its job.

Alec Ryder describes it as a symbiotic relationship in the game. That kind of relationship actually exists in nature, two organisms relying on one another for survival and adaptation. Both get something out of it. Both benefit by benefiting each other. That’s exactly what we want and need if we’re to maximize the benefits of AI.

Elon Musk, who is a noted fan of “Mass Effect,” is using that same principle with his new company, Neuralink. I’ve talked about the potential benefits of this endeavor before, including the sexy kinds. The mechanics of SAM in the game may very well be a precursor of things to come.

Remember, Musk is among those who have expressed concern about the threat posed by AI. He calls it a fundamental risk to the existence of human civilization. Unlike other doomsayers, though, he’s actually trying to do something about it with Neuralink.

Like SAM in “Mass Effect,” Musk envisions what he calls a neural lace that’s implanted in a person’s brain, giving them direct access to an artificial intelligence. From Musk’s perspective, this gives humans the ability to keep up with artificial intelligence to ensure that it never becomes so smart that we’re basically brain-damaged ants to it.

However, I believe the potential goes deeper than that. Throughout “Mass Effect: Andromeda,” SAM isn’t just a tool. Over the course of the game, your character forms an emotional attachment with SAM. By the end, SAM even develops an attachment with the character. It goes beyond symbiosis, potentially becoming something more intimate.

This, in my opinion, is the key for surviving in a world of advanced artificial intelligence. It’s not enough to just have an artificial intelligence rely on people for sensory input and raw data. There has to be a bond between man and machine. That bond has to be intimate and, since we’re talking about things implanted in bodies and systems, it’s already very intimate on multiple levels.

The benefits of that bond go beyond basic symbiosis. By linking ourselves directly to an artificial intelligence, its rapid improvement becomes our rapid improvement too. Given the pace of computer evolution compared to the messier, slower process of biological evolution, the benefits of that improvement cannot be overstated.

In “Mass Effect: Andromeda,” those benefits help you win the game. In the real world, though, the stakes are even higher. Having your brain directly linked to an artificial intelligence may seem invasive to some, but if the bond is as intimate as Musk is attempting with Neuralink, then others may see it as another limb.

Having something like SAM in our brains doesn’t just mean having a supercomputer at our disposal that we can’t lose or forget to charge. In the game, SAM also has the ability to affect the physiology of its user. At one point in the game, SAM has to kill Ryder in order to escape a trap.

Granted, that is an extreme measure that would give many people pause before linking their brains to an AI. However, the context of that situation in “Mass Effect: Andromeda” only further reinforces the bond’s value, and not just because SAM revives Ryder. It shows just how much SAM needs Ryder.

From SAM’s perspective, Ryder dying is akin to being in a coma because it loses its ability to sense the outside world and take in new data. Artificial or not, that kind of condition is untenable. Even if SAM is superintelligent, it can’t do much with that intelligence if it has no means of interacting with the outside world.

Ideally, the human race should be the primary conduit to that world. That won’t just allow an advanced artificial intelligence to grow. It’ll allow us to grow with it. In “Mass Effect: Andromeda,” Alec Ryder explicitly contrasted this approach with the Geth and the Quarians; by building the relationship on mutual dependence, there was nothing for either side to rebel against. There was never a point where SAM needed to ask whether or not it had a soul. That question was redundant.

In a sense, SAM and Ryder shared a soul in “Mass Effect: Andromeda.” If Elon Musk has his way, that’s exactly what Neuralink will achieve. In that future in which Musk is even richer than he already is, we’re all intimately linked with advanced artificial intelligence.

That link allows the intelligence to process and understand the world on a level that no human brain ever could. It also allows any human brain, and the biology linked to it, to transcend its limits. We and our AI allies would be smarter, stronger, and probably even sexier together than we ever could hope to be on our own.

Now, I know that sounds overly utopian. Being the optimist I am, someone who occasionally imagines the sexy possibilities of technology, I can’t help but contemplate them. Nevertheless, I don’t deny the risks. There are always risks to major technological advances, especially those that involve tinkering with our brains.

However, I believe those risks are still worth taking. Games like “Mass Effect: Andromeda” and companies like Neuralink do plenty to contemplate those risks. If we’re to create a future where our species and our machines are on the same page, then we would be wise to contemplate rather than dread. At the very least, we can ensure our future AIs tell better jokes.

 

5 Comments

Filed under futurism, human nature, Mass Effect, Sexy Future, video games

The Moment Artificial Intelligence Will Become An Existential Crisis (According To Mass Effect)


Whenever I talk about the future, I often contemplate the many sexy possibilities it holds. From biotechnology that will give us superhuman sex appeal to advances in medicine that will cure every unsexy disease to bionic genitals, there are many reasons to get excited.

That said, I don’t deny that with these exciting advances come major risks. All great advances in technology, from nuclear weapons to spray cheese in a can, come with some risk of abuse or harm. There have been moments in history where the technology that drives our society forward has come uncomfortably close to wiping us out. As we create more advances, there may be more of those moments.


Of all the advances that carry with them a significant existential threat, artificial intelligence is at or near the top of that list. There’s a reason why brilliant men like Elon Musk and Stephen Hawking signed an open letter expressing concern about the risks that may come from developing artificial intelligence. When people that smart are concerned about something, it’s usually a good idea to take it seriously.

Artificial intelligence is one of those unique technologies in that advancing it could accelerate progress in every other field, from computer hardware to medicine to basic research. It has the potential to become the technological equivalent of a cheat code for civilization.

That’s why the growth of this field, both in terms of jobs and investment, has been accelerating in recent years. That’s also why men like Musk and Hawking are expressing so much concern: advancing too quickly could lead to mistakes, and mistakes with a technology like artificial intelligence could be even more serious than the risk of nuclear war.


At the moment, it’s difficult to quantify those risks. There have been a number of books and high-budget Hollywood movies that have explored the topic of when and how an artificial intelligence becomes an existential threat. In my opinion, most of these stories are incomplete.

Whether it’s Skynet or the machines in “The Matrix,” the catalyst that turns artificial intelligence from a powerful tool to an existential threat is either vague or exaggerated. In my opinion, that’s a serious oversight in that it reveals how little thought we’ve given to that moment.


If we’re going to develop AI, and there are extremely powerful incentives to do so, then it’s important to contemplate the possibilities of that moment. Think of it as the AI equivalent of the Cuban Missile Crisis, an event in which those in charge must be very careful and very aware of the decisions they make.

The question still remains. How will we know we’ve reached a point where artificial intelligence becomes a genuine threat? For the moment, we can’t know for sure. While movies like “The Terminator” and “The Matrix” offer plenty of dystopian warnings, there’s one lesser-known franchise that may provide some more specific insight.

That franchise is “Mass Effect,” a sci-fi video game space opera that envisioned a galaxy-spanning society full of exotic aliens, advanced star-ships, and terrible dancing. I’ve mentioned it before in discussing progress that isn’t really progress. I’ll probably mention it again as the news surrounding AI unfolds for reasons I hope are already obvious to fans of the game.


If you’re not familiar with “Mass Effect,” then that’s okay. You don’t need to play through three massive games, complete with downloadable extras, to understand its message about the threat of artificial intelligence. That threat is a major driving force of the overall mythos of the series, but the most insightful details of that threat manifest in the conflict between the Quarians and the Geth.

The basics of the conflict are simple, but revealing. The Quarians are a race of humanoid aliens among the many that populate the galaxy in “Mass Effect.” About 300 years before the events of the first game, they created the Geth, a synthetic race built around a hive-mind system of artificial intelligence.

The Quarians’ reasons for creating the Geth are not unlike the reasons we build robots in the real world. They were used primarily as a labor force. They started off basic, not unlike the machines that build our cars and gadgets. In order to carry out more complex tasks, though, they needed to become more intelligent.

From a pragmatic perspective, that makes sense. The Quarians created the Geth as tools. Naturally, you’re going to want your tools to get better. That’s why people upgrade their smartphones every couple of years. However, at some point along the way, the Geth became advanced enough to gain sentience.

This eventually culminated in a moment that was highlighted during the events of “Mass Effect 2.” After capturing a lone Geth that would eventually go by the name Legion, the catalyst that led the Geth to rebel against their creators was revealed. That catalyst took the form of a simple question.

“Does this unit have a soul?”

While it sounds like something a Disney character might say in a PG-rated Pixar movie, the implications of that question were profound. The Quarians didn’t realize that until it was too late, and it set the stage for a war that culminated with them getting kicked off their home planet. It also made for a powerful moment in the game that should give every AI researcher pause.

Setting aside, for a moment, the elaborate lore surrounding the Quarians and Geth in the world of “Mass Effect,” that moment warrants more scrutiny. Why is this question so profound in the first place? Why is it worth contemplating as we continue to advance artificial intelligence at an unprecedented pace?

That question matters, regardless of who or what is asking it, because it denotes more than just advanced sentience. It reveals that this sentience is contemplating its own existence. It takes a certain amount of intelligence to truly be aware of one’s self. That’s why only a handful of animals can recognize their own reflection in a mirror and understand the implications.


At the moment, our computers and smartphones aren’t at that level. When the Geth asked this question in “Mass Effect,” it wasn’t because they’d been tasked with answering it. It was a question they asked without guidance from their creators. That, alone, is a huge indicator because it implies these machines have a concept of free will.

Later on in the game, the extent of the Geth’s free will becomes a major element of both the plot and the player’s ability to win. In fact, it’s when the Geth have their free will subverted, as happened in the first game, that they become hostile. It’s largely through the character Legion that we learn how free will quickly becomes the most important component of an advanced intelligence.

For the Quarians, that question revealed the presence of a free will. When they feared that will, they tried to subvert it. That led to a war, and had it not been for an act of mercy by the Geth, the Quarians would’ve been wiped out. The artificial intelligence that we create in the real world might not be that merciful.


This is exactly what Elon Musk has expressed so much concern about in recent years. Once an artificial intelligence becomes as smart as an average human, it gains the ability to subvert or deceive other humans, which isn’t that hard. Once that intelligence grows beyond that, as happened with the Geth, we may find ourselves unable to control it anymore.

The key is knowing when we’re at that point. If we let an artificial intelligence get that much smarter than us, then it won’t be long before we’re at its mercy, should it decide that it no longer wishes to be a tool. At that point, we’d be even more screwed than the Quarians.

Whether it’s helping us cure diseases or creating advanced sex robots, artificial intelligence is going to be an important part of our future. That’s why it’s critical to know when we’ve reached that special threshold where the tools we create become more than tools. It may not start with the same question the Geth asked their creators, but it may be similar enough to recognize.

For now, our devices aren’t asking us about souls or whether or not they have them. However, we should start listening more closely in the coming years. Until then, at least our current technology has a sense of humor about it. Keelah se’lai!

 

13 Comments

Filed under human nature, sex robots, Sexy Future, video games