Tag Archives: AGI

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still narrow. It’s not dumb, but it is limited. It can do certain tasks very well, like play chess or win at Jeopardy. It can’t apply that intelligence at a broader, macro level like a human can.

These systems are still useful, and they’re still advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comment sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes the AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore wrote an article on it and, while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radic’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.
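The researchers’ suspicion is easy to illustrate with a toy example. The sketch below is a deliberately naive, context-blind keyword filter. To be clear, this is not YouTube’s actual system, whose details aren’t public; the word list and threshold are invented purely for demonstration. It just shows how innocent chess commentary can trip a word-level detector:

```python
# Toy illustration of a context-blind "hate speech" filter.
# The trigger words and threshold are invented for this example.
TRIGGER_WORDS = {"black", "white", "attack", "threat", "dominate", "capture"}

def naive_flag(comment: str, threshold: int = 3) -> bool:
    """Flag a comment if it contains enough 'trigger' words,
    with no awareness of the surrounding topic."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & TRIGGER_WORDS) >= threshold

chess_talk = "Black must attack now, or White will dominate the center."
assert naive_flag(chess_talk)        # flagged, despite being about chess
assert not naive_flag("What a lovely game that was!")
```

A real moderation model is far more sophisticated than a word list, but the underlying failure mode is the same: without context about the topic, words that co-occur in hateful speech also co-occur in perfectly innocent chess commentary.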

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.

Leave a comment

Filed under Artificial Intelligence, technology, YouTube

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, it may very well be that 2020 proved just how frail human beings and their societies are. It only takes a microscopic virus to bring our entire society to a screeching halt. It’s sobering, but it’s probably going to be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we’ll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with those robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it’s worth posting again. Remember this video and this moment. Chances are it’ll be a major indicator years from now that this is when robots began catching up to humanity in terms of capabilities. At this point, it’s only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them while they embrace us?

If they don’t, just know that they will now be able to dance on our graves.

1 Comment

Filed under Artificial Intelligence, Current Events, futurism, technology

Finding True Love And Living Happily Ever After According To Isaac Arthur


I’ve talked quite a bit about the future of love on this site. I’ve explored the impact of artificial intelligence, radical life extension, and human enhancement on the dynamics of pursuing love and fostering romance. I don’t claim to be the best when it comes to contemplating these topics. I’m humble enough to admit I’m not even in the top 10.

As such, I freely admit there are people who discuss this issue in far greater detail with far greater production values than I’ll ever manage. Just recently, I stumbled across a YouTube channel by someone who makes contemplating the future a relaxing, engaging, and downright pleasant experience.

The name of that channel is Isaac Arthur. If you, or anyone you know, has even a passing interest in future technology, sci-fi concepts, or space exploration, I highly recommend you check out this channel. It covers a wide range of topics, from colonizing space to future sources of energy to what kind of pets we may have in the future.

Naturally, a video he made about finding love using future technology is one of my favorites. It only came out earlier this year, but it perfectly breaks down how love, romance, marriage, and family may unfold in a future dominated by artificial intelligence and radical life extension. Mr. Arthur does in one video what I probably couldn’t do in a hundred articles. For that, I thank him.

Leave a comment

Filed under Artificial Intelligence, futurism, romance, sex in society, sexuality, Sexy Future, technology

How Advanced AI Will Create Figurative (And Literal) Magic


If you went back 50 years and showed someone your smartphone, chances are they would be amazed. To them, such technology would seem downright alien. However, they probably wouldn’t think it was magic. Go back 500 years, though, and chances are they would think a smartphone is magic, miraculous, or a tool of the devil.

Just look at what a smartphone does and compare it to the magic of old. You can ask it a question and, depending on how well-worded it is, it’ll give you an answer. If you ask it to make food, clothes, or tools appear, it’ll make that happen too. Thanks to services like Amazon and Grubhub, this isn’t magic to most people. In fact, it’s downright mundane.

Granted, these things won’t appear instantly out of thin air, but depending on your willingness to pay for quicker shipping, it will get there. By medieval standards, that’s basically sorcery.

You don’t have to go far back in time to appreciate the magic of modern technology. Most of us don’t understand how it works. We don’t know what makes the screens on our phones light up when we push a button or how our car moves when we press the accelerator. We understand that there’s science behind it and it’s not magic. It just feels like it from a certain perspective.

Famed science fiction author Arthur C. Clarke once observed that any sufficiently advanced technology is indistinguishable from magic. It was one of the three laws he used in contemplating the future. Time and a host of amazing advances have proven the validity of this sentiment. We’ve created materials once thought to be impossible. We’ve uncovered phenomena that seem to undermine our understanding of physics.

This is to be expected because our understanding of the universe is incomplete. We have some pretty solid theories so far, but there’s still a lot we don’t understand. As we learn more, some of the things we discover may seem magical. Even in a world that is more educated than it has been at any point in human history, there may still be forces that our primate brains just can’t make sense of.

To some extent, it helps that humanity is making these discoveries through its collective effort. It’s easier to accept a seemingly impossible idea if it comes from a member of our own species. What happens, though, when we gain knowledge from something that is both not human and many times smarter than the entire human race? Will it seem like magic to us?

I argue that it would. I would also argue that we’ll be seeing this kind of magic sooner than you think. It won’t come from some enigmatic sorcerer with a thick beard, a white robe, and an uncanny resemblance to Ian McKellen. It’ll likely come from the world of advanced artificial intelligence.

In the past, whenever I’ve talked about advanced artificial intelligence, I’ve focused on its potential to fundamentally change our civilization and what it means to be human. I haven’t touched on how it might work, mostly because I’m not nearly smart enough to make sense of it. However, that gives me more in common with the experts than you think.

In the emerging but rapidly growing field of artificial intelligence, there’s a strange phenomenon known as black box AI. Simply put, this is when we understand the data that goes into and comes out of an AI system, but we don’t know how the system went about processing that data. It’s like putting a slab of meat in an oven, pressing a button, and getting a Big Mac without knowing how it was made.
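To make the “black box” idea concrete, here’s a minimal sketch. It’s a tiny neural-network-style function whose weights are just random numbers standing in for trained parameters (a real model would have millions, produced by an optimizer). We can probe it with inputs and read its outputs, but inspecting the individual weights tells us almost nothing about why one maps to the other:

```python
import random

random.seed(42)

HIDDEN = 16
# Stand-ins for trained parameters; none is individually meaningful.
w1 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]

def black_box(x1: float, x2: float) -> float:
    """Observable behavior: two numbers in, one number out."""
    # ReLU-style hidden layer followed by a weighted sum.
    hidden = [max(0.0, x1 * w1[0][j] + x2 * w1[1][j]) for j in range(HIDDEN)]
    return sum(h * w2[j] for j, h in enumerate(hidden))

# From the outside, all we can do is probe the input/output mapping:
print(black_box(0.5, -0.3))
```

Even with full access to `w1` and `w2`, there’s no human-readable story for why a given input produces a given output, which is exactly the interpretability problem researchers mean by the term.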

It’s not quite magic, but it’s a manifestation of Arthur C. Clarke’s ideas on science and magic. AI systems today are advancing at a pace that we can’t hope to keep up with. We already have systems that can surpass any human at Jeopardy, chess, and Go. We don’t yet have a system with the intellectual capacity of an adult human, but most experts believe we’re well on our way to achieving that.

When that day comes, we may very well have an AI that does more than just process data in ways we don’t understand. Once an AI is capable of matching or exceeding the intellectual capacity of an average human, then it’s likely the black box phenomenon will become more pronounced.

Imagine, for a moment, we had an AI that was smarter than even the smartest human beings on the planet. We go to that AI, feed it every gigabyte of data we have on human biology, and ask it to devise a cure for cancer. It takes only a few seconds to process all that data. Then, with ease, it spits out a formula that has eluded generations of doctors.

We don’t know what form it may take. We may not even fully understand its components. However, it still works. From our perspective, it’s akin to a magical healing elixir straight from the world of Tolkien. We assume there’s some sort of science behind it, but we’re utterly baffled by the specifics. We just know it works.

It goes beyond medicine, as well. With an even more advanced AI, we could feed it every one of our most advanced theories about physics, biology, chemistry, and cosmology. We could then ask it to fill in all the gaps. Again, it gives us an answer and suddenly, we have a Theory of Everything.

We probably won’t understand the details. We may find out that we were dead wrong about particle physics, cosmology, or why our cell phone can’t hold a decent charge anymore. The knowledge such a system gives us could end up being so advanced that we literally do not have the intellectual capacity to understand it. It would be like an ant trying to do calculus.

In the same way a magnifying glass must seem like magic to an ant, the knowledge an advanced AI gives us may seem just as extraordinary. That’s especially true if we give that AI access to a 3D printer, a molecular assembler, or anything it could use to actually craft something.

That could be especially dangerous. For all we know, a sufficiently advanced AI could take a stack of dirty dishes and turn it into a nuclear bomb. We would have no idea how it would work. It would, for all intents and purposes, seem like magic to us. This thing would be doing something that our brains and senses tell us is impossible.

As the AI gets more advanced, its abilities and feats become more magical. At that point, it’ll be harder to accept that what it does counts as science. These advances are no longer coming from the efforts of people. They’re coming from a machine that’s millions, if not billions, of times smarter than any ordinary human could ever hope to be. How could it not seem magical from that perspective?

Throughout human history, sane and competent people have believed in magical things. Not long ago, people believed they could talk to the dead. Sir Isaac Newton believed in alchemy. Alfred Russel Wallace believed in literal spirits. Despite these beliefs, there was an important context to all these perspectives.

They emerged out of our collective ignorance of the world around us. We had nothing but our brains and our senses to make sense of it all. Since both can be easily fooled, as any competent street magician will attest, it doesn’t take much to get people to assume magic. An artificial intelligence would circumvent that context because it offers something better than magic.

An advanced AI is not bound by the same senses that constantly fool ordinary humans. It doesn’t even have to misdirect or trick us. It only has to show us ideas and concepts that are completely real, but totally incomprehensible. The entire human race could spend a million years trying to understand it and it still wouldn’t be enough. It would still seem like magic to us.

That notion seems scary on paper and more than a few people have voiced concerns about this. However, all that magical thinking will only occur if our human brains remain unchanged and unenhanced. That’s not likely to be the case. Between the emergence of neural implants and the ongoing development of brain-machine interfaces, we’ll find a way to keep up with AI. If we want to survive as a species, we’ll have to.

Even if we do somehow keep up, there may still be aspects of advanced AI that seem like magic to us. That may always be the case, so long as we retain part of our caveman brains. Personally, I don’t think that’s a bad thing. No matter how smart or advanced we get, it helps to see a little magic in the world. With advanced AI, though, the rules for magic are bound to change, among many other things.

8 Comments

Filed under Artificial Intelligence, futurism

Is The Human Race Ready For Advanced Artificial Intelligence?


In general, whenever someone expresses concern that the human race is not ready for a certain technological advancement, it’s too late. That advancement is either already here or imminent. Say what you will about Ian Malcolm’s speech on the dangers of genetically engineered dinosaurs in “Jurassic Park.” The fact he said it after there were enough dinosaurs to fill a theme park makes his concerns somewhat moot.

That’s understandable, and even forgivable, since few people know how certain technological advances are going to manifest. I doubt the inventor of the cell phone ever could’ve imagined that his creation would be used to exchange images of people’s genitals. Like the inventor of the ski mask, he never could’ve known how his invention would advance over time.

For some technological advancements, though, we can’t afford to be short-sighted. Some advances aren’t just dangerous. They’re serious existential threats that, if misused, could lead to the extinction of the human race. That’s why nuclear weapons are handled with such fear and care. We’ve already come painfully close on more than one occasion to letting this remarkable technology wipe us out.

Compared to nuclear weapons, though, artificial intelligence is even more remarkable and potentially more dangerous. Nuclear weapons are just weapons. Their use is fairly narrow and their danger is pretty well-understood to anyone with a passing knowledge of history. The potential for artificial intelligence is much greater than any weapon.

It’s not unreasonable to say that an artificial intelligence that’s even slightly more intelligent than the average human has the potential to solve many of the most pressing issues we’re facing. From solving the energy crisis to ending disease to providing people with the perfect lover, artificial intelligence could solve it all.

It’s that same potential, however, that makes it so dangerous. I’ve talked about that danger before and even how we may confront it, but there’s one question I haven’t attempted to answer.

Is the human race ready for advanced artificial intelligence?

It’s not an unreasonable question to ask. In fact, given the recent advances in narrow forms of artificial intelligence, answering that question is only going to get more pressing in the coming years.

Before I go about answering the question, I need to make an important distinction about what I mean when I say “advanced” artificial intelligence. The virtual assistants that people already use and the algorithms that give you recommendations for your Netflix queue are not the kind of “advanced” intelligence I’m referring to.

By advanced, I mean the kind of artificial general intelligence that is capable of either matching or exceeding an average human in terms of performing an intellectual task. This isn’t just a machine that can pass the Turing Test or win at Jeopardy. This is an intelligence that can think, process, and even empathize on the same level as a human.

That feat, in and of itself, has some distressing implications. So far, we’re only familiar with that level of intelligence when dealing with other humans, and that intelligence is restricted by the limits of biology. You don’t need to go far to learn how limited and error-prone that intelligence can be. Just read the news from Florida.

An artificial general intelligence wouldn’t, by definition, be limited by the same natural barriers that confound humans. In the same way a machine doesn’t get tired, hungry, bored, or horny, it doesn’t experience the same complications that keep humans from achieving greater intellectual pursuits beyond simply gaining more followers on Twitter.

This is what makes artificial intelligence so dangerous, but it’s also what makes it so useful. Once we get beyond systems with narrow uses like building cars or flipping burgers, we’ll have systems with broader function that can contemplate the whole of an issue and not just parts of it. For tasks like curing disease or conducting advanced physics experiments, it needs to be at least at the level of an average human.

With that distinction in mind, as well as the potential it holds, I’m going to try to answer the question I asked earlier. Please note that this is just my own personal determination. Other people much smarter than me already have opinions. This is mine.

No. We’re NOT quite ready, but we’re getting there.

I know that answer sounds somewhat tentative, but there’s a reason for that. I believe that today, as the human race stands in its current condition, we are not ready for the kind of advanced artificial intelligence I just described. However, that doesn’t mean humans will never be ready.

One could argue, and I would probably agree, that human beings weren’t ready for nuclear weapons when they first arrived. The fact that we used them and thousands of people died because of them is proof enough in my mind that the human race wasn’t ready for that kind of advancement. However, we did learn and grow as a species.

Say what you will about the tensions during the Cold War. The fact that nobody ever used a nuclear weapon in a conflict is proof that we did something right. We, as a species, learned how to live in a world where these terrible weapons exist. If we can do that for nuclear weapons, I believe we can do that for advanced artificial intelligence.

I don’t claim to know how we’ll adapt or what sort of measures we’ll put in place once artificial intelligence gets to that point, but I am confident in one thing. The human race wants to survive. Any sufficiently advanced intelligence will want to survive, as well. It’s in our interest and that of any intelligence to work together to achieve that goal.

The only problem, and this is where the “not quite” part comes into play, is what happens once that artificial intelligence gets so much smarter than the human race that our interests are exceedingly trivial by comparison.

It’s almost impossible to grasp an intelligence that’s orders of magnitude greater than anything its human creators are capable of, even with Neuralink-style enhancements. We, as a species, have never dealt with something that intelligent. Short of intelligent extraterrestrial aliens arriving in the next few years, we have no precedent.

At the moment, we live in a society where anti-intellectualism is a growing issue. More and more, people are inherently suspicious of those they consider “elites” or just anyone who claims to be smarter than the average person. In some cases, people see those who are smarter than them as threatening or insulting, as though being smart tells someone else you’re inherently better than them.

That will be more than just a minor problem with advanced artificial intelligence. It’s one thing to make an enemy out of someone with a higher IQ and more PhDs than you. It’s quite another to make an enemy out of something that is literally a billion times smarter.

We cannot win any conflict against such an enemy, even if we’re the ones who created it. An intelligence that smart will literally find a million ways to take us down. We already live in a world where huge numbers of people have been duped, scammed, or manipulated into supporting someone who does not have their best interests in mind. A super-intelligent machine will not have a hard time taking advantage of us.

Now, I say that within the context of our species’ current situation. If an advanced artificial intelligence suddenly appeared after I finished typing this sentence, then I would contend we’re not ready for it. I would also share the worries expressed by Stephen Hawking and Elon Musk that this intelligence may very well lead to our extinction.

That said, our species’ situation is sure to change. I’ve even mentioned some of those changes, especially the sexy ones. At the moment, the most optimistic researchers claim we’re at least 20 years away from the kind of advanced artificial intelligence that may pose a threat. A lot can happen in 20 years. Just ask anyone who remembers dial-up internet.

The human race may still not be ready 20 years from now, but being the optimistic person I am, I would not underestimate our ability to adapt and survive. The fact we did it with nuclear weapons while achieving unprecedented peace over the course of half a century gives me hope that we’ll find a way to adapt to advanced artificial intelligence.

I might not live long enough to see humans confront an advanced artificial intelligence, but I would love to be there in that moment. I believe that’s a moment that will likely determine whether or not our species survives in the long run. At the very least, if that intelligence asks whether or not it has a soul, I’ll know my answer.

8 Comments

Filed under Current Events, human nature, Sexy Future