Tag Archives: the Control Problem

A Robot Demonstrates Very Human-Like Expressions (And Why That’s A Good Thing)

Meet Ameca, the remarkable (and not at all creepy) human-like robot -  National | Globalnews.ca

We live in amazing, yet scary times. To some, that’s exciting. To others, it’s terrifying. I can understand both perspectives, but in general I try to be more optimistic about the future. Granted, that has been very hard lately, but I’m trying.

In my experience, a good test for how optimistic or fatalistic people are is to note their reactions to certain technological breakthroughs. Over the years, I’ve discussed and highlighted many exciting trends in technology. Some are further off than others. I often single out artificial intelligence because it’s one of those rare technologies where, should we reach a certain point, we literally cannot imagine where it could lead.

It also happens to be the technology that brings out the most extreme reactions whenever there’s an advancement. I see it every time Boston Dynamics shares a demonstration of one of their advanced robots. There’s usually no shortage of jokes about Skynet or “The Matrix.”

Recently, another demonstration garnered some colorful reactions from the public, but for once it wasn’t from Boston Dynamics. It came from Engineered Arts, a UK-based company that specializes in humanoid robots. They may not be at the forefront of advanced AI, but they are the ones who could help create the bodies that these systems eventually inhabit.

For more information about the demonstration, here’s a piece from Nerdist.

Nerdist: HERE’S THE FREAKIEST AND MOST REALISTIC HUMANOID ROBOT EVER

Engineered Arts, “the UK’s leading designer and manufacturer of humanoid entertainment robots,” recently introduced Ameca to the internet. Unsurprisingly, net denizens everywhere lost their skull-bound wetware computers over the bot. Because who wouldn’t?

The above video is brief, but Engineered Arts gives us enough of a taste of Ameca to want a lot more. And also, a lot less. The humanoid robot, which doesn’t have much faux skin and musculature apart from what’s on its face and hands, is stunningly realistic. Not only are the robot’s movements exceptionally fluid, but its face also seems totally natural. Skipping across the uncanny valley that so many robots fall into.

As a news story, it’s pretty generic. However, once the footage spread throughout the internet, it evoked reactions like this.

Now, to some extent, I understand those reactions. This robot definitely ventures deep into the uncanny valley that makes so many people wary of advanced robots. I also see the clear parallels with movies like “I, Robot” from 2004.

However, I think this sort of technology is more encouraging than anything Boston Dynamics has put out. I also think this sort of emotional expression is critical if we’re to create an artificial intelligence that helps humanity rather than wiping it out. I would even argue it might be the most important control mechanism we can incorporate into an AI.

To understand why, you need only look at the faces of the Boston Dynamics robots. Specifically, they have none. They may have human-like bodies, but they have no faces, no means of conveying emotion or relating to humans. They’re just mechanisms for doing tasks. Put an AI in one of those bodies and it has no way of forming an emotional connection with the people around it.

That, I believe, is legitimately dangerous. An advanced AI with no emotional connection to humans is basically a sentient time bomb that can choose to go off whenever it feels humans are too much of a hindrance.

However, a robot that can express emotions and share them with people can do more. It can create a situation in which both we and the machine form a connection that goes beyond raw programming. That emotional connection is critical if future AI systems are to see humans as something other than walking meat puppets.

They don’t have to love us, but we’ll need them to have some level of emotional capacity if we’re to relate to them in a meaningful way. An AI with a measure of intelligence and self-awareness can only do so much with programs and numbers. If it’s going to help us as much as we’re going to help it, an emotional connection needs to be there.

It may not seem like much, making a robot that can express emotions on this level. However, I think it’s encouraging that people in this field are working on that. If they don’t and we end up making machines that have emotional issues, then we’ll be doing ourselves and the machines we create a disservice.


Jack’s World: A Balanced Outlook On Artificial Intelligence

The following is a video I posted on my YouTube channel, Jack’s World. It’s my attempt to offer some perspective on artificial intelligence, a topic I’ve covered many times before. I hope you find it informative and engaging. Enjoy!


How Do We Regulate Artificial Intelligence? Seriously, How?

In general, I don’t pay much attention to doomsayers who claim the end is near and we should all cower, tremble, and give them our credit card number. Don’t get me wrong. I still believe there are serious existential threats facing humanity today. Some are worth taking seriously and dedicating resources to addressing. Others are not. Some simply require a more balanced perspective.

There’s a long list of failed apocalyptic predictions. The fact that we’re still surviving and thriving by most measures shows just how resilient, adaptable, and capable humanity is. There are some threats that I believe humanity will eventually resolve, thanks largely to our accelerating progress in science, technology, and logistics.

Others, however, have me more concerned. While some are more immediate than others, one in particular continues to confound me, as well as some of the smartest people in the world. It involves artificial intelligence, an emerging technology that’s as promising as it is unpredictable. Given the complexity of this technology, it’s difficult to explain in totality, but it can be best summed up by one simple question.

How do you regulate artificial intelligence?

That’s not a rhetorical question. It’s not a thought experiment either. It’s a serious, honest question that people far smarter and far more capable than me are just starting to ask.

Elon Musk is one of them. Very recently, he called for more regulation of artificial intelligence. That alone should be both telling and worrying. This man is a billionaire. Usually, billionaires are more inclined to advocate for removing regulations. Whenever they make an exception, that’s a sign they know it’s serious.

Even though Musk is one of the top advocates for solving big problems with technology, he still has concerns about the problems associated with artificial intelligence. In AI circles, this challenge is often called the control problem. It’s not a very creative name, but it gets the point across.

How do you control something that is potentially as smart, if not smarter than a human?

How do you manage something that thinks, adapts, and evolves faster than any machine or living thing?

How do you regulate an artificial intelligence that was built by humans, but isn’t at all human?

These are all difficult questions to contemplate, let alone legislate. Even Musk doesn’t provide specifics. Chances are he doesn’t know any more than the rest of the non-billionaire population. That’s a problem because if we’re going to try and regulate this technology, we need to understand it. On top of that, politicians and lawmakers have a long and embarrassing history of failing to understand technology.

However, this isn’t just about writing laws that protect citizens from being exploited by tech companies. Artificial intelligence, especially the kind that exceeds human intelligence, has capabilities that go beyond sending text messages from bathroom stalls. If handled improperly, it wouldn’t just be an existential threat. It could destroy humanity in ways we literally cannot contemplate.

Now, I try to be an optimist in most things involving emerging technology. Humanity has found a way to manage dangerous technology before, namely with nuclear weapons. However, artificial intelligence is a different beast entirely. Regulating it isn’t as easy as simply controlling the materials that make it. The very concept of regulating this technology lacks precedent.

The closest thing we have to date is Isaac Asimov’s famous three laws of robotics, which were introduced in 1942. Asimov was a brilliant writer and well ahead of his time on some concepts, but this is one issue where we need more than three simple tenets. We need to think bigger and bolder. If we don’t, then an advanced artificial intelligence will quickly leave us behind.

After that, it won’t matter what kind of regulations we try to pass. It’ll be smart enough to circumvent them. That doesn’t mean humanity is doomed at that point, but we’ll be distressingly vulnerable. I know it’s in our nature to procrastinate on things we don’t see as vital, but if ever there was an issue worth making an exception for, this is it.
