Tag Archives: technological singularity

Would You Willingly Plug Your Brain Into The Matrix?


What if there was a virtual environment that was so real and so lifelike that it was completely indistinguishable from the real world?

What if you had an opportunity to upload the entire contents of your mind into that environment?

Would you do it? Even if you didn’t have a full measure of control over the environment, would you still venture into this virtual world?

I’m not just asking these questions as another thought experiment, nor am I asking them as an excuse to talk about “The Matrix: Resurrections.” Yes, the prospect of another movie in the mold of “The Matrix” did inspire me to pose these questions, but I also think they’re worth seriously contemplating.

Back in 1999, the year “The Matrix” first came out, the idea of an entirely simulated world seemed like classic sci-fi tech, the likes of which we’d never see in our lifetimes. That’s understandable. In 1999, the most advanced simulations most of us had seen were rendered by the original PlayStation, and those hardly looked realistic.

Since then, computing power and graphics technology have come a long way. These days, graphics in video game consoles are so realistic that they’re nearing uncanny valley territory. It won’t be long before we have computer renderings so advanced, so lifelike, and so realistic that our brains can’t tell the difference.

At that point, creating an entirely simulated world is just a matter of computing power, scale, and interface. Since brain/computer interfaces are already being developed, it’s not unreasonable to think we’ll have a Matrix-like simulation available within the next 40 years. Many people alive today who are under the age of 50 might very well live long enough to see that technology.

Once we have it, we’ll have some important decisions to make. Some of those decisions will be societal. If people suddenly have access to a virtual world where they can be anyone, do anything, and immerse themselves in any conceivable experience, then what does that do to society? What does that do to people, communities, nations, and social structures?

Those are far messier questions to contemplate, which is why I’m not going to belabor them too much at this point. Instead, I want to keep this question within the context of individuals. Everyone’s circumstances and beliefs are different. As a result, that may impact whether you’d take advantage of such an opportunity or what kind of environment you’d seek to create.

Personally, if I ever had an opportunity to upload my mind into a virtual environment on par with the Matrix, I would do it, but the extent and circumstances would vary. I suspect others may feel the same.

If I could create my own personal virtual environment before I uploaded my mind into it, then I would certainly be more willing. I think that’s an important factor. The humans in “The Matrix” didn’t have any measure of control over the environment they were in. I think that would complicate any experience anyone would have in such a world.

It would also depend heavily on my physical state in the real world. If this technology became available and I was old, weak, and in poor health, then I would certainly be more inclined to use it. That assumes that any technology involving human enhancement hasn’t progressed significantly and people still age, get sick, and die.

Like it or not, our physical bodies in the real world will break down. If the technology to manage and reverse that isn’t available, then virtual environments might be the only way we can continue to live in any meaningful capacity. I certainly hope that isn’t my only option when I get to be that age, but if it is, then that simplifies my decision.

It’s hard to know what sort of options we’ll have. I still believe that technology involving human enhancement and creating virtual worlds will advance in parallel. One would, by default, need the other in order to properly interface with these environments. As such, it would complicate any decision about venturing into virtual environments.

Then, there’s the actual nature of those virtual environments. If we can control what environment we go into, then that opens the door to even more possibilities. Within these worlds, you could be a billionaire playboy, a medieval king, a famous celebrity, or a super athlete. From your brain’s perspective, it would feel every bit as real as what you’re feeling right now.

Whether or not our brains would accept it is a different story. I suspect there may be some who, once they enter these worlds, would never want to leave. There may even be some who willingly erase their own memories of the real world so that this new virtual world is their new “reality.” That’s exactly what Cypher desired in “The Matrix” and I suspect others might share that desire.

It really does depend on the person, their situation, and what sort of virtual world they seek to create. We probably won’t know the full impact until we create our first true Matrix-like virtual world. I sincerely hope I live long enough to see that. If you’re reading this, hopefully you get to see it as well. It should give you plenty of time to contemplate these questions and whether you’ll venture into those worlds.


Filed under Artificial Intelligence, biotechnology, futurism, Sexy Future, Thought Experiment

A Robot Demonstrates Very Human-Like Expressions (And Why That’s A Good Thing)


We live in amazing, yet scary times. To some, that’s exciting. To others, it’s terrifying. I can understand both perspectives, but in general I try to be more optimistic about the future. Granted, that has been very hard lately, but I’m trying.

In my experience, a good test for how optimistic or fatalistic people are is to note their reactions to certain technological breakthroughs. Over the years, I’ve discussed and highlighted many exciting trends in technology. Some are further off than others. I often single out artificial intelligence because it’s one of those rare technologies where, should we get to a certain point, we literally cannot imagine where it could lead.

It also happens to be the technology that brings out the most extreme reactions whenever there’s an advancement. I see it every time Boston Dynamics shares a demonstration of one of their advanced robots. There’s usually no shortage of jokes about Skynet or “The Matrix.”

Recently, another demonstration garnered some colorful reactions from the public, but for once it wasn’t from Boston Dynamics. It came from Engineered Arts, a UK-based company that specializes in humanoid robots. They may not be at the forefront of advanced AI, but they could help create the bodies these systems eventually inhabit.

For more information about the demonstration, here’s a piece from Nerdist.

Nerdist: HERE’S THE FREAKIEST AND MOST REALISTIC HUMANOID ROBOT EVER

Engineered Arts, “the UK’s leading designer and manufacturer of humanoid entertainment robots,” recently introduced Ameca to the internet. Unsurprisingly, net denizens everywhere lost their skull-bound wetware computers over the bot. Because who wouldn’t?

The above video is brief, but Engineered Arts gives us enough of a taste of Ameca to want a lot more. And also, a lot less. The humanoid robot, which doesn’t have much faux skin and musculature apart from what’s on its face and hands, is stunningly realistic. Not only are the robot’s movements exceptionally fluid, but its face also seems totally natural. Skipping across the uncanny valley that so many robots fall into.

As a news story, it’s pretty generic. However, once the footage spread throughout the internet, it evoked some strong reactions.

Now, to some extent, I understand those reactions. This robot definitely ventures deep into the uncanny valley that makes many so wary of advanced robots. I also see the clear parallels with movies like “I, Robot” from 2004.

However, I think this sort of technology is more encouraging than anything Boston Dynamics has put out. I also think that this sort of emotional expression is actually critical if we’re to create an artificial intelligence that helps humanity rather than wipes it out. I would even argue it might be the most important control mechanism we can incorporate into an AI.

To understand why, you need only look at the faces of the Boston Dynamics robots. Specifically, they have none. They may have human-like bodies, but they have no faces. They have no means of conveying emotions or relating to humans. They’re just mechanisms for doing tasks. Put an AI in those bodies and they have no mechanism for forming emotional connections with humans.

That, I believe, is legitimately dangerous. An advanced AI with no emotional connection to humans is basically a sentient time bomb that can choose to go off whenever it feels humans are too much a hindrance.

However, a robot that can express emotions and share those emotions with people can do more. For both us and the robot, it can create a situation in which we can form a connection that goes beyond raw programming. That emotional connection is critical if future AI systems are to see humans as something other than walking meat puppets.

They don’t have to love us, but we’ll need them to have some level of emotional capacity if we’re to relate to them in a meaningful way. An AI with a measure of intelligence and self-awareness can only do so much with programs and numbers. If it’s going to help us as much as we’re going to help it, an emotional connection needs to be there.

It may not seem like much, making a robot that can express emotions on this level. However, I think it’s encouraging that people in this field are working on that. If they don’t and we end up making machines that have emotional issues, then we’ll be doing ourselves and the machines we create a disservice.


Filed under Neuralink, robots, Sexy Future, technology

The Metaverse: What It Is, Why It Matters, And Why You Should Be Concerned About FaceBook’s Plans For It


There was a time, not too long ago, when if you said words like “email,” “texting,” “hashtag,” and “internet” to most people, they wouldn’t have known what you were talking about. I am old enough to remember when the internet was only a fringe interest, known only to computer nerds and tech gurus.

Yes, that makes me feel older than I wish it did, but that helps illustrate my point.

Now, you’d be hard pressed to find someone who didn’t know what the internet was or what a hashtag entailed. These have all become parts of our lives, for better and for worse. Checking our email and texting our friends is just part of everyday life now. Most don’t even give it a second thought.

It should give us all pause, in the sense that we don’t always know when some new technology or trend has become that integral to our lives. We usually don’t realize it until many years after we’ve embraced it to such an extent that life before it seems strange by comparison.

At this very moment, we may be at such a point with an emerging technology called the metaverse. You’ve probably heard of it, if only in passing. The recent news surrounding FaceBook’s pending name change is at the heart of it, but who can honestly say they know what it is or why it matters?

I certainly won’t claim to be an expert on the metaverse, and I’m hardly the most qualified person to explain it to ordinary people. However, as someone who follows this kind of tech news closer than most, I think it’s worth discussing. I also feel like the concept of the “metaverse” is in a position similar to the one the early internet once occupied.

At its core, the metaverse is a more ambitious form of virtual reality. It’s not quite on the level of “The Matrix,” but it’s a lot more varied than a standard Oculus Rift experience. It’s not just for playing games or creating elaborate avatars for ourselves. This is supposed to be something more personal.

An article in NPR describes the metaverse this way:

Think of it as the internet brought to life, or at least rendered in 3D. Zuckerberg has described it as a “virtual environment” you can go inside of — instead of just looking at on a screen. Essentially, it’s a world of endless, interconnected virtual communities where people can meet, work and play, using virtual reality headsets, augmented reality glasses, smartphone apps or other devices.

It also will incorporate other aspects of online life such as shopping and social media, according to Victoria Petrock, an analyst who follows emerging technologies.

That may not seem too revolutionary at the moment. Then again, you probably could’ve said the same thing about email and texting back in 1994. It’s so new and poorly understood that most people don’t see the appeal or the potential, but I personally believe the metaverse represents an evolution of the internet as we know it.

I also believe we should be very concerned that FaceBook, of all companies, is trying to be at the forefront of it all. To say that FaceBook has a credibility problem would be like saying a sewage plant has an odor problem.

In many respects, I don’t blame FaceBook for investing in the metaverse. Like every tech company, they’re looking to the future. They’re seeking the next big thing that they can develop, dominate, and monetize to the utmost. It doesn’t matter that they’re already a billion-dollar company. There are many more billions to be made in the metaverse, if not trillions.

If anything, the potential of that market intensified in the wake of the COVID-19 pandemic. During this crisis, we all learned just how critical it is to stay connected to the internet. It wasn’t just a form of entertainment anymore. It became vital to continue working and going to school. Many even realized just how much they could get done from home with nothing except a computer and an internet connection.

With the metaverse, we could potentially do even more. The pandemic also revealed the inherent limitations of a screen. While applications like Zoom and FaceTime allowed us to interact, it was still impersonal. It didn’t have the same impact as being in someone’s physical presence.

The same limitations go beyond work and school. Even though we do a lot of shopping online these days, there are times when we can’t do everything we would usually do in a store. I can personally attest that buying a good pair of shoes or a dress shirt online can be a gamble. Even when you think you got the right size, it doesn’t always fit when you get it.

The metaverse could help change that. It could help us explore the internet in ways that go beyond a typical screen. It could help us create Zoom meetings that feel more like gatherings around a conference table. It could make shopping feel more like browsing a physical space, complete with more personalized selections.

It could even make for more engaging entertainment. Instead of just staring at a screen and watching a game play out, we could actually feel like we’re present and play a larger part of what happens.

Yes, that will likely include porn.

Yes, that will likely cause discomfort, distress, and all sorts of other issues that will get certain prudish crowds fired up.

No, that won’t stop the metaverse from evolving.

In the beginning, it probably won’t feel like much of an upgrade over how we interact with the internet at the moment. Chances are it’ll start off looking like something akin to “Second Life,” a game where people create and interact in their own virtual world. It’s a big idea, but it still looks like a game.

That will change as the interface and computing power improve. Eventually, going into the metaverse won’t feel at all like a game. The imagery and graphics could get so lifelike that it’ll be hard to distinguish them from the real thing. Going to a store in the metaverse could appear no different than going to a mall, at least from your brain’s perspective.

It won’t just stop at appearing lifelike, either. Add in more advanced interfaces, like the ones being developed by Neuralink, and there may come a time when going to the metaverse will feel like going into “The Matrix.” Within that virtual space, what we could ultimately do would only be limited by our computing power and network connection.

The possibilities are tantalizing.

However, keep in mind that many of these possibilities will be developed, packaged, and sold by FaceBook. This is a company we already know engages in shady business dealings, to say the least. We also know they’re not exactly careful with our private information. The idea of them being in control of this new virtual world should concern everyone.

Thankfully, they’re not the only ones seeking to develop the metaverse. Other major tech companies are already making investments in creating this new virtual space. Will that be enough to ensure FaceBook and Mark Zuckerberg aren’t the undisputed overlords of the virtual world? I don’t know, but I hope there’s some semblance of balance in that world. As much as I’d like to explore that world, I’d be quite hesitant if doing so meant entrusting the integrity of the metaverse to a company like FaceBook.


Filed under Artificial Intelligence, Current Events, Neuralink, technology

How Should A Robot Look Before You Welcome It Into Your Home?


There was a time when people were skeptical about having a computer in their home. I know because I’m old enough to remember people like that. It wasn’t that they were paranoid about them, although a few certainly were. They just didn’t see the need.

Then, computers became smaller, more affordable, and more useful. They went from being these big, bulky machines that took up an entire corner of a room into being a sleek, decorative piece of hardware that did so much to improve our lives. From information to communications to masturbation, computers revolutionized our lives.

It’s a common trend in technology. When it’s new and undeveloped, people are wary about having it in their homes. Go back several decades and people felt the same way about television. Go back a century and some people were reluctant to allow electricity into their homes. It takes some people longer than others to come around, but they eventually do when the utility is just too great.

This brings me to robots, and for once, I’m not referring to sex robots. While they could very well be part of this conversation, I’m going to set that kinky angle aside. Instead, I’m going to stick to robots in general, specifically the kind with a body and some mechanism for doing work.

We’ve watched in recent years how quickly robotics technology is advancing. A while back, I highlighted a video from Boston Dynamics that showed one of their robots dancing. Even before that, the same company demonstrated a robot that could run and navigate basic obstacles. It was certainly no Terminator, but it was no clumsy Wall-E, either.

These robots exist. Every year, they’re being improved and refined. Within the next decade, it is likely we’ll have a robot that can move, react, and navigate its surroundings like a human. It may not have human level intelligence, but it will have the body to match our capabilities in every way.

When this day comes, the world will be a very different place. It’ll definitely raise issues regarding robot workers and robot soldiers, but that sort of impact won’t be as direct for most people. The real change will come when we can have a robot in our homes that can do almost any kind of work a human could do.

By that, I don’t just mean a virtual assistant like Alexa or Siri. We already have those and they’ve already become an increasingly popular feature for many homes. These assistants can help us with shopping lists, music playlists, and schedule reminders. They can’t do the dishes, clean the bathroom, cook our meals, or make our beds.

Having a robot that could do all that would be nice. It would be like having a personal maid and a personal secretary. There’s certainly a market for it and the rise of virtual assistants has already laid the foundation for that market. However, that still raises some important questions.

How should that robot look before you welcome it into your home?

Ignore for a moment the paranoia about a robot turning evil. Assume, for the sake of argument, these robots are about as functional as your typical Roomba. They don’t have advanced AI. They’re not sentient or self-aware on the level of Rosie from “The Jetsons” or HAL 9000. They’re just tools that do all the work you’d expect of a maid, butler, or servant.

Would you welcome that robot into your home if it looked like one of the Boston Dynamics robots?

Would you welcome that robot into your home if it looked completely indistinguishable from humans, like Kara in “Detroit: Become Human?”

Would you want that robot to look only mostly human, but still be distinctly machine, like Data from “Star Trek: The Next Generation?”

These are all relevant questions if these robots are going to be part of our lives. For some people, a robot that looked too human might be too jarring. It would be difficult to see them and remember they’re just a robot. Some people might be fine with that, especially when sex robots are involved. However, for a robot that’s primarily a helper, that might not be ideal.

For robot servants, it might be more beneficial to everyone if they didn’t look too human. In fact, having a human-like body might even hinder a robot’s ability to do its job. That’s why most robots you see in factories don’t look human at all. They take the form of whatever helps them do their job.

Maybe a perfect robot housekeeper doesn’t look human. Maybe it looks more like a droid from “Star Wars” that has multiple arms, a head with a panoramic camera, and four legs like a dog. Depending on the home it’s in, it might even need to adjust its height. Such a robot may be good at its task, but would it be too weird and bulky to allow in our homes?

No matter how human they look, these robots would have to appear to us in a way that we’re comfortable being around. We have to be willing to just leave them in our homes for most of the day, possibly with pets and children, and trust that they’ll do what we want them to do. That kind of trust will take time, just as it did with computers.

It may ultimately take longer for us to welcome a robot into our homes than it did with computers, but once the benefits and utility get to a certain point, it may be too appealing to ignore. I don’t claim to know what typical household robots will look like before then. I just know they’ll have to look a certain way for us to embrace them as part of our world. Naturally, we’ll still probably embrace sex robots sooner, but it won’t stop there. Robots will become a larger part of our lives eventually. They may end up having a greater impact than any new technology since electricity.


Filed under Artificial Intelligence, futurism, robots, technology, Thought Experiment

An Artificial Intelligence That Can Debate: The Promise And The Perils

Even in an era as divided and chaotic as this, there’s still a place for reasonable debate on important issues.

Yes, I understand it’s hard to say that with a straight face these days.

Yes, I’ve been to comments sections, Reddit, and 4chan.

Yes, I know how ugly the discourse is right now, but that’s exactly why I’m bringing this up.

In general, people are curious. Even though we cling to cherished beliefs and refuse to change our minds when confronted with evidence, we’re still curious about things that are different. It’s not always an honest curiosity, but it’s a powerful feeling. Even if you despise those on the other side of a debate, a part of you will still wonder why they hold the opinions they do.

That’s why debate is such a powerful tool. Humans are a social species by nature. We’re hard-wired to interact, connect, and discuss things with one another, even when we don’t agree. It may frustrate us to no end, as anyone who has debated a creationist can attest. However, the fact that we keep doing it is proof, in my opinion, that there’s still value in the process.

Regardless of how you feel about the merit and value of debating, the whole process might become more complicated in the near future. That’s because IBM, the same company behind Watson, the Jeopardy-playing computer, has created an artificial intelligence that can debate at the same level as a skilled human debater.

Here’s an excerpt from a piece in TechXplore on this new system. It’s not as scary as some of the things we’ve seen from Boston Dynamics lately, but it’s still intriguing.

TechXplore: IBM’s AI debating system able to compete with expert human debaters

IBM has developed an artificial intelligence-based system designed to engage in debates with humans. In their paper published in the journal Nature, the team members describe their system and how well it performed when pitted against human opponents. Chris Reed with the University of Dundee has published a News & Views piece in the same journal issue outlining the history and development of AI technology based around the types of logic used in human arguments and the new system developed by IBM.

As Reed notes, debating is a skill humans have been honing for thousands of years. It is generally considered to be a type of discussion in which one or more people attempt to persuade others that their opinion on a topic is right. In this new effort, the team at IBM has created an AI system designed to debate with humans in a live setting. It listens to moderators and opponents and responds in a female voice.

Now, before you get too nervous, it’s worth noting that this AI is far from the kind of advanced artificial intelligence systems I’ve mentioned before. This is not the kind of AI that will become Skynet or HAL 9000, no more so than Watson or AlphaGo. This system is very much a narrow AI, as in it’s made to excel at a specific task.

We have AIs that can beat world-class chess players and Jeopardy champions. This AI just happens to excel at debate. However, that has implications beyond simply outclassing the best human debaters in the world at the moment. In fact, this is one form of AI that might not need human-level intelligence to have a major impact.

Take a moment to think about how erratic and inconsistent most debates are. No matter how intelligent or informed you are, it tends to get messy fast. That’s why so many comments sections and Reddit threads devolve into shouting matches and personal attacks. The end result is people becoming angrier and more unreasonable, which can have major real-world consequences.

However, what would happen if every debate on any issue included someone who was better than the best debater on the planet? Even if the two people on each side of the debate were dumb and inept, such a presence would have a major impact on the discourse.

That’s because winning a debate has little to do with how well someone knows a particular subject. It also has little to do with how forcefully and clearly someone explains a topic. Again, people who debate creationists know this all too well. Winning a debate doesn’t mean proving your expertise. It means proving the merit of your argument.

An AI that can do that may not convince someone that they’re wrong about something. However, losing a debate tends to have a lasting impact. Just ask any aspiring politician. It can also lead people to question their beliefs, even if they still cling to them. That, alone, can be powerful.

For proof, look no further than the story of Megan Phelps-Roper, a former member of the infamously hateful and dogmatic Westboro Baptist Church. She was as locked into her beliefs as anyone could possibly be. She was raised by religious zealots and indoctrinated into strict religious dogma from the time she was a child. She was not someone whose mind was prone to change.

Then, she got into a discussion with someone on Twitter, of all places. It started as a nasty debate, but it evolved into a conversation that led her to question her beliefs. Ultimately, she left that hateful and bigoted environment. She’s now an activist against the same religiously motivated hate that she once fostered.

It’s a powerful story, but one that couldn’t have happened without a debate. To date, people have only been able to have those debates with other people, and not everyone is equally skilled. In fact, I would argue most people are woefully unskilled at debating anything, and I freely include myself in that category.

Now, there’s an AI system that can theoretically win any debate the same way other systems can defeat any chess player. That does hold promise if it can be used to temper the heated rhetoric that has caused real-life hostilities. At the same time, there are reasons for concern.

What side would this system be on?

Moreover, who decides what position these AI systems take?

If no one decides, then how does the AI determine which side on a debate it takes?

These are relevant questions because if you have an AI that can win any debate, then the side it takes really matters. That can be used for good when countering dangerous disinformation, like that spread by the anti-vaxx crowd or hate groups like the Westboro Baptist Church. Like any tool, though, it can be weaponized for ill.

I can easily imagine political organizations getting a hold of these systems and trying to use them to benefit their agendas. I can also see some trying to use them to spam message boards, social media, and Reddit threads to derail certain movements or arguments. That’s a powerful tool, and we can’t be certain that those using it will use it responsibly.

That’s the dilemma with all technology. It can be used for good and for evil. With technology like artificial intelligence, the stakes are a lot higher, as are the perils. This system may not be as dangerous as a more advanced AI, but it could be a precursor to just how disruptive this technology can be.


Filed under Artificial Intelligence, futurism, technology

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still limited. It’s not dumb, but it’s narrow. It can do certain tasks very well, like play chess or win Jeopardy. It can’t apply that intelligence at a broader, macro level like a human can.

That still makes them useful, and they’re still advancing rapidly. Artificial intelligence programs are frequently used for tasks like moderating comments sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. Since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes these AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore did an article on it and, while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radić’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.
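The failure mode the researchers describe is easy to reproduce. Here is a deliberately naive, purely hypothetical keyword-based filter (nothing like YouTube’s actual system, which is far more sophisticated) showing how context-blind matching can flag ordinary chess commentary:

```python
# A minimal sketch of a keyword-based "hate speech" filter.
# Real moderation systems use far more advanced models, but a
# sufficiently narrow system can fail in an analogous way.

# Hypothetical watchlist of phrases the filter treats as racist language.
FLAGGED_PHRASES = [
    "black vs. white",
    "white attacks black",
    "black is threatening",
]

def is_flagged(comment: str) -> bool:
    """Flag a comment if it contains any watched phrase, with no
    awareness of context (a chess game vs. race relations)."""
    text = comment.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# Ordinary chess commentary trips the filter because the words match,
# even though the discussion is entirely about the game.
chess_comment = "In this line, white attacks black's queenside pawns."
print(is_flagged(chess_comment))  # True: a false positive
```

The filter has no model of context, only surface patterns, which is exactly why "black" and "white" in a chess stream look indistinguishable from racial rhetoric to it.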

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.

Filed under Artificial Intelligence, technology, YouTube

Thought Experiment: When Does Technology Make Us Non-Human?

The following is a video from my YouTube channel, Jack’s World. It explores another thought experiment about technology and how it’s affecting us, as a species. I’ve covered this sort of thing before and the implications. I’m looking to see if there’s an audience for this on my channel. Enjoy!

Filed under Artificial Intelligence, futurism, human nature, Jack's World, technology, Thought Experiment, YouTube

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, it may very well turn out that 2020 proved just how frail human beings and their societies are. It only takes a microscopic virus to bring our entire society to a screeching halt. It’s sobering, but it’s probably going to be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we’ll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with its robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it’s worth posting again. Remember this video and this moment. Chances are it’ll be a major indicator years from now that this is when robots began catching up to humanity in terms of capabilities. At this point, it’s only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them? Will they embrace us?

If they don’t, just know that they will now be able to dance on our graves.

Filed under Artificial Intelligence, Current Events, futurism, technology

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin and I try not to ignore it. Like anything, technology has a dark side. It can be used to hurt just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically about its operating ethics. For a company as big and powerful as Google, that’s not a trivial comment. This is what she had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

I bolded that last sentence because I think it’s the most relevant. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position than me or most other people with a passing interest in this field.

Then, she gets fired and starts expressing concerns publicly. The fact that this can happen, and that Google isn’t facing much in terms of repercussions, should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business that is obligated to make its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions and people die because of them. An advanced AI will only make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may conduct themselves as this team of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society that still kills each other over what we think happens when we die without seeing the irony. Even a marginally advanced AI will have no issues wiping us out if we make doing so profitable.

Filed under Artificial Intelligence, technology

Streaming Music Into The Brain With Neuralink: Why I Want To Try It

Say what you want about Elon Musk. He’s an eccentric billionaire. There’s a lot to say and not all of it is good. Whatever you think of him, though, you can’t deny he has some big, bold ideas. You don’t become a billionaire tech icon without plenty of those.

I’ve talked about some of his bolder ideas before, namely the potential impact of Neuralink and brain/machine interfaces. I contend those ideas are as bold as ever. It’s just a lot harder to explore and contemplate them when we’re in the middle of a global pandemic.

Despite the grim circumstances clouding our world now, Musk still finds a way to drop a new idea into the mix. This one is actually related to Neuralink and the world of brain augmentations. While this effort is still ongoing and very early, he did imply that the neural implants that this company would offer might have another feature that hasn’t been highlighted. Specifically, it’ll allow you to stream music directly into your brain.

It wasn’t treated as groundbreaking. In fact, this topic came about during a Twitter conversation between Musk and an engineer of all things. Usually, Twitter conversations are about as productive as arguing with a creationist, but on rare occasions, something beautiful emerges. I say this is one of them.

Digital Trends: Elon Musk says Neuralink chip will let you stream music into your brain

Tesla CEO Elon Musk’s brain interface company, Neuralink, wants to let you stream music directly into your brain.

Musk recently said that Neuralink’s brain chip technology would allow people to stream music into their brains effortlessly. Musk confirmed the feature on July 19 over an exchange with a Twitter user who asked: “If we implement Neuralink – can we listen to music directly from our chips? Great feature.” Musk replied with a simple, “yes.”

Now, regardless of what you think of Musk’s claim or the technical feasibility of actually streaming music into the brain, I want to make one thing clear. I hope to leave no ambiguity.

I want to try this.

I really want to experience this at some point.

I love music as much as the next person, but my cumulative experience with headphones, stereo systems, and ear buds has been mixed at best. The idea of bypassing that entirely and streaming my favorite songs directly into my brain just has so much appeal and not just from a practical aspect.

Music can be a powerful influence. That’s not just an opinion. There’s real science behind it. I’ve certainly experienced that. There are songs on my playlist that can affect my mood, my focus, and my emotional state. Those effects can be pretty diverse. That should be a given. You’re not going to react to a Metallica song the same way you react to a Taylor Swift song.

It’s a testament to how impactful music can be. Now, there might be a way to stream it directly into our brains? Sign me up!

It’s not an incredibly radical idea, when you break it down. In a sense, the music and all its powerful influences go to your brain already. It’s just indirect. First, it has to go through your ear, then your ear has to process the sound, and then the interpretations of those sounds have to go to various parts of your brain. Neuralink is just offering a more direct path.

Imagine hearing something that makes no sound.

Imagine experiencing the emotions and excitement of music in a unique and intimate way.

It may not be the most groundbreaking use of neural implants, but I still want to try it. If being stuck in lockdown has taught us anything these past few months, it’s that we need a diverse range of experiences. There’s only so much we can get from binge-watching Netflix, playing video games, and Zoom chatting with family members.

We need those experiences to enrich our lives. We have no idea what kind of state the world will be in by the time this technology is refined. Who knows what kinds of experiences we’ll pursue? Hopefully, I’m around to stream my favorite playlist directly into my brain. It might not be the most profound use of this technology, but it will definitely rock.

Filed under futurism, Neuralink, Sexy Future, technology