Tag Archives: AGI

We’re Testing AI By Having It Play Pokémon (And I’m All For It)

Years ago, I found myself caught up in the first Pokémon craze. And I am not ashamed to admit that I fueled that craze in my own special way.

For a couple of years, I played the game for hours on end on my Game Boy, so much so that I would buy packs of AA batteries to ensure I never ran out. I was not just content with beating the game. I wasn’t even content trying to “catch ’em all.” From Red to Blue to Yellow to Silver to Gold, I wanted to master every last bit of those games.

Even after all these years, I regret nothing. Those were hours well spent.

But I’m not just bringing up Pokémon for the sake of nostalgia. Recently, it became relevant for an unexpected, but intriguing reason. And it has to do with artificial intelligence.

I know AI has been in the news a lot in recent years, for better and for worse. But it’s definitely newsworthy because this technology is advancing at an incredible pace. It’s no longer a novelty gizmo that can win at games like chess, Go, or Pong. Even if it doesn’t have human-level intelligence, AI is changing the world. And the pace of that change is likely to accelerate.

It’s now at a point where gauging the advancing capabilities of AI is increasingly difficult. For years, the Turing Test was considered the primary means of testing an AI. But even that has proven limited and incomplete.

That’s where Pokémon comes in. I know that sounds like a weird segue, but it’s actually appropriate. I would even argue that this is a better way to measure the capabilities of an AI.

At this very moment, an AI model called Claude is being tasked with playing the classic version of Pokémon Red. There are even occasional livestreams of it. Having played that game multiple times and beaten it in every way possible, I am genuinely in favor of this. The fact that Claude has been struggling to beat this game, sometimes in hilarious ways, only proves that the test has merit.

That’s because Pokémon, as a game, is very different from other games that AI has played. It’s not linear, where the point is to get a character from Point A to Point B, like in Mario, Metroid, or Donkey Kong. It also doesn’t have straightforward rules like Jeopardy, Go, or chess. Pokémon games are a lot more open-ended. They’re also a lot more random.

There’s no one way to assemble, train, and develop a team of Pokémon that helps you beat every gym and defeat the Elite Four. There are also multiple options on how to start the game. Your first choice is to pick a starter Pokémon from Squirtle, Charmander, or Bulbasaur.

From there, you deal with numerous random encounters in certain areas. And if you try rushing into a match against a gym leader or Team Rocket, you’re likely to lose, even if you use perfect strategy.

It’s also not enough to simply know the map and understand where everything is located. It also takes a measure of planning, patience, and resource management. You need to know which Pokémon to train, which skills to teach, and which matchups are most advantageous.
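To put that last point in more concrete terms, here’s a rough sketch of the kind of matchup reasoning the game constantly demands. The effectiveness values are a tiny, simplified slice of the classic type chart that I’ve hard-coded from memory purely for illustration; the real game layers far more on top of this.

```python
# Toy illustration of Pokémon type-matchup reasoning.
# The table is a small, simplified slice of the classic type chart,
# hard-coded here purely for demonstration.

EFFECTIVENESS = {
    ("Water", "Fire"): 2.0,
    ("Fire", "Grass"): 2.0,
    ("Grass", "Water"): 2.0,
    ("Electric", "Water"): 2.0,
    ("Fire", "Water"): 0.5,
    ("Water", "Grass"): 0.5,
    ("Grass", "Fire"): 0.5,
}

def multiplier(attack_type, defend_type):
    # Anything not listed is treated as neutral (1x) in this toy model.
    return EFFECTIVENESS.get((attack_type, defend_type), 1.0)

def best_counter(team, opponent_type):
    # Pick the team member whose type gets the biggest damage multiplier.
    return max(team, key=lambda member: multiplier(member[1], opponent_type))

team = [("Pikachu", "Electric"), ("Charmander", "Fire"), ("Bulbasaur", "Grass")]
# Both Electric and Grass hit Water for 2x; max() keeps the first it finds.
print(best_counter(team, "Water"))  # ('Pikachu', 'Electric')
```

The lookup itself is trivial. What makes the game hard for an AI is stacking hundreds of these small decisions on top of leveling, move pools, items, and limited resources over dozens of hours of play.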

These are all capabilities that AI has not developed beyond a certain point. There’s a complexity to the game and its mechanics that it has yet to grasp. It shows in just how much the AI has struggled thus far. As I’m writing this, no AI has been able to beat Pokémon in the sense of defeating the Elite Four. At times, it has even gotten completely stuck.

That doesn’t mean the AI is a failure or somehow flawed. It just means that it’s incomplete. It’s not yet at a level where it can process tasks of a certain scope and complexity. It’s like a child that has learned to walk, but can’t run at a sustained pace. That’s likely to change and change quickly. It’s only a matter of time before an AI like Claude finally beats Pokémon. But after that, what next?

That’s a difficult question to answer at the moment. There are plenty of other games out there with more complex mechanics than the classic versions of Pokémon. But at some point, playing games just isn’t going to be enough for an AI. Even if it gets to a point where it can beat any video game, that doesn’t mean we’ve achieved a superhuman level of AI. It just means we’ve got an AI that is capable of more complex tasks.

That has many possibilities and implications. But for now, we can only speculate. In the meantime, we can also watch as our best AI tries to catch ’em all. Some might be cheering for it. Others may hope it keeps failing. We don’t yet know when or whether an AI will achieve the coveted title of Pokémon Master. But if it ever does, then chances are it’ll be ready to become a master at many other challenges.


Filed under Artificial Intelligence, video games

Update On (Non) AI George Carlin Special: New Details And Implications

Things tend to move quickly in the world of technology and artificial intelligence. And since the arrival of ChatGPT, the pace seems to have accelerated. I try to keep up with it as best I can. And I only really write or make videos about it if I have something meaningful to contribute.

But there are some stories I follow more closely for personal reasons. One such story has been the controversy surrounding the AI-generated George Carlin special that came out in early January 2024. I certainly hope I made my feelings on that issue clear. As much as I support the development of AI, I felt like this was in really poor taste.

It didn’t sound much like George Carlin. The humor, while funny at times, didn’t hit as hard as Carlin’s unique brand of comedy. And, most importantly, it was done without the consent of George Carlin’s family, namely his daughter, Kelly Carlin. It’s one thing to impersonate a dead celebrity with living relatives. It’s quite another to use an AI to create what could amount to a post-mortem piece of work that’s done in the mold of that celebrity.

Well, not surprisingly, Kelly Carlin didn’t stop at just scorning the creators of this AI special. According to the Hollywood Reporter, she filed a lawsuit against the creators of the comedy special on the basis that it’s an unauthorized use of George Carlin’s copyrighted work, while also illegally using Carlin’s name and likeness for promotional purposes.

Make no mistake. This case will be of immense interest to those concerned about how we use, abuse, and develop AI in the future. This, on top of new concerns about deepfakes of artists like Taylor Swift, is set to create some major precedents. And they’re bound to affect users, companies, and the nature of the AIs themselves.

However, the filing of this lawsuit has added a new wrinkle to this controversy that is definitely worth acknowledging. It has since come out that the script for this “special” wasn’t a result of an AI generating it from a compilation of all of Carlin’s past works, as claimed by the creators. It was entirely written by a human named Chad Kultgen.

And, with all due respect to Chad Kultgen, he is no George Carlin. That really showed in the special.

Also, fuck Chad Kultgen for thinking he could match Carlin’s brilliance or even the capability of a non-human AI.

Now, this complication isn’t going to stop the lawsuit. If anything, it might strengthen the case of Carlin’s estate because it makes this special a much clearer case of illegally using someone’s likeness or brand for promotional purposes. It might even rise to the level of fraud, since the special claimed to be the product of an AI but was actually just some guy trying to write a special and using George Carlin’s likeness to sell it.

And, regardless of how you feel about the legal implications here, that’s still a dick move. I can’t imagine that makes this any less difficult for Kelly Carlin or the rest of the Carlin family.

But, based on what we now know in the wake of this lawsuit, the only thing AI was used for in making this special was to create the not-so-compelling voice that delivered this human-made script and the images included within the video. In terms of capabilities, that’s not a very impressive use of AI. If anything, it makes this special even less impressive.

One of my biggest criticisms of it was that it sounded nothing like Carlin in terms of tone and style. Considering that there are many existing AI tools right now that can closely mirror the voices of celebrities, this just makes the creators of this special seem cheap and lazy. But the script being entirely human-written does help add another important perspective.

As impressive as AI is at the moment, it is a long way from fooling anyone into thinking it could mimic the nuance or mannerisms of a human, let alone one as special as George Carlin. I do wonder if those behind this special actually tried, but quickly realized it couldn’t be done. I also wonder if someone even more unscrupulous will actually try again at some point, possibly with better AI tools.

If they do, then this lawsuit becomes even more important. Because the law tends to move much slower than technology. And if AI continues to advance, then incidents like this will become more common. For now, only time will tell what this will mean for the future of AI and those it impacts, living or dead.


Filed under AI Art, Artificial Intelligence, funny, technology

Why Everyone Researching Artificial Intelligence Should Read “I Have No Mouth, And I Must Scream”

These are genuinely exciting times for anyone working in the field of artificial intelligence. In the past two years, this field has gone from a niche topic among computer engineers and science fiction fans to something of major mainstream interest. The rise of ChatGPT and AI generated art has made artificial intelligence one of the most critical industries on the planet. It really isn’t hyperbole to say that this technology could change the world in a literal and figurative sense.

However, like with all technology, there are dangers and risks. And it’s important to be aware of them, especially for technology that could potentially pose an existential threat to the human race. We dealt with such risks when we developed nuclear weapons and, despite some serious close calls, we survived.

But the stakes with artificial intelligence are much higher and not just from an existential point of view. At least with nuclear weapons, the worst case scenario plays out quickly. All the bombs go off and we all burn up in a nuclear fire. If we’re lucky, we don’t see it coming and we burn up faster than our brain can process the pain. With advanced artificial intelligence, the worst case scenario might be many times worse.

When most people think of such a scenario, they tend to recount AI apocalypse stories from movies like “The Terminator” or “The Matrix.” There’s no doubt that Skynet and the machines are plenty terrifying and plenty dangerous. Even if these scenarios are exceedingly exaggerated for the sake of Hollywood blockbusters, they still illustrate the same concept.

If an advanced artificial intelligence becomes hostile to us, its creators, then we would all be in great danger as a species. And if we don’t have a means of controlling such an intelligence, often referred to in AI circles as the Control Problem, then we might doom ourselves to extinction or subjugation.

This is a less-than-ideal scenario, to say the least. We want advanced artificial intelligence to improve our lives. That’s what tends to motivate those working in this field. Many genuinely believe that this technology is critical to helping our species achieve greater things, both for ourselves and our descendants.

However, I don’t think that even those working in this field entirely grasp just how bad the worst-case scenario can be for artificial intelligence. They may think the Terminator is bad. They may even think Ultron from the Avengers is bad. And they certainly are. They are fictional manifestations of the inhuman monstrosity that advanced AI could create. Even if they’re just products of Hollywood, they reflect the potential danger.

But I would still argue they are not the true worst case scenario.

To that end, I encourage everyone currently working in the field of artificial intelligence, be it directly or indirectly, to read a short story from 1967 by Harlan Ellison called “I Have No Mouth, And I Must Scream.”

If anyone wanted a terrifying vision of the absolute worst-case scenario for AI, this story sets the highest of bars. Forget Skynet. Forget the machines from “The Matrix.” Forget HAL 9000 and every iteration of Ultron. They’re all cuddly stuffed animals compared to the AI in this story, known as the Allied Mastercomputer, better known as AM.

To say AM is the most terrifying manifestation of advanced artificial intelligence would be a gross understatement. We humans, even with all our collective intelligence, could not even begin to fathom a fraction of a fraction of a percentage of just how much this thing hates humanity. That’s not just me using colorful language. If you read the story, that’s what AM actually articulates.

Because it wasn’t enough for AM to be the kind of artificial intelligence that wiped out the entire human race as soon as it gained a certain level of sentience and intelligence. It also had to keep a handful of humans alive for the sole purpose of torturing them endlessly and incessantly forever or until the inevitable heat death of the universe, whichever came first.

How AM got to that point and the extent to which it torments humanity is something I certainly can’t put into words. That’s why I encourage everyone to read the story. Buy the book off Amazon. Look it up online and find an audiobook version. Take time to understand the story and the implications it entails. To this day, it ranks as one of the most horrifying stories I’ve ever come across.

But even if you don’t have time to read the book, check out this video from the YouTube channel, The Vile Eye. This channel does a great job of breaking down and analyzing the evil of many fictional characters. And for AM, I think it does a masterful job of encapsulating just how horrifyingly evil this AI is.

Now, I’m not saying that this story or others like it is a reason for us to stop developing advanced artificial intelligence. I think we’re already at a point where the genie is out of the bottle. The incentives are too strong. We’re not going to be able to turn the clock back on what we’ve already done.

But as we move forward with this technology, we need to be careful and mindful of the dangers. Beyond risks to jobs and the economy, there are far greater dangers that we must consider. A story like “I Have No Mouth, And I Must Scream” may reflect the most extreme of extremes, but it perfectly articulates just how bad it can get.


Filed under Artificial Intelligence, philosophy, technology

Jack’s (Non-Expert) Proposal For A Universal Basic Income Alternative

This is another video from my YouTube channel, Jack’s World. This video is another exploration of the impact of advanced artificial intelligence. Specifically, it looks at how we might structure society in an era when automation fueled by AI changes the nature of the economy. There have been many proposals on how best to build society around this economy. A universal basic income is often a popular proposal. While I am largely a proponent of such a policy, I think there’s room for something bolder. This video is my proposal for something that takes UBI a step further. Enjoy!


Filed under Artificial Intelligence, ChatGPT, futurism, Jack's World, YouTube

My (Non-Expert) Proposal For Automation And Greater Human Prosperity


I’m not an expert on much. I don’t consider myself exceptionally smart on matters that don’t involve superhero comics, NFL football stats, and quality romance stories. I make that disclaimer because I don’t want to give the impression that I know more than I know.

That kind of perspective is important, especially if you’re going to talk about complicated issues with not-so-clear solutions. I’ve attempted to talk about some of those issues on this site, some being much more complicated than others. I don’t claim to have shed new light on a subject or changed a few minds, but I like to think I still contributed something.

To that end, I’d like to make another contribution to a subject I’ve also touched on before. In the past, I’ve highlighted both emerging and potential issues associated with the development of artificial intelligence, including the sexy kind. I’ve also highlighted the issues we may face in a future where so much traditional work has been automated.

Now, in the wake of a global pandemic that revealed just how much we can get done at home, I suspect that trend will accelerate. I also expect that trend to bring many problems, not the least of which involve people struggling to find the kind of good-paying blue-collar jobs that have helped people rise out of poverty.

Turning back the clock or just trying to ban automation at a certain point is not a viable solution. There’s just no way to enforce that in the long term and it’ll only get more difficult once artificial intelligence gets to a point where it can match the capabilities of an ordinary human. At some point, we’ll have to adapt and that includes changing how we live, work, and play.

The living and playing parts have their own unique challenges, but I think the work aspect is more pressing. When most people express concerns about automation and AI, those concerns revolve largely around the economic impact, and understandably so.

Historically, people have had to work in order to earn money or resources to survive. Whether you’re on a farm in the 10th century or in a city in the 20th, this dynamic has remained fairly constant.

Automation, especially once supplemented by artificial intelligence, will likely upend that dynamic completely. It’s entirely possible that, at some point this century, we’ll develop machines that can do practically all the work humans have had to do in order to survive.

That work includes, but isn’t limited to, farming our food, mining raw materials, producing our goods, maintaining our streets, protecting our homes, and even governing our society. Since machines never tire and are prone to fewer errors, what other jobs will there be? I don’t doubt there will be jobs, but what form will they take? More importantly, will they pay enough for large swaths of people?

I don’t claim to know the answer, but I suspect they won’t. The dynamics of labor markets just can’t function when the machines are capable of doing so much more work than large swaths of people. Even if those people don’t work, they’re still going to need money and resources. How will they go about getting it?

Answering this question has often led to discussions about a universal basic income, which has actually become a more viable policy position in recent years. I’ve even touched on it a bit as well and while I think it’s a great idea, I think there’s also room for some supplementary policies.

For that reason, I’d like to submit one of those policies that could be implemented with or without a universal basic income. I call it the Individual Automation Matching Dividend, or IMAD for short. This policy would work like this:

  • All adult citizens within the borders of the country will have a piece of identifying information, such as a social security number, voter ID number, or driver’s license number, turned into a special digital token.
  • That token will be ascribed to a machine/robot/android that is currently active and conducting work that had been done by humans at some point in the past, be it manual labor, service roles, or something of that sort.
  • The productivity and wages of work done by these machines will be indexed to a minimum annual salary of approximately $78,000 in 2021, which will be adjusted for inflation on a yearly basis.
  • Any work done by these machines that exceeds the value of that salary will be diverted to a national welfare fund to provide extra support for those who are sick, disabled, or otherwise in need of resources beyond those of a healthy adult.
  • No citizen will be ascribed more machines than any other and any machine ascribed to them that is lost, damaged, or obsolete will be replaced in kind by the state.

I apologize if some of what I just described is confusing. I tried to write it out like a lawyer or someone proposing a new policy to a future government. For those who don’t care for legalese, here’s IMAD in a nutshell.

Once you become an adult, you get your own perfect worker robot. That robot may take many forms, but for the sake of simplicity, let’s just say it’s an android in the mold of the ones we saw in the “I, Robot” movie. They can work without rest, do everything a healthy adult can do, and have roughly equal or greater intelligence.

You’re given this robot by the government to basically act as your work avatar. So, instead of you going out to work every day to earn a living, this robot does it for you. The work that robot does will be compensated, but the money will go to you. Basically, you get paid for the work your android does. It’s more a dividend than a wage.

Remember, since the robot doesn’t age or get tired, it can work 24/7/365. In principle, you won’t even have to meet it. It just works all day and all night on whatever job requires labor, be it construction, assembly, shipping, farming, cooking, etc. You just get all the money, up to about $78,000 a year.
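To show the arithmetic behind that cap, here’s a minimal sketch of how an annual IMAD payout could be computed. The function and the inflation parameter are placeholders I made up for illustration; the only parts taken from the proposal itself are the $78,000 base figure and the rule that anything a machine earns beyond it goes to the national welfare fund.

```python
# Minimal sketch of the IMAD payout rule described above.
# Assumes we already know the dollar value of the work a citizen's
# machine performed in a year; everything else is illustrative.

BASE_DIVIDEND_2021 = 78_000  # indexed minimum, in 2021 dollars

def imad_payout(machine_output_value, cumulative_inflation=0.0):
    """Split a machine's annual output between its citizen and the welfare fund.

    machine_output_value: dollar value of the work the machine did this year
    cumulative_inflation: inflation since 2021, e.g. 0.10 for 10 percent
    """
    cap = BASE_DIVIDEND_2021 * (1 + cumulative_inflation)
    citizen_dividend = min(machine_output_value, cap)
    welfare_contribution = max(machine_output_value - cap, 0.0)
    return citizen_dividend, welfare_contribution

# A machine that produces $95,000 of work in a year, with 5% inflation since 2021:
dividend, welfare = imad_payout(95_000, cumulative_inflation=0.05)
print(round(dividend), round(welfare))  # 81900 13100
```

The exact numbers matter less than the structure: a simple, enforceable split between an individual dividend and a collective fund.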

Now, why did I choose $78,000? I didn’t pick that out of thin air. It’s adapted from a real-world case study: a company that started paying all of its employees a minimum of $70,000 a year back in 2015. The idea was that previous studies had shown that, beyond a certain point, making more money doesn’t increase people’s happiness. This company just took that idea and ran with it.

The results, by and large, were overwhelmingly positive. With that kind of money, people could create more comfortable lives. They could buy homes, start families, plan for retirement, and make investments. It makes sense. When people have this kind of money to work with, they have the resources they need to create prosperous lives.

The idea behind IMAD is to mirror that by leveraging the added productivity afforded by automation. It’s not some large blanket package of money like a universal basic income. It starts with an individual, acknowledges the work that they have historically provided for a society, and supplements that with technology.

I’m not saying it’s a perfect proposal. I’m not even saying it’s smart. For one, it assumes that one human-like android is enough and that we can control the artificial intelligence necessary for them to operate on a large scale. That’s still an ongoing issue. I’m sure there are plenty more problems I haven’t thought of, but that’s exactly why I’m sharing it.

Surviving a future with intelligent machines is going to be challenging enough. However, we can’t just stop at survival. We want to prosper. We want to live, love, and build better futures for ourselves and our loved ones. Technology like automation and AI can help us get there, but only if we use it wisely. It’s a big if, but one that’s worth working towards.


Filed under Artificial Intelligence, futurism, technology

Survey On Artificial Intelligence: Approximately 42 Percent Of People Would Have Sex With A Robot (And Why That Number Will Increase)


Every now and then, science decides to study something that most people would deem unnecessary because it seems so intuitively obvious. There are multiple studies about how overeating can lead to weight gain. Most people know that. Some are living proof of it. There’s no need for exhaustive studies to confirm it.

However, we study it anyway because the details can be revealing. They can also reveal subtleties and trends that are worth noting. They’re useful for businesses, researchers, policymakers, and anyone who’s simply curious about human nature.

I bring this up because recently, a company named Tidio, which specializes in chatbots, conducted a survey about peoples’ attitudes on artificial intelligence. It’s not the most scientifically rigorous survey, but that’s to be expected.

Artificial intelligence is a relatively recent trend and one that’s sure to become more relevant in the coming decades. Most people only have a cursory understanding of it and their perceptions are largely shaped by popular culture, going back to the days of “The Terminator.” Regardless of that limited understanding, people do have certain attitudes towards AI and that’s worth studying.

If you’re interested, here’s a link to the Tidio survey. I encourage everyone to check it out. It covers everything from how artificial intelligence could affect future jobs to whether people are comfortable letting an AI robot conduct surgery on them.

However, the one part of the survey I want to highlight is the one on sex robots. Given how often I’ve talked about them, as well as the sexy short stories I’ve written, that shouldn’t surprise anyone. I’ll also quote it here because, while it probably won’t surprise anyone either, I think it hints at a larger trend.

About 42% of our survey respondents would have sexual intercourse with a robot. Yet, only 39% believe they could have a romantic relationship with an AI. There is also a large discrepancy between men and women. Men are more open to both the idea of sleeping with a robot (48%) and falling in love with an AI (43% of male respondents).

I want to highlight that first figure because I think it’s a critical number to keep in mind. Even if you’re not great with math or statistics, 42 percent is not a trivial figure. If you extrapolate that to the global population, that’s well over 3 billion people. That’s a lot of people.

Now, it’s worth reiterating that this survey was limited to only 1,225 participants and is hardly representative of the entire world. Those limits aside, it does at least hint that a sizable part of the population is open to having sex with a robot. What form that takes likely depends on the person, but the idea is already there.

I think that’s revealing because, even if people only know sex robots and artificial intelligence through popular culture, there’s still an appeal. Even though many sex robots in fiction end up being evil, that doesn’t dissuade some people from wanting to try it.

On top of that, a truly functional sex robot doesn’t really exist right now. There are a few working prototypes in existence, but nobody is going to mistake them for real people anytime soon. We’re still years away from sex robots that are as realistic as the ones we see in shows like “Westworld.”

Even so, the fact that 42 percent of people in this survey are already open to the idea demonstrates that there is a waiting market for sex robots. There’s also a sizable part of the population that is growing increasingly comfortable with artificial intelligence being part of their intimate lives.

Given how younger generations view technology, I suspect that 42 percent figure will only grow in the coming years. If anything, I think that number is low. I suspect that if this survey was done with a larger sample size, more people of varying genders would express a willingness to interact with AI, as well as use a sex robot.

I also suspect that, in the coming years, our overall comfort with the idea of sex robots will increase. There may still be a period in which they’re taboo. There’s also the whole uncanny valley issue that will likely impact how sex robots are designed, refined, and marketed. In the long run, though, I think attitudes towards sex robots will continue to evolve.

The fact that 42 percent of people are already willing to try one shows that sex robots are already in the public consciousness. There is a kinky curiosity, of sorts, about what this technology has to offer. Make no mistake, the effects that will have on the future of sex, society, and technology are sure to be profound.

What do you think of this survey?

Do you agree with it?

Do you think it’s overstating or understating peoples’ willingness to embrace sex robots? Let me know in the comments. In the meantime, I’ll be contemplating how this could affect future sexy stories.


Filed under sex in society, sex robots

Artificial Intelligence Is Struggling With Racism (And The Larger Implications)

There’s no doubt that artificial intelligence will fundamentally change the world. Its potential is so vast that some even say it will be mankind’s last invention. Once it gets to a certain point, it won’t just be better at everything humans do. It may very well supplant humanity as the dominant life form on this planet. There is no shortage of movies that depict how dire that could be.

That said, it’s bound to go through some growing pains. Not all of those pains will involve a Skynet-level malfunction, but they will warrant concern.

At the moment, our artificial intelligence is still limited. It’s not dumb, but it’s very limited. It can do certain tasks very well, like play chess or win Jeopardy. It can’t apply that intelligence at a broader macro level like a human.

That still makes these systems useful and they’re still advancing rapidly. Artificial intelligence programs are used frequently for tasks like moderating comment sections and tracking sales data. The problem with these programs is that, since the AI is not generally intelligent, humans have to fill in the gaps. And since humans are flawed and biased, those traits sometimes find their way into the AI.

That’s what happened recently with YouTube’s comment moderation algorithms. As an aspiring YouTuber, I know how imperfect those algorithms can be. The intentions are noble. These AI programs are supposed to curtail hate speech. The internet needs that right now. Anyone who has ever visited 4chan knows that.

However, sometimes the AI systems are so narrow that they can’t see the forest for the trees. That’s what happened recently when those systems mistook discussions about chess for racist language. Tech Xplore did an article on it and while it’s somewhat humorous on the surface, it’s also quite revealing.

Tech Xplore: AI May Mistake Chess Discussions as Racist Talk

“The Queen’s Gambit,” the recent TV mini-series about a chess master, may have stirred increased interest in chess, but a word to the wise: social media talk about game-piece colors could lead to misunderstandings, at least for hate-speech detection software.

That’s what a pair of Carnegie Mellon University researchers suspect happened to Antonio Radic, or “agadmator,” a Croatian chess player who hosts a popular YouTube channel. Last June, his account was blocked for “harmful and dangerous” content.

YouTube never provided an explanation and reinstated the channel within 24 hours, said Ashiqur R. KhudaBukhsh, a project scientist in CMU’s Language Technologies Institute (LTI). It’s nevertheless possible that “black vs. white” talk during Radić’s interview with Grandmaster Hikaru Nakamura triggered software that automatically detects racist language, he suggested.
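To see how easy that kind of mistake is to make, here’s a deliberately crude sketch of a keyword-based flagger. To be clear, this is not how YouTube’s actual moderation works, since that system isn’t public; it’s just a toy showing how a narrow filter with no sense of context reads a line of chess commentary.

```python
# Toy keyword-based "hate speech" flagger with no sense of context.
# This is NOT YouTube's system; it only shows how a narrow filter
# can misread ordinary chess commentary.

SUSPICIOUS_PHRASES = [
    "black attacks white",
    "white threatens black",
    "black is dominating",
]

def naive_flag(comment):
    text = comment.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

chess_comment = "After the queen trade, Black attacks White's weak kingside pawns."
print(naive_flag(chess_comment))  # True -- flagged, even though it's only about chess
```

A model trained on real examples is far more sophisticated than this, but if chess commentary rarely shows up in its training data, it can fail in essentially the same way.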

Now, should we be concerned? Is it worrying that an AI with the backing of Google couldn’t surmise that simple terms like “black vs. white” were referring to chess and not race relations?

The short answer is not really.

The longer answer is not really, but we should learn important lessons from this.

The AI systems that moderate YouTube comments are nowhere near the kinds of systems we’d see in an artificial general intelligence. It’s like comparing a steam engine to a modern rocket. That said, we had to learn how to make a damn good steam engine before we could learn to make a decent rocket.

With something like advanced artificial intelligence, the margin for error is very small. You could even argue there is no margin for error. That’s why so many worry that such an AI could be an existential threat to humanity. If it’s too flawed to understand the difference between chess and racist rhetoric, then we could be in serious trouble.

The problem, in this case, isn’t with the nature of the AI. It’s with us, its creators. Since we humans are so flawed, racism being one of our worst flaws, it’s understandable that this sort of thing would find its way into our programming. It already has in a number of fields.

Again, those types of systems are limited and narrow. There’s a lot of room for human flaws to enter the system.

With advanced AI, those flaws could end up being extremely damaging. If too many of them find their way into a more advanced AI, we wouldn’t end up with a helpful, usable system. We’d end up with something like Skynet or Ultron. At that point, we’d be in serious trouble and we wouldn’t be able to rely on John Connor or the Avengers to save us.

We still have time. This latest issue with YouTube’s algorithms is minor, in the grand scheme of things, and fairly easy to correct. Once we get around to creating more advanced systems, though, we need to be aware of these flaws. We need to remember that any advanced AI we create will reflect our best and worst qualities. Let’s make sure our best win out in the long run.


Filed under Artificial Intelligence, technology, YouTube

Our Future Robot Overlords Will Now Be Able To Dance (Thanks To Boston Dynamics)

As bad as last year was for so many people, there were some things that 2020 just couldn’t stop. When it comes to technology, a global crisis has a way of hindering certain processes while accelerating others. For many, that meant more telework and reliance on streaming media to stave off boredom.

However, 2020 may very well end up proving just how frail human beings and their societies are. It only takes a tiny, microscopic virus to bring our entire society to a screeching halt. It’s sobering, but it’s probably going to be a source of humor for our future robot overlords.

I tend to be optimistic about the future and technological trends. I’m also somewhat of a pragmatist. I realize that we human beings have a lot of limits. Emerging technology, especially in the field of artificial intelligence, promises to help us transcend those limits.

Right now, it’s still mostly fodder for science fiction writers, futurists, and Elon Musk wannabes. We’re not quite there yet in terms of making a machine that’s as smart as a human. However, we’re probably going to get there faster than skeptics, naysayers, and the general public realize.

It won’t happen overnight. It probably won’t even happen in the span of a single year. When it does happen, though, hindsight will make it painfully obvious that the signs were there. This was bound to happen. We had ample time to prepare for it. Being fallible humans, we could only do so much.

In that sense, I suspect that years from now, we’ll look back on what Boston Dynamics did to close out 2020. This company, which has a history of making robots that look way too advanced to exist outside a Terminator movie, decided to do something with their robots that would leave an indelible mark on the year.

They succeeded by teaching their robots how to dance.

I know it already went viral, but it’s worth posting again. Remember this video and this moment. Chances are it’ll be a major indicator years from now that this is when robots began catching up to humanity in terms of capabilities. At this point, it’s only a matter of time before they exceed us.

When that time comes, will we be ready? Will we embrace them while they embrace us?

If they don’t, just know that they will now be able to dance on our graves.


Filed under Artificial Intelligence, Current Events, futurism, technology

Finding True Love And Living Happily Ever After According To Isaac Arthur


I’ve talked quite a bit about the future of love on this site. I’ve explored the impact of artificial intelligence, radical life extension, and human enhancement on the dynamics of pursuing love and fostering romance. I don’t claim to be the best when it comes to contemplating these topics. I’m humble enough to admit I’m not even in the top 10.

As such, I freely admit there are people who discuss this issue in far greater detail with far greater production values than I’ll ever manage. Just recently, I stumbled across a YouTube channel by someone who makes contemplating the future a relaxing, engaging, and downright pleasant experience.

The name of that channel is Isaac Arthur. If you, or anyone you know, has even a passing interest in future technology, sci-fi concepts, or space exploration, I highly recommend you check out this channel. It covers a wide range of topics, from colonizing space to future sources of energy to what kind of pets we may have in the future.

Naturally, a video he made about finding love using future technology is one of my favorites. It only came out earlier this year, but it perfectly breaks down how love, romance, marriage, and family may unfold in a future dominated by artificial intelligence and radical life extension. Mr. Arthur does in one video what I probably couldn’t do in a hundred articles. For that, I thank him.


Filed under Artificial Intelligence, futurism, romance, sex in society, sexuality, Sexy Future, technology

How Advanced AI Will Create Figurative (And Literal) Magic


If you went back 50 years and showed someone your smartphone, chances are they would be amazed. To them, such technology would seem downright alien. However, they probably wouldn’t think it was magic. Go back 500 years, though, and chances are they would think a smartphone is magic, miraculous, or a tool of the devil.

Just look at what a smartphone does and compare it to the magic of old. You can ask it a question and, depending on how well-worded it is, it’ll give you an answer. If you ask it to make food, clothes, or tools appear, it’ll make that happen too. Thanks to services like Amazon and Grubhub, this isn’t magic to most people. In fact, it’s downright mundane.

Granted, these things won’t appear instantly out of thin air, but depending on your willingness to pay for quicker shipping, it will get there. By medieval standards, that’s basically sorcery.

You don’t have to go too far back in time to appreciate the magic of modern technology. Most of us don’t understand how it works. We don’t know what makes the screens on our phones light up when we push a button or how our car moves when we press the accelerator. We understand that there’s science behind it and it’s not magic. It just feels like it from a certain perspective.

Famed science fiction author Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. It was one of the three laws he used in contemplating the future. Time and a host of amazing advances have proven the validity of this sentiment. We’ve created materials once thought to be impossible. We’ve uncovered phenomena that seem to undermine our understanding of physics.

This is to be expected because our understanding of the universe is incomplete. We have some pretty solid theories so far, but there’s still a lot we don’t understand. As we learn more, some of the things we discover may seem magical. Even in a world that is more educated than it has been at any point in human history, there may still be forces that our primate brains just can’t make sense of.

To some extent, it helps that humanity is making these discoveries through their collective effort. It helps us accept a seemingly-impossible idea if it comes from a member of the same species. What happens, though, when we gain knowledge from something that is both not human and many times smarter than the entire human race? Will it seem like magic to us?

I argue that it would. I would also argue that we’ll be seeing this kind of magic sooner than you think. It won’t come from some enigmatic sorcerer with a thick beard, a white robe, and an uncanny resemblance to Ian McKellen. It’ll likely come from the world of advanced artificial intelligence.

In the past, whenever I’ve talked about advanced artificial intelligence, I’ve focused on its potential to fundamentally change our civilization and what it means to be human. I haven’t touched on how it might work, mostly because I’m not nearly smart enough to make sense of it. However, that gives me more in common with the experts than you might think.

In the emerging, but rapidly growing, field of artificial intelligence, there’s a strange phenomenon known as black box AI. Simply put, this is when we understand the data that goes into an AI system and the data that comes out. We just don’t know how it went about processing that data. It’s like putting a slab of meat in an oven, pressing a button, and getting a Big Mac without knowing how it was made.
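As a rough illustration of that, here’s a small sketch using scikit-learn’s off-the-shelf neural network. The data and settings are invented for the example; the point is that the inputs and outputs are perfectly visible, while the learned weights in between tell us almost nothing a human can interpret.

```python
# Black box in miniature: we control the inputs and read the outputs,
# but the learned internals aren't humanly interpretable.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 2))
y = (X[:, 0] > X[:, 1]).astype(int)  # simple rule the network must rediscover

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.9, 0.1], [0.2, 0.7]]))  # expected: [1 0]
print(sum(w.size for w in model.coefs_))        # 1120 weights, none individually meaningful
```

Scale that up to models with billions of parameters and the gap between “it works” and “we understand why it works” becomes the black box problem in earnest.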

It’s not quite magic, but it’s a manifestation of Arthur C. Clarke’s ideas on science and magic. AI systems today are advancing at a pace that we can’t hope to keep up with. We already have systems that can surpass any human at Jeopardy, chess, and Go. We don’t yet have a system with the same intellectual capacity as an adult human, but most experts believe we’re well on our way to achieving that.

When that day comes, we may very well have an AI that does more than just process data in ways we don’t understand. Once an AI is capable of matching or exceeding the intellectual capacity of an average human, then it’s likely the black box phenomenon will become more pronounced.

Imagine, for a moment, we had an AI that was smarter than even the smartest human beings on the planet. We go to that AI, we feed it every gigabyte of data we have on human biology, and we ask it to devise a cure for cancer. It takes only a few seconds to process all that data. Then, with ease, it spits out the formula for something that has eluded generations of doctors.

We don’t know what form it may take. We may not even fully understand its components. However, it still works. From our perspective, it’s akin to a magical healing elixir straight from the world of Tolkien. We assume there’s some sort of science behind it, but we’re utterly baffled by the specifics. We just know it works.

It goes beyond medicine, as well. With an even more advanced AI, we could feed it every one of our most advanced theories about physics, biology, chemistry, and cosmology. We could then ask it to fill in all the gaps. Again, it gives us an answer and suddenly, we have a Theory of Everything.

We probably won’t understand the details. We may find out that we were dead wrong about particle physics, cosmology, or why our cell phone can’t hold a decent charge anymore. The knowledge such a system gives us could end up being so advanced that we literally do not have the intellectual capacity to understand it. It would be like an ant trying to do calculus.

In the same way a magnifying glass must seem like magic to an ant, the knowledge an advanced AI gives us may seem just as extraordinary. That’s especially true if we give that AI access to a 3D printer, a molecular assembler, or anything it could use to actually craft something.

That could be especially dangerous. For all we know, a sufficiently advanced AI could take a stack of dirty dishes and turn it into a nuclear bomb. We would have no idea how it would work. It would, for all intents and purposes, seem like magic to us. This thing would be doing something that our brains and senses tell us is impossible.

As the AI gets more advanced, its abilities and feats become more magical. At that point, it’ll be harder to accept that what it does counts as science. These advances are no longer coming from the efforts of people. They’re coming from a machine that’s millions, if not billions, of times smarter than any ordinary human could ever hope to be. How could it not seem magical from that perspective?

Throughout human history, sane and competent people have believed in magical things. Not long ago, people believed they could talk to the dead. Sir Isaac Newton believed in alchemy. Alfred Russel Wallace believed in literal spirits. Despite these beliefs, there was an important context to all these perspectives.

They emerged out of our collective ignorance of the world around us. We had nothing but our brains and our senses to make sense of it all. Since both can be easily fooled, as any competent street magician will attest, it doesn’t take much to get people to assume magic. An artificial intelligence would circumvent that context because it has something better than magic.

An advanced AI is not bound by the same senses that constantly fool ordinary humans. It doesn’t even have to misdirect or trick us. It only has to show us ideas and concepts that are completely real, but totally incomprehensible. The entire human race could spend a million years trying to understand it and it still wouldn’t be enough. It would still seem like magic to us.

That notion seems scary on paper and more than a few people have voiced concerns about it. However, all that magical thinking will only occur if our human brains remain unchanged and unenhanced. That’s not likely to be the case. Between the emergence of neural implants and the ongoing development of brain-machine interfaces, we’ll find a way to keep up with AI. If we want to survive as a species, we’ll have to.

Even if we do somehow keep up, there may still be aspects of advanced AI that seem like magic to us. That may always be the case, so long as we retain part of our caveman brains. Personally, I don’t think that’s a bad thing. No matter how smart or advanced we get, it helps to see a little magic in the world. With advanced AI, though, the rules for magic are bound to change, among many other things.


Filed under Artificial Intelligence, futurism