Tag Archives: Artificial Intelligence

Revealing Attitudes Towards AI Art (When People Don’t Know It’s AI Art)

When I was a kid, there was a general attitude towards any movie or TV show that was animated. Unless it was “The Simpsons,” animation was largely associated with kids. Even when there was a concerted effort to appeal to older demographics, animation was still seen as a less mature form of storytelling. Even when it was exceptionally well done, this stigma persisted.

Eventually, animation and animated content, in general, matured to a point where the stigma really doesn't work anymore. Cartoons aren't just for kids. We now have shows like “Bojack Horseman” and “Rick and Morty” that have helped change the popular perception of animation. On top of that, the added influence of Japanese anime, from “Dragonball Z” to “Ghost In The Shell,” has helped attitudes about animated content evolve even more.

This shift all happened within my lifetime. It’s a simple recognition that attitudes, perceptions, and trends are always changing. And they’re going to continue to change, regardless of how we might feel about certain issues now.

This brings me to AI generated art. It’s a topic that, depending on where you discuss it, can generate a lot of debate, disagreement, and consternation. I’ve been in more than one discussion with someone who claims AI generated art can only ever be a bad thing in the long run because it enables trolls, assholes, grifters, and bad actors.

I’ve also been in similar discussions with those who claim AI art isn’t even art. It’s, according to them, nothing more than a fancy calculator that crunches numbers on where to place certain pixels on a screen. While I think that’s a gross oversimplification, I don’t think it takes away from the end result. Whether it’s produced by a human or a computer, the end result is still the same. An artistic rendering is created. But regardless of what form it takes, we have certain attitudes about it.

As I write this, I think the prevailing sentiments toward AI art are negative. Show someone an AI generated image and they might concede that it looks nice. But they'll still have an aversion to it, knowing it's created by AI. And if you show them a human-generated piece of art, even if it doesn't look nice, that same person will still ascribe more value to it than they would to an AI generated image. That's just where we are right now.

But a recent study published in Scientific Reports uncovered an interesting insight. When the researchers conducted a survey that included approximately 200 people, they found that when people weren't aware that an image was AI generated, their attitudes about it were more positive. In many cases, they could not discern between the artwork generated by humans and the artwork generated by AI. But as soon as they were aware that something came from an AI, their sentiments changed.

It’s probably not too surprising, but it’s also revealing. It speaks to where we currently are in our perspectives on anything created by an AI. It still has this synthetic, uncanny valley feel to it. We still inherently ascribe more value to something that is created by a human over that of a machine. And while that is certainly understandable, given that human generated art requires more labor and passion, how much will that value persist in the coming years?

That's not an unreasonable question because the quality of AI art has changed considerably in the past few years. And it's likely to continue improving in ways we're not ready for. Not too long ago, it was somewhat easy to discern when something was created by an AI. The issues it had with rendering hands and fingers are well-documented. However, those issues have been addressed and improved considerably, especially with newer models.

As such, you don't need to be a wide-eyed utopian futurist to predict that AI art generators will improve to a point where it's genuinely difficult to tell whether a piece was created by a human. This study already showed how close we are. The participants couldn't tell, on a surface level, that they were dealing with a mix of AI and human-generated art. With improved tools, even the most discerning and tech savvy among us might not be able to figure it out.

At that point, our attitudes towards AI art will have to change. Think what you want about AI and whether it's capable of creating real art, let alone appreciating what goes into it. But if it's capable of making something we can't discern from its human-made counterparts, then those negative sentiments we have are entirely arbitrary, as well as subjective.

And those same attitudes might not be shared by younger people who have grown up in an environment surrounded by the internet, AI, and AI generated content. They might not have any issue with seeing AI generated art as real art. And anyone who just despises AI art as a concept might find themselves in the same group as those who still think all cartoons are just for kids.

It’s hard to know where these trends will lead as AI technology advances. It’s already affecting how we see art, knowledge, and what it means to be intelligent. And how we feel today might not be at all in line with how future generations will feel in the decades to come. Only time will tell.

I’ll simply note that humans, in general, are a visual species. We seek to create, admire, and cherish art, regardless of the source. For that reason, I think there will certainly be a place for any kind of art, whether it comes from a human or an AI.


Filed under AI Art, Artificial Intelligence, psychology, technology

How AI Art Could Upend/Destroy The Comic Book Industry

This is a video from my YouTube channel, Jack’s World.

This video covers some recent news surrounding accusations that major comic publishers have been utilizing AI art, along with the larger implications this new technology will have on the industry. Artificial Intelligence promises to impact a lot of industries, but the comic industry is uniquely vulnerable. And that vulnerability is definitely worth confronting as the technology continues to improve.


Filed under AI Art, Artificial Intelligence, DC Comics, Jack's World, superhero comics, YouTube

How AI Companions Can Be Helpful AND Harmful

It is not easy finding that special someone. It never has been, even if you’re rich, beautiful, and have strong social skills. Every generation encounters new challenges in their pursuit of love and companionship. I know I have. I’ve even shared some of those struggles before.

At the moment, I have not found that special someone. I am single, I live alone, and I currently have no romantic prospects of any kind. I’m honestly not even sure where to begin at this point in my life. Nearly everyone else in my immediate family has settled down and started having kids. I am very much behind the curve in that regard.

However, there are some individuals who are in an even tougher situation. I know I am lucky in many regards. I own my own home. I own my own car. I’m never behind on my bills or anything. But there are plenty of men and women my age who have none of that. Some of the people I went to college with are still stuck in debt and can’t even afford to pursue a serious relationship.

In that sense, I don't blame anyone for seeking other forms of companionship. Loneliness really does suck. It is objectively bad for your health. While it has become a hot topic, even in political circles, it has also led to some controversial trends. And one of the most contentious involves people seeking AI companions.

Now, before I go any further, I want to make clear that I am somewhat reluctant to talk about this. While I’m usually up for any discussions about artificial intelligence, especially with how it may affect our love lives, this one specific aspect of AI is riddled with complications.

On one hand, there’s the general stigma. Most AI companions, such as Replika, are essentially idealized avatars of whatever kind of companion the user wants. If a man wants an AI girlfriend to look like a porn star and have hair like a girl from his favorite fantasy anime, he can have that and the personality to match. And while that is appealing, as a product, it still carries a stigma with it.

Men like this who use AI companions aren’t seen in a very sympathetic light. They’re more likely to be seen as examples of toxic male behavior. They’re not just lonely and in need of companionship. They’re seen as perverts who prefer a girlfriend that they can turn off, manipulate, or control in whatever way they please.

And make no mistake, there are men who treat their AI companions like that. They’re not all that subtle about it, either. But most of these men were shallow, crass, and crude before the advent of AI companions. They would’ve been that way regardless of whether or not this technology existed. There have always been men like that. And there always will be to some extent.

But there's also a double standard at work with these men. Because there are AI companions for women too. They're every bit as available as the ones men use. They just don't get as much scrutiny and don't carry as much stigma. If a woman were to create an AI companion to resemble her favorite male celebrity, chances are she wouldn't be stigmatized as much as her male counterparts.

Some may see this as concerning, thinking the woman must have issues if she's resorting to AI companions. But she would certainly garner less stigma than the man.

I would still argue there are women out there who seek AI companions for the same reason as men. They’ll even mold avatars meant to resemble the sexiest, most attractive figure they can conjure. I don’t claim to know how common it is, but I don’t doubt this exists.

Even with that kind of shallow use of this technology, I think it’s much more common that these users are just lonely. They seek companionship the same way most humans seek companionship. Even if there are plenty of people to interact with, AI companions help fill a particular need. That’s really all there is to it.

That's not to say that AI companions are harmless. I strongly believe they can do real harm. It just depends on the user and how they go about interacting with these AI systems.

If someone is manipulative, controlling, abusive, and self-centered, then having an AI companion that they can mold to their whims is not going to temper those tendencies. More than likely, they’ll get much worse. They’ll basically set a standard for the user that conditions them to expect certain qualities in a companion. And since real people can’t be molded, manipulated, or configured like an AI, they’ll never find someone who meets their impossible criteria.

In the process, that same user might grow bitter and angry that no real person can be to them what their AI companion is. And as these feelings simmer, it could just lead them into a destructive cycle of resenting everyone and everything that they can’t control the same way they control their AI companion.

That is very much a worst-case scenario for users of AI companions. I did try to look up research on this, but it was hard to come by. Both the stigma and novelty of these products make it difficult to assess. Maybe I’m being too hopeful, but I think cases like this are rare.

They certainly exist, but they're the exception rather than the norm. They just tend to get more attention because seeing horrible people reinforce their horrible behavior with these AI companions is disturbing to many people, and understandably so.

At the same time, I also believe that AI companions can be genuinely beneficial for a lot of people and those benefits are likely understated. Remember, we are social creatures. And as intelligent as we can be, we’re also blunt instruments with respect to certain mental faculties. Our brains and our psyche don’t care about the nature of social interaction. So long as we find it fulfilling on some levels, we’ll incur the benefits.

In their early form, AI companions probably didn't offer much in that regard. But in recent years, with the rise of AI chatbots and large language models, it's relatively easy and cheap to create an AI that people can interact with in ways that closely resemble interactions with real humans. And the growing size of the AI companion industry is solid evidence that there is a growing market for this sort of thing.

But the good these AI companions could do goes further than simply giving people a facsimile of human interaction. Remember, the current crop of AI chatbots and LLMs are relatively new. They're like the early models of the iPhone. They're going to continue being refined, developed, and improved upon now that an industry is being built around them.

In time, AI chatbots and general AI technology will improve.

At some point, AI technology will advance to the point where it can offer more than just base-level interactions. In theory, an AI could be configured in a way that didn't just perfectly complement the personality of the user. It could also interact with them in a way that fosters healthy personal growth, just like any other good relationship.

There could even be AI companions specifically configured to deal with abusive men or women, helping them understand and deal with their issues in a way that makes them better individuals. That could be life-saving for certain people who struggle to find companionship due to issues like personal trauma or mental illness.

These AI companions don’t even need to take a physical form. They don’t need to be incorporated into sex robots or anything. They can still be effective as simple avatars on smart devices. There would certainly need to be some level of testing, safeguards, and refinement in order to make them work effectively. It might even take years before AI companions have such capabilities.

That’s the most I’m willing to say about AI companions at the moment. I don’t doubt this industry will continue to evolve in the coming years. I also don’t doubt there will be plenty of controversies about the ethics of these companions, as well as how they affect the user.

But even in their current form, with their current level of intelligence, these companions offer lonely people an outlet. Reasonable people can argue just how healthy or unhealthy that is. But it doesn't change the fact that lonely people are out there. They're seeking connection and companionship like everyone else. These AI companions aren't perfect replacements, but they're better than nothing.


Filed under Artificial Intelligence, futurism, men's issues, psychology, romance, sex in society, sex robots

Artificial Intelligence, The Entertainment Industry, And Their (Uncertain) Future

This is a video from my YouTube channel, Jack’s World.

This video is part speculation and part thought experiment on the impact of artificial intelligence on the entertainment industry. Since the WGA/SAG strikes of 2023, the impact of AI on entertainment has been impossible to ignore. While the technology is still in a very early stage, we're already seeing it affect the course of multiple industries, and its effect on entertainment might be the most profound.

What could this mean for consumers?

What could it mean for the companies, studios, artists, and workers that produce our entertainment?

It’s difficult to determine at this early stage, but I make an effort to imagine what artificial intelligence could mean for the future of entertainment.


Filed under Artificial Intelligence, movies, Neuralink, technology, YouTube

AI Chatbots May (Thankfully) Render Homework Obsolete

Homework sucks.

Let’s get that out of the way.

I doubt anyone will disagree with that sentiment. No matter who you are or how many years you’ve been out of school, you probably don’t miss doing homework. It’s one of those special shared hatreds reserved only for traffic jams, parking tickets, and slow internet. But unlike those undeniable frustrations, homework isn’t an inescapable force of nature or law. It’s something we, as a society, choose to continue.

I've certainly questioned that choice, going back to when I was still in school. Having to do homework was among the many reasons why I was so miserable in school. And even though it was required, I can't honestly say it ever helped me learn anything. Most teachers and administrators explained that it was important for ensuring we were adequately learning the material. But as I've gotten older, I've come to realize that, even if that were a valid reason, it was still ineffective.

Just ask yourself honestly. Did you ever do homework because you were curious and wanted to learn?

Now, I could rant and lament on why homework sucks for days on end. But rather than torture myself to such an extreme, I wanted to highlight something that might offer hope to those who still remember how much homework sucked, as well as those currently in school at this very moment. It has to do with the impact of artificial intelligence and chatbots like ChatGPT.

I know I’ve talked a lot about artificial intelligence in the past. I’ve also highlighted the impact and hype surrounding ChatGPT. It is definitely one of the most intriguing and disruptive technologies to come along in decades. But unlike other discussions about whether artificial intelligence and ChatGPT will lead to the destruction of the human race, this is one issue in which the impact is already happening.

Recently, Vox produced an intriguing video about how ChatGPT has impacted education, especially homework. Even as someone who graduated school years ago, I found the issues and insights of this video remarkable. I encourage everyone to check it out.

The long and short of it is simple. ChatGPT is rendering most homework assignments, be they essays or worksheets, obsolete. Students are using ChatGPT to basically do the bulk of the work for them. The only real effort they need to put in is making sure that whatever they produce isn't obviously the product of a chatbot.

That alone can be difficult. It is well-documented that chatbots like ChatGPT can be inaccurate. But when compared to having to do a long, boring assignment that a student probably isn’t interested in, that kind of challenge seems manageable by comparison.

Also, in the interest of full disclosure, I freely admit that I probably would’ve used ChatGPT when I was in school if I had access to it. I promise it wouldn’t be entirely out of laziness or an unwillingness to learn. I just found most homework assignments to be so dull and pointless that I cared more about just getting them done rather than actually learning anything.

I imagine I’m not the only one who feels this way. I suspect the majority of students simply see homework as a means of ensuring grades rather than actually learning something. And even if that assumption is flawed, it’s still an issue that speaks to major flaws in how we educate ourselves and others.

And until ChatGPT, it was easy to ignore that issue. Schools, teachers, and administrators had no reason to stop giving homework or question whether it was an effective tool. It was just one of those things that our education system had always done. So, why not keep doing it?

Well, now there's a valid reason. Homework, as we know it, can be easily completed by any student with an internet connection. If there was any learning potential, it's pretty much lost. As the Vox video stated, it has led schools and educators to consider an entirely new approach.

The knee-jerk response that I suspect most will adopt is to try and ban or limit the use of chatbots. There are software programs out there that can help detect content that has been generated by a chatbot. However, I liken these programs to using scotch tape to seal the ever-widening cracks of a faulty foundation.

Because, like it or not, these AI chatbots are becoming more advanced. And the tools to keep up with them are always going to lag behind. That is a losing race and one no education system should attempt.

There’s even precedent for surmising why that’s a bad approach. When I was in college, there was a blanket ban on using Wikipedia. But enforcing that ban was a losing battle that caused more problems than it solved. It also created some nasty situations where students were accused of plagiarism when they did nothing of the sort. It took a few high-profile incidents, but most schools eventually came to embrace Wikipedia as a useful tool when approached correctly.

I think the impact of chatbots will have to go through a similar process. But unlike Wikipedia, the application of chatbots is a lot broader. These are tools that can effectively summarize books, write essays, and even write poetry with a few simple prompts. And in the same way young people have become more tech savvy than their parents, I suspect they’ll become more adept than most with respect to navigating chatbots.

That means homework, as we’ve been doing it for the past several decades, will be obsolete. While that’s certainly cause for celebration for many, it’s also an opportunity to take a step back and evaluate the process of education, as a whole.

It’s still very important that we educate young people in a meaningful way.

It’s also important to acknowledge that young people today will have access to resources that others have not.

If homework is no longer useful in that regard, what else could we do? What’s a more effective way to teach kids a concept, even when they’re not that motivated to learn it?

I don’t claim to know the answers. I am not a teacher, but I do remember how miserable I was in school and how little it really taught me. Hopefully, the impact of chatbots like ChatGPT will prompt a more thorough evaluation of how we approach education. Because if we keep clinging to old methods, then nobody will benefit in the long run, especially kids.


Filed under Artificial Intelligence, ChatGPT, technology

My Thoughts, Opinions, And (Mixed) Feelings About AI Art

This is a video from my YouTube channel, Jack’s World.

This video explores my thoughts, feelings, and overall sentiments surrounding AI art. You might have noticed that some of my more recent videos have been incorporating more AI art lately. I do so with the understanding there is a fair amount of controversy and concern surrounding this technology. I do my best to address that while offering what I hope is a balanced perspective on AI art and the use of generative AI. Enjoy!


Filed under AI Art, Artificial Intelligence, Jack's World, technology, YouTube

Neuralink Has Implanted Its First Brain Chip (But Don’t Get Excited)

I know it's been a while since I talked about Neuralink. A few years ago, I followed news about this endeavor closely because I genuinely believed that brain/computer interfaces were an important technology to develop. These days, I still believe that. I think it has only become more vital as the progress in artificial intelligence has accelerated.

However, there's a reason why I haven't talked much about Neuralink in quite a while. I admit some of that is because some pretty significant world events got in the way of following technology trends. However, a much bigger part of my disinterest came from the man behind Neuralink, Elon Musk.

The long and short of it is this. I once respected and admired Elon Musk for his efforts to develop new and emerging technology. But then, once I started scrutinizing him, his background, his business practices, and his tendency to make ridiculous promises that he can never keep, I came to the conclusion that Elon Musk isn’t just an unscrupulous businessman. He’s an asshole.

Believe me, there’s a lot I could say about him, but I prefer not to. Insulting someone with billions of dollars and access to high-priced lawyers is risky these days. I’ll just say that the combination of COVID-19 and buying Twitter didn’t really break Musk. It just exposed the kind of person he really is and that person is not one worthy of respect.

Now, with respect to Neuralink, there is still some real value behind the company, the efforts, and the technology. Integrating the human brain with technology isn’t just a promising field. It might very well be necessary if humans are to adequately adapt to a future in which there will likely be machines much smarter and more capable than any person who has ever lived. Even if AI never becomes as intelligent as an average human, having humans that can interact with it on a more intimate level could fundamentally change our species and our society for the better.

And to that end, Neuralink has officially taken a critical first step. According to NPR, the company has successfully implanted its first brain chip into a human participant. We don’t know many details, nor do we know the identity of the person who received the chip. We only know what was conveyed in the announcement, which is rarely something you can take at face value with Elon Musk.

But even if you don't trust Musk, and you shouldn't, this feat has been in the works for a while. Neuralink has been actively recruiting volunteers for implants for over a year now. And this effort was authorized by the United States Food and Drug Administration. The primary participants, at least for now, are those who have suffered brain or spinal cord injuries. So, it's not like Neuralink is accepting applications from those who just want a brain chip for no reason.

That approach is to be expected. Even Musk has said that the initial efforts with Neuralink will focus on helping paraplegics or those suffering from conditions like ALS. If and when the technology matures, then it’ll expand access to other users, but still for therapeutic purposes. Eventually, it would get to a point where brain implants aren’t just treatments. They would be like smartphones that people willingly purchase and have implanted.

And it’s that last part where I believe proponents of this technology should temper their hopes. Until we know more about the patients, the nature of the brain implants, and the impact on the participants, nobody should be eager to get one themselves. These brain implants are not about to become the next iPhone. This is a technology that needs much more investment, refinement, and development.

And if history is any guide, you really shouldn’t trust someone like Elon Musk to deliver that level of advancement. Despite the persona he tries to convey, he is not Tony Stark. He is not a brilliant scientist or inventor. He just hires those kinds of people and has a nasty habit of screwing them over.

He is, and always has been, a salesman first. He may very well be a genuine futurist trying to build a better future, but his tendency to get into petty feuds on social media and say objectively dumb things should give everyone pause. At the end of the day, he's an obscenely wealthy, incredibly out-of-touch businessman.

That means he likely sees Neuralink as just another business that he hopes will make him even richer than he already is. Even if he believes in the inherent value of the technology, he will exploit it for billions if it ever becomes a viable commercial product. That’s just how billionaires operate. And given the distressing tendency for billionaires to be psychopaths, it would be unwise to give Elon Musk or anyone like him access to your brain.

Now, I'll say it again. This technology is important and what Neuralink achieved is a vital first step. The successful operation of one brain implant means it's no longer just on paper. This technology is officially real. Like the first person who drove a car or the first person to fly a plane, this is a major leap in our ongoing efforts to develop better and bolder technology.

We don't know where this feat will lead or whether it'll pan out in any meaningful way, but it's worth being hesitant and a little extra cautious. It's not wrong to trust in the sincere efforts of people who want to improve the lives of others. But it's always wise to be skeptical of the intentions of unscrupulous billionaires with inflated egos.


Filed under Artificial Intelligence, Neuralink, technology

Update On (Non) AI George Carlin Special: New Details And Implications

Things tend to move quickly in the world of technology and artificial intelligence. And since the arrival of ChatGPT, the pace seems to have accelerated. I try to keep up with it as best I can. And I only really write or make videos about it if I have something meaningful to contribute.

But there are some stories I follow more closely for personal reasons. One such story has been the controversy surrounding the AI-generated George Carlin special that came out in early January 2024. I certainly hope I made my feelings on that issue clear. As much as I support the development of AI, I felt like this was in really poor taste.

It didn’t sound much like George Carlin. The humor, while funny at times, didn’t hit as hard as Carlin’s unique brand of comedy. And, most importantly, it was done without the consent of George Carlin’s family, namely his daughter, Kelly Carlin. It’s one thing to impersonate a dead celebrity with living relatives. It’s quite another to use an AI to create what could amount to a post-mortem piece of work that’s done in the mold of that celebrity.

Well, not surprisingly, Kelly Carlin didn’t stop at just scorning the creators of this AI special. According to the Hollywood Reporter, she filed a lawsuit against the creators of the comedy special on the basis that it’s an unauthorized use of George Carlin’s copyrighted work, while also illegally using Carlin’s name and likeness for promotional purposes.

Make no mistake. This case will be of immense interest to those concerned about how we use, abuse, and develop AI in the future. This, on top of new concerns about deepfakes of artists like Taylor Swift, is set to create some major precedents. And they're bound to affect users, companies, and the nature of the AIs themselves.

However, the filing of this lawsuit has added a new wrinkle to this controversy that is definitely worth acknowledging. It has since come out that the script for this “special” wasn’t a result of an AI generating it from a compilation of all of Carlin’s past works, as claimed by the creators. It was entirely written by a human named Chad Kultgen.

And, with all due respect to Chad Kultgen, he is no George Carlin. That really showed in the special.

Also, fuck Chad Kultgen for thinking he could match Carlin’s brilliance or even the capability of a non-human AI.

Now, this complication isn't going to stop the lawsuit. If anything, it might strengthen the case of Carlin's estate because it makes this special a much clearer example of illegally using someone's likeness or brand for promotional purposes. It might even rise to the level of fraud, since the special claimed to be the product of an AI but was actually just some guy trying to write a special and using George Carlin's likeness to sell it.

And, regardless of how you feel about the legal implications here, that’s still a dick move. I can’t imagine that makes this any less difficult for Kelly Carlin or the rest of the Carlin family.

But, based on what we now know in wake of this lawsuit, the only thing AI was used for in making this special was to create the not-so-compelling voice that delivered this human-made script and the images included within the video. In terms of capabilities, that’s not a very impressive use of AI. If anything, that makes this special even less impressive.

One of my biggest criticisms of it was that it sounded nothing like Carlin in terms of tone and style. Considering that there are many existing AI tools right now that can closely mirror the voices of celebrities, this just makes the creators of this special seem cheap and lazy. But the script being entirely human-written does help add another important perspective.

As impressive as AI is at the moment, it is a long way from fooling anyone into thinking it can mimic the nuance or mannerisms of a human, let alone one as special as George Carlin. I do wonder if those behind this special actually tried, but quickly realized it couldn't be done. I also wonder if someone who is even more unscrupulous will actually try again at some point, possibly with better AI tools.

If they do, then this lawsuit becomes even more important. Because the law tends to move much slower than technology. And if AI continues to advance, then incidents like this will become more common. For now, only time will tell what this will mean for the future of AI and those it impacts, living or dead.


Filed under AI Art, Artificial Intelligence, funny, technology

An AI Version Of George Carlin Just Released A Comedy Special (And That’s Disturbing)

There are certain celebrities and pop culture icons that we tend to appreciate more after they’ve passed away. Like everyone else, they too must face death at some point in their illustrious lives. And when they do, all the work they did becomes inherently more precious because it is now finite. There can be nothing new added to their body of work. It’s as set as the strongest bedrock. It might be upsetting to many fans, but it’s just a fact of life.

Then, artificial intelligence comes along and suddenly, there’s a crack in that bedrock.

Even as someone who has closely followed and written about artificial intelligence for years, I did not see this coming. And it took a form I never would’ve expected.

Recently, a couple of comedians utilizing an unspecified AI created an hour-long comedy special featuring the late, great George Carlin. That “special,” and I use that term loosely, is currently on YouTube. For reasons that I hope are understandable, I'm not going to link to it. Just know that, as of this writing, it has racked up over 427,000 views and 11,000 likes.

Now, in general, I support the development of artificial intelligence. I still genuinely believe it is the most important technological advancement we could develop as a species. But this particular manifestation of its use has me both concerned and distressed.

For one, George Carlin is my favorite comedian. I would go so far as to call him the single greatest comedian of all time. And that’s an opinion many share. Ask most people who the best comedian of all time is and George Carlin is usually in the top five alongside the likes of Richard Pryor, Lenny Bruce, Rodney Dangerfield, Sam Kinison, and Robin Williams.

To me, George Carlin is the best and there’s no close second. He’s one of those comedians who didn’t just find success over the course of multiple decades. You can still go back and watch his specials from the 1970s and his material is still relevant. His jokes, his style, and his brilliant insights are still on point, as well as hilarious.

George Carlin really was a one-of-a-kind artist, comedian, and philosopher. And when he passed away in 2008, his death hit a lot harder than most. But his legacy has only strengthened in that time as more people discover or re-discover his comedy. But this AI version of him is sure to complicate that legacy and for all the wrong reasons.

For one, this “special” was done without permission, review, or compensation to George Carlin’s family. His daughter, Kelly Carlin, has gone on record as stating that she did not authorize this, let alone receive compensation for it. This was not an instance of a celebrity willingly licensing their voice or likeness to a company, as James Earl Jones did with Disney in 2022. This was done without any consideration whatsoever for how Carlin’s family might react to this or what it might do to his legacy.

As for the special itself, I do admit I have listened to it. And I can confirm that it sounds nothing like George Carlin. The voice sounds like a terrible impersonation of George that someone half-drunk would try to do on a five-dollar bar bet. I promise nobody is ever going to mistake the voice in this “special” for George Carlin. I don’t know if that’s by design or just a limit of the current technology.

But with regards to the actual substance, this is where it gets even more distressing. Because in many instances, there are jokes and comments that actually sound like something Carlin would say. Yes, they lack his distinct delivery, tone, and mannerisms. But many of his comments on politics, pop culture, people, and society really do feel like they might come up in a real special.

That said, there are also instances where it says things Carlin probably wouldn't say. There's nothing too egregious. He never says anything that sounds so out-of-character that it's jarring. But it's also clear that the content was edited. There are distinct breaks in the audio that indicate someone edited this to sound more refined. That leads me to believe that whatever AI was used to make this definitely conjured some content that would've been too obviously out-of-character.

I’ll even go so far as to say that there are some parts of the special that are funny. It still doesn’t resonate as well as one of Carlin’s classic comedy specials. But there are jokes in this special that probably will make some people laugh. It’s not entirely clear if those jokes were purely the product of the AI or if those behind it tweaked the language. But the humor is there.

Be that as it may, just listening to a small part of this special feels off. Beyond just sounding painfully synthetic, it establishes a precedent that the estates of other dead celebrities would be wise to consider. Because George Carlin isn’t the only one with a long catalog of comedy, writings, interviews, and movie credits.

Who's to say that an AI like this won't do the same for Robin Williams or Richard Pryor? It wouldn't even be restricted to comedians, either. The works of musicians like John Lennon, Kurt Cobain, Michael Jackson, and Elvis Presley could also be used to create entirely new songs that might sound eerily similar to something they might have created.

If that happens, what does it do to their legacy? What happens if an AI creates something in the mold of a dead celebrity that says or does something outrageous, offensive, or detrimental? What would that do to their legacy? And what would it do to their still-living families?

I honestly can’t imagine how someone like Kelly Carlin must feel hearing a poor imitation of her dead father. But I can easily imagine other celebrity heirs or estates willingly licensing their dead relatives to make extra money somehow.

It could mean we’ll be seeing more strange, distressing content like this in the coming years. And as AI technology continues to improve, it will likely get to a point where it’s so capable that it’s almost too believable. I don’t claim to know what kind of world that might create or lead to. I just know that “specials” like this won’t stop with George Carlin.


Filed under AI Art, Artificial Intelligence, ChatGPT, technology

Napster, Generative AI, And How The Internet’s Past May Define Its Future

This is a video from my YouTube channel, Jack’s World.

This video explores a major chapter in the history of the internet to help inform what may lie ahead in the future. And for those of a certain demographic, one of the most disruptive programs during the early days of the internet was Napster.

This famous/infamous program completely upended the music industry. It also triggered lawsuits and upheavals from which the industry has never recovered. How it all played out was complicated, but the end result set a precedent that has helped shape the course of the internet.

Now, with the rise of generative AI, a similar predicament is playing out. And we would all be wise to heed the lessons learned from Napster.


Filed under AI Art, Artificial Intelligence, ChatGPT, Jack's World, technology, YouTube