This is a video from my YouTube channel, Jack’s World.
This video is part speculation and part thought experiment on the impact of artificial intelligence on the entertainment industry. Since the WGA/SAG strikes of 2023, the impact of AI on entertainment has been impossible to ignore. While the technology is still in a very early stage, we’re already seeing it affect the course of multiple industries, and the effect on entertainment might be the most profound.
What could this mean for consumers?
What could it mean for the companies, studios, artists, and workers that produce our entertainment?
It’s difficult to determine at this early stage, but I make an effort to imagine what artificial intelligence could mean for the future of entertainment.
I understand why they exist. I understand that content creators, media outlets, and individuals need to make money for their efforts. It’s a necessary evil to fund the many good things the internet provides.
That being said, fuck paywalls. They’re one of the most annoying things to manifest on the internet that don’t involve whiny trolls with too much free time.
With each passing year, it seems, more and more websites are resorting to paywalls to restrict or limit access to their content. This is especially common with news outlets, including major newspapers like the Washington Post and the New York Times. But I’m also seeing it on smaller outlets, especially those that cover sports, entertainment, or niche genre news.
They’re all annoying as hell. They’re basically asking for a newspaper subscription that can range anywhere from a couple of dollars to the equivalent of a yearly Netflix subscription. At a time when most people are already annoyed by the number of subscriptions they have to pay for, this is making a bad problem much worse. It’s essentially segmenting the internet in ways that run counter to how it began.
But rather than lament this annoying trend, I’d like to offer some guidance to those who hate paywalls as much as I do. Even if you too understand why they exist, nobody will blame you for not wanting to purchase an entire subscription just to read one or two articles. For those instances, there are some ways to get around that.
For the sites that offer a handful of free articles before requiring a subscription, you could always use a VPN. A VPN, or Virtual Private Network, basically masks your IP address so that it always seems like you’re visiting from a new location. While some sites are aware of this tactic, it’s very difficult for them to filter it out. VPNs offer many other uses aside from getting around paywalls. They do come at a cost, but you get a lot out of them.
But if you don’t want to invest in a VPN, another way to get around paywalls is to use certain free sites. They don’t necessarily get around the paywall. They just allow you to see the content in a more raw format. It’s like copying the sheet music instead of the song and letting another program play it. All you do is put the URL of an article or site section into the site and let it work.
There are a number of sites that do this. Not all of them work on every site, but here are the ones that work best.
Please note that these don’t work for every site. For some media outlets, you will get errors or your browser will just freeze. But in most cases, you can usually expect to get around the paywall and at least read the main substance of an article.
It works for now. I don’t doubt for a second that many outlets are already looking for a way to block this sort of thing. And if there comes a day when all paywalls can be circumvented, then those same outlets will find another annoying way to fund their operations.
Again, I get why this happens. Before paywalls, there were all sorts of insufferable pop-up ads, banner ads, and mailing lists. Once those fell out of favor, site operators looked for another way to monetize their content. Paywalls are just the latest manifestation of that effort.
They’re annoying and most people still hate them. But, so long as they exist, we need to live with them and know how to get around them. I hope this helps.
I doubt anyone will disagree with that sentiment. No matter who you are or how many years you’ve been out of school, you probably don’t miss doing homework. It’s one of those special shared hatreds reserved only for traffic jams, parking tickets, and slow internet. But unlike those undeniable frustrations, homework isn’t an inescapable force of nature or law. It’s something we, as a society, choose to continue.
I’ve certainly questioned that choice, going back to when I was still in school. Having to do homework was among the many reasons why I was so miserable in school. And even though it was required, I can’t honestly say it ever helped me learn anything. Most teachers and administrators explained that it was important for ensuring we adequately learned the material. But as I’ve gotten older, I’ve come to realize that, even if that were a valid reason, it was still ineffective.
Just ask yourself honestly. Did you ever do homework because you were curious and wanted to learn?
Now, I could rant and lament on why homework sucks for days on end. But rather than torture myself to such an extreme, I wanted to highlight something that might offer hope to those who still remember how much homework sucked, as well as those currently in school at this very moment. It has to do with the impact of artificial intelligence and chatbots like ChatGPT.
I know I’ve talked a lot about artificial intelligence in the past. I’ve also highlighted the impact and hype surrounding ChatGPT. It is definitely one of the most intriguing and disruptive technologies to come along in decades. But unlike other discussions about whether artificial intelligence and ChatGPT will lead to the destruction of the human race, this is one issue in which the impact is already happening.
Recently, Vox produced an intriguing video about how ChatGPT has impacted education, especially homework. Even as someone who graduated school years ago, I found the issues and insights of this video remarkable. I encourage everyone to check it out.
The long and short of it is simple. ChatGPT is rendering most homework assignments, be they essays or worksheets, obsolete. Students are using ChatGPT to basically do the bulk of the work for them. The only real effort they need to put in is making sure that whatever they produce isn’t obviously the product of a chatbot.
That alone can be difficult. It is well-documented that chatbots like ChatGPT can be inaccurate. But when compared to having to do a long, boring assignment that a student probably isn’t interested in, that kind of challenge seems manageable by comparison.
Also, in the interest of full disclosure, I freely admit that I probably would’ve used ChatGPT when I was in school if I had access to it. I promise it wouldn’t be entirely out of laziness or an unwillingness to learn. I just found most homework assignments to be so dull and pointless that I cared more about just getting them done rather than actually learning anything.
I imagine I’m not the only one who feels this way. I suspect the majority of students simply see homework as a means of ensuring grades rather than actually learning something. And even if that assumption is flawed, it’s still an issue that speaks to major flaws in how we educate ourselves and others.
And until ChatGPT, it was easy to ignore that issue. Schools, teachers, and administrators had no reason to stop giving homework or question whether it was an effective tool. It was just one of those things that our education system had always done. So, why not keep doing it?
Well, now there’s a valid reason. Homework, as we know it, can be easily completed by any student with an internet connection. If there was any learning potential, it’s pretty much lost. As the Vox video stated, it has led schools and educators to consider an entirely new approach.
The knee-jerk response that I suspect most will adopt is to try and ban or limit the use of chatbots. There are software programs out there that can help detect content that has been generated by a chatbot. However, I liken these programs to using scotch tape to seal the ever-widening cracks of a faulty foundation.
Because, like it or not, these AI chatbots are becoming more advanced. And the tools to keep up with them are always going to lag behind. That is a losing race and one no education system should attempt.
There’s even precedent for surmising why that’s a bad approach. When I was in college, there was a blanket ban on using Wikipedia. But enforcing that ban was a losing battle that caused more problems than it solved. It also created some nasty situations where students were accused of plagiarism when they did nothing of the sort. It took a few high-profile incidents, but most schools eventually came to embrace Wikipedia as a useful tool when approached correctly.
I think the impact of chatbots will have to go through a similar process. But unlike Wikipedia, the application of chatbots is a lot broader. These are tools that can effectively summarize books, write essays, and even write poetry with a few simple prompts. And in the same way young people have become more tech savvy than their parents, I suspect they’ll become more adept than most with respect to navigating chatbots.
That means homework, as we’ve been doing it for the past several decades, will be obsolete. While that’s certainly cause for celebration for many, it’s also an opportunity to take a step back and evaluate the process of education, as a whole.
It’s still very important that we educate young people in a meaningful way.
It’s also important to acknowledge that young people today will have access to resources that others have not.
If homework is no longer useful in that regard, what else could we do? What’s a more effective way to teach kids a concept, even when they’re not that motivated to learn it?
I don’t claim to know the answers. I am not a teacher, but I do remember how miserable I was in school and how little it really taught me. Hopefully, the impact of chatbots like ChatGPT will prompt a more thorough evaluation of how we approach education. Because if we keep clinging to old methods, then nobody will benefit in the long run, especially kids.
This is a video from my YouTube channel, Jack’s World.
This video explores my thoughts, feelings, and overall sentiments surrounding AI art. You might have noticed that some of my more recent videos have been incorporating more AI art lately. I do so with the understanding there is a fair amount of controversy and concern surrounding this technology. I do my best to address that while offering what I hope is a balanced perspective on AI art and the use of generative AI. Enjoy!
I know it’s been a while since I talked about Neuralink. A few years ago, I followed news about this endeavor closely because I genuinely believed that brain/computer interfaces were an important technology to develop. These days, I still believe that. I think it has only become more vital as the progress in artificial intelligence has accelerated.
However, there’s a reason why I haven’t talked much about Neuralink in quite a while. I admit some of that is because some pretty significant world events got in the way of following technology trends. But a much bigger part of my disinterest came from the man behind Neuralink, Elon Musk.
The long and short of it is this. I once respected and admired Elon Musk for his efforts to develop new and emerging technology. But then, once I started scrutinizing him, his background, his business practices, and his tendency to make ridiculous promises that he can never keep, I came to the conclusion that Elon Musk isn’t just an unscrupulous businessman. He’s an asshole.
Believe me, there’s a lot I could say about him, but I prefer not to. Insulting someone with billions of dollars and access to high-priced lawyers is risky these days. I’ll just say that the combination of COVID-19 and buying Twitter didn’t really break Musk. It just exposed the kind of person he really is and that person is not one worthy of respect.
Now, with respect to Neuralink, there is still some real value behind the company, the efforts, and the technology. Integrating the human brain with technology isn’t just a promising field. It might very well be necessary if humans are to adequately adapt to a future in which there will likely be machines much smarter and more capable than any person who has ever lived. Even if AI never becomes as intelligent as an average human, having humans that can interact with it on a more intimate level could fundamentally change our species and our society for the better.
And to that end, Neuralink has officially taken a critical first step. According to NPR, the company has successfully implanted its first brain chip into a human participant. We don’t know many details, nor do we know the identity of the person who received the chip. We only know what was conveyed in the announcement, which is rarely something you can take at face value with Elon Musk.
But even if you don’t trust Musk, and you shouldn’t, this feat has been in the works for a while. Neuralink has been actively recruiting volunteers for implants for over a year now. And this effort was authorized by the United States Food and Drug Administration. The primary participants, at least for now, are those who have suffered brain or spinal cord injuries. So, it’s not like Neuralink is accepting applications from those who just want a brain chip for no reason.
That approach is to be expected. Even Musk has said that the initial efforts with Neuralink will focus on helping paraplegics or those suffering from conditions like ALS. If and when the technology matures, then it’ll expand access to other users, but still for therapeutic purposes. Eventually, it would get to a point where brain implants aren’t just treatments. They would be like smartphones that people willingly purchase and have implanted.
And it’s that last part where I believe proponents of this technology should temper their hopes. Until we know more about the patients, the nature of the brain implants, and the impact on the participants, nobody should be eager to get one themselves. These brain implants are not about to become the next iPhone. This is a technology that needs much more investment, refinement, and development.
And if history is any guide, you really shouldn’t trust someone like Elon Musk to deliver that level of advancement. Despite the persona he tries to convey, he is not Tony Stark. He is not a brilliant scientist or inventor. He just hires those kinds of people and has a nasty habit of screwing them over.
He is, and always has been, a salesman first. He may very well be a genuine futurist trying to make a better future, but his tendency to get into petty feuds on social media and say objectively dumb things should give everyone pause. But at the end of the day, he’s an obscenely wealthy, incredibly out-of-touch businessman.
That means he likely sees Neuralink as just another business that he hopes will make him even richer than he already is. Even if he believes in the inherent value of the technology, he will exploit it for billions if it ever becomes a viable commercial product. That’s just how billionaires operate. And given the distressing tendency for billionaires to be psychopaths, it would be unwise to give Elon Musk or anyone like him access to your brain.
Now, I’ll say it again. This technology is important and what Neuralink achieved is a vital first step. The successful operation of one brain implant means it’s no longer on paper. This technology is officially real. Like the first person who drove a car or the first person to fly a plane, this is a major leap in our ongoing efforts to develop better and bolder technology.
We don’t know where this feat will lead or whether it’ll pan out in any meaningful way, but it’s worth being hesitant and a little extra cautious. It’s not wrong to trust in the sincere efforts of people who want to improve the lives of others. But it’s always wise to be skeptical of the intentions of unscrupulous billionaires with inflated egos.
Things tend to move quickly in the world of technology and artificial intelligence. And since the arrival of ChatGPT, the pace seems to have accelerated. I try to keep up with it as best I can. And I only really write or make videos about it if I have something meaningful to contribute.
But there are some stories I follow more closely for personal reasons. One such story has been the controversy surrounding the AI-generated George Carlin special that came out in early January 2024. I certainly hope I made my feelings on that issue clear. As much as I support the development of AI, I felt like this was in really poor taste.
It didn’t sound much like George Carlin. The humor, while funny at times, didn’t hit as hard as Carlin’s unique brand of comedy. And, most importantly, it was done without the consent of George Carlin’s family, namely his daughter, Kelly Carlin. It’s one thing to impersonate a dead celebrity with living relatives. It’s quite another to use an AI to create what could amount to a post-mortem piece of work that’s done in the mold of that celebrity.
Well, not surprisingly, Kelly Carlin didn’t stop at just scorning the creators of this AI special. According to the Hollywood Reporter, she filed a lawsuit against the creators of the comedy special on the basis that it’s an unauthorized use of George Carlin’s copyrighted work, while also illegally using Carlin’s name and likeness for promotional purposes.
Make no mistake. This case will be of immense interest to those concerned about how we use, abuse, and develop AI in the future. This, on top of new concerns about deepfakes of artists like Taylor Swift, is set to create some major precedents. And they’re bound to affect users, companies, and the nature of the AIs themselves.
However, the filing of this lawsuit has added a new wrinkle to this controversy that is definitely worth acknowledging. It has since come out that the script for this “special” wasn’t a result of an AI generating it from a compilation of all of Carlin’s past works, as claimed by the creators. It was entirely written by a human named Chad Kultgen.
And, with all due respect to Chad Kultgen, he is no George Carlin. That really showed in the special.
Also, fuck Chad Kultgen for thinking he could match Carlin’s brilliance or even the capability of a non-human AI.
Now, this complication isn’t going to stop the lawsuit. If anything, it might strengthen the case of Carlin’s estate because it makes this special a much clearer violation of illegally using someone’s likeness or brand for promotional purposes. It might even rise to the level of fraud, since the special claimed to be the product of an AI but was actually just some guy trying to write a special and using George Carlin’s likeness to sell it.
And, regardless of how you feel about the legal implications here, that’s still a dick move. I can’t imagine that makes this any less difficult for Kelly Carlin or the rest of the Carlin family.
But, based on what we now know in the wake of this lawsuit, the only thing AI was used for in making this special was to create the not-so-compelling voice that delivered this human-made script and the images included within the video. In terms of capabilities, that’s not a very impressive use of AI. If anything, that makes this special even less impressive.
One of my biggest criticisms of it was that it sounded nothing like Carlin in terms of tone and style. Considering that there are many existing AI tools right now that can closely mirror the voices of celebrities, this just makes the creators of this special seem cheap and lazy. But the script being entirely human-written does help add another important perspective.
As impressive as AI is at the moment, it is a long way from fooling anyone into thinking it can mimic the nuance or mannerisms of a human, let alone one as special as George Carlin. I do wonder if those behind this special actually tried, but quickly realized it couldn’t be done. I also wonder if someone even more unscrupulous will actually try again at some point, possibly with better AI tools.
If they do, then this lawsuit becomes even more important. Because the law tends to move much slower than technology. And if AI continues to advance, then incidents like this will become more common. For now, only time will tell what this will mean for the future of AI and those it impacts, living or dead.
There are certain celebrities and pop culture icons that we tend to appreciate more after they’ve passed away. Like everyone else, they too must face death at some point in their illustrious lives. And when they do, all the work they did becomes inherently more precious because it is now finite. There can be nothing new added to their body of work. It’s as set as the strongest bedrock. It might be upsetting to many fans, but it’s just a fact of life.
Then, artificial intelligence comes along and suddenly, there’s a crack in that bedrock.
Recently, a couple of comedians utilizing an unspecified AI created an hour-long comedy special featuring the late, great George Carlin. That “special,” and I use that term loosely, is currently on YouTube right now. For reasons that I hope are understandable, I’m not going to link to it. Just know that, as of this writing, it has racked up over 427,000 views and 11,000 likes.
Now, in general, I support the development of artificial intelligence. I still genuinely believe it is the most important technological advancement we could develop as a species. But this particular manifestation of its use has me both concerned and distressed.
For one, George Carlin is my favorite comedian. I would go so far as to call him the single greatest comedian of all time. And that’s an opinion many share. Ask most people who the best comedian of all time is and George Carlin is usually in the top five alongside the likes of Richard Pryor, Lenny Bruce, Rodney Dangerfield, Sam Kinison, and Robin Williams.
To me, George Carlin is the best and there’s no close second. He’s one of those comedians who didn’t just find success over the course of multiple decades. You can still go back and watch his specials from the 1970s and his material is still relevant. His jokes, his style, and his brilliant insights are still on point, as well as hilarious.
George Carlin really was a one-of-a-kind artist, comedian, and philosopher. And when he passed away in 2008, his death hit a lot harder than most. But his legacy has only strengthened in that time as more people discover or re-discover his comedy. But this AI version of him is sure to complicate that legacy and for all the wrong reasons.
For one, this “special” was done without permission, review, or compensation to George Carlin’s family. His daughter, Kelly Carlin, has gone on record as stating that she did not authorize this, let alone receive compensation for it. This was not an instance of a celebrity willingly licensing their voice or likeness to a company, as James Earl Jones did with Disney in 2022. This was done without any consideration whatsoever for how Carlin’s family might react to this or what it might do to his legacy.
As for the special itself, I do admit I have listened to it. And I can confirm that it sounds nothing like George Carlin. The voice sounds like a terrible impersonation of George that someone half-drunk would try to do on a five-dollar bar bet. I promise nobody is ever going to mistake the voice in this “special” for George Carlin. I don’t know if that’s by design or just a limit of the current technology.
But with regards to the actual substance, this is where it gets even more distressing. Because in many instances, there are jokes and comments that actually sound like something Carlin would say. Yes, they lack his distinct delivery, tone, and mannerisms. But many of the comments on politics, pop culture, people, and society really do feel like they might come up in a real special.
That said, there are also instances where it says things Carlin probably wouldn’t say. There’s nothing too egregious. He never says anything that sounds so out-of-character that it’s jarring. But it’s also clear that the content was edited. There are distinct breaks in the audio that indicate someone edited this to sound more refined. That leads me to believe that whatever AI was used to make this definitely conjured some content that would’ve been too obviously out of character.
I’ll even go so far as to say that there are some parts of the special that are funny. It still doesn’t resonate as well as one of Carlin’s classic comedy specials. But there are jokes in this special that probably will make some people laugh. It’s not entirely clear if those jokes were purely the product of the AI or if those behind it tweaked the language. But the humor is there.
Be that as it may, just listening to a small part of this special feels off. Beyond just sounding painfully synthetic, it establishes a precedent that the estates of other dead celebrities would be wise to consider. Because George Carlin isn’t the only one with a long catalog of comedy, writings, interviews, and movie credits.
Who’s to say that an AI like this won’t do the same for Robin Williams or Richard Pryor? It wouldn’t be restricted to comedians, either. The works of musicians like John Lennon, Kurt Cobain, Michael Jackson, and Elvis Presley could also be used to create entirely new songs that might sound eerily similar to something they might have created.
If that happens, what does it do to their legacy? What happens if an AI creates something in the mold of a dead celebrity that says or does something outrageous, offensive, or detrimental? What would that do to their legacy, and to their still-living families?
I honestly can’t imagine how someone like Kelly Carlin must feel hearing a poor imitation of her dead father. But I can easily imagine other celebrity heirs or estates willingly licensing their dead relatives to make extra money somehow.
It could mean we’ll be seeing more strange, distressing content like this in the coming years. And as AI technology continues to improve, it will likely get to a point where it’s so capable that it’s almost too believable. I don’t claim to know what kind of world that might create or lead to. I just know that “specials” like this won’t stop with George Carlin.
One of the biggest stories in 2023 was the SAG-AFTRA strike, which also coincided with the WGA strike. It was one of the most impactful and extensive labor disputes in recent history. Some would even argue it was the most important strike to ever occur in the entertainment industry. I’m certainly inclined to agree with that. That’s part of why I felt compelled to mention it.
The strike by both unions had many issues of concern, but one of the biggest had to do with artificial intelligence and emerging technology like generative AI. Both writers and actors alike were concerned that studios would start using AI to effectively replace them or use their past works to create derivatives without any compensation.
It was a very legitimate concern. I’m of the opinion it was the single most pressing issue for everyone involved in the entertainment industry. Artificial intelligence is no longer just the domain of sci-fi stories. Even existing tools, with all their limitations, are capable of producing music, art, literature, and various other forms of content. Future tools promise to be even more effective.
Eventually, in the not-so-distant future, AI tools might get to a point where they can produce content at or near the quality of any human. They don’t even need to achieve human-level intelligence. They just need the right amount of refinement, investment, and engineering.
But even with this pressing issue, both the WGA and SAG-AFTRA largely won their dispute. They were able to get a contract that included provisions protecting them from future encroachments by AI. That is certainly an important, worthwhile consideration. It helps ensure that those working in the entertainment industry can continue to make a living.
Naturally, SAG-AFTRA’s more recent deal to let studios use AI-generated voice replicas in video games has caused some concern and consternation, especially among voice actors who do a great deal of work in gaming. On the surface, it seems like an about-face, a manifestation of the very thing they were protesting against during the strike: letting studios use AI to replicate iconic voices or figures in certain productions.
However, I don’t think it would be accurate to say this is a complete reversal. For one, SAG-AFTRA makes it clear that the intention of the deal is to ensure that those whose voices are licensed and utilized by AI receive appropriate compensation. This way, studios can’t just use a bunch of recordings of a famous voice and stick them in a game without paying the appropriate people. That is perfectly reasonable.
At the same time, there’s no denying that this will undermine current and future voice actors working within the gaming industry. The process for voice acting in most modern games can be extensive and expensive. A high-profile game like Grand Theft Auto V cost over $250 million to develop, unadjusted for inflation, and that game involved a lot of voiceover work.
Developing video games has only become more expensive over time, much more so than movies, TV shows, or music. Some are already estimating that the development of Grand Theft Auto VI could exceed $2 billion when all is said and done. No matter how you feel about video games or their development, that is not sustainable.
In that context, it makes sense to allow the use of AI tools to help cut down on that cost, especially as games grow in scope and computing power increases. And it will certainly benefit those who have iconic, established voices in the video game industry.
But for those who aren’t in the industry yet or just aren’t well-known, this could be the first step in rendering voice acting in video games an unviable career path. It could become to acting what switchboard operators were to telephones.
I don’t doubt for a second that those in SAG-AFTRA are aware of this. They have to know on some level that AI technology will continue to advance. They can’t fight it. They can’t stop it. They can’t prevent it from becoming more and more capable with each passing year.
And unlike video or artwork, mimicking voices is a more mature technology. It’s something that doesn’t really need much refinement to replace the work of a human voice actor. If they hadn’t made this deal, it only would’ve become harder to work around with each passing year.
Even if this deal is ultimately beneficial to actors and voice actors, it should still be a concern. It sets an early precedent for how unions, studios, and actors are attempting to accommodate advances in AI with their profession. While this primarily impacts the gaming industry, it is bound to affect others eventually.
There will come a point where AI is capable of replicating the imagery, mannerisms, and presence of any living actor. There will also come a point where AI is capable of producing animated content that’s difficult to distinguish from the kind that’s traditionally produced. What happens to the industry and those who work within it at that point? Would a deal like the one SAG-AFTRA made for gaming work? Or will it only delay further disruptions?
I don’t claim to know the answer. But if you’re involved in the entertainment industry in any way, this warrants serious attention. The existing AI tools we have now are plenty disruptive. They’re only going to get better with time, investment, and better engineering. The incentives are strong and at this point, there’s no putting the AI genie back in the bottle.
This technology isn’t just going to affect our lives. It’s going to affect how we make a living. If we don’t find a way to coexist with it, then that will only cause greater problems down the line. And it will impact far more than video game development.
This is a video from my YouTube channel, Jack’s World.
This video explores a major chapter in the history of the internet to help inform what may lie ahead for the future. And for those of a certain demographic, one of the most disruptive programs during the early days of the internet was Napster.
This famous/infamous program completely upended the music industry. It also triggered lawsuits and upheavals from which the industry has never recovered. How it all played out was complicated, but the end result set a precedent that has helped shape the course of the internet.
Now, with the rise of generative AI, a similar predicament is playing out. And we would all be wise to heed the lessons learned from Napster.
We live in strange, tense, and frustrating times. I know you could say that about any point in any era. But it feels like we’ve been saying that a lot lately. I know I have. Just look at some of the posts I’ve made on or near Election Day in the United States. But as someone who lives less than two hours from Washington DC, I tend to feel the politics of these times more than most. And I’ve been around long enough to see some strange and troubling trends.
Then, there are certain acts or phenomena that are just plain stupid on a level that defies parody.
I generally try to empathize and understand where other people are coming from, especially if they have a different background or ideology from my own. There are just some instances where that’s not possible. The breadth of the stupidity is just too great.
That’s exactly how I feel about book bans. For reasons that are too fucking idiotic for me to paraphrase, there are real people living in real places in the United States of America who are advocating for book bans. Some are going so far as to burn them.
Again, this is not 1933. This is happening in 2023. That point is worth belaboring.
Now, I don’t want to name names or organizations. But you don’t have to look far to see who’s advocating for book bans. You also don’t have to dig too deep to uncover what sort of ideology they subscribe to.
Here’s a hint. It’s the same ideology that requires stormtroopers, secret police, and prison camps.
But all you really need to know is that these efforts are usually the ones the villains in every TV show, book, or movie get behind. They see people reading books with ideas they don’t like. They worry that those same people, who include children and young adults needed for factories and war zones, will embrace or identify with those ideas.
But rather than confront those ideas, the book banning advocates would just prefer that people never know about those ideas in the first place. The evil, sadistic logic is that if people never read about it, then they can never think about it. And if they can never think about it, then they’re easier to control and guide.
That may not be the reason book banning advocates state out loud, but that is the effect. They’ll usually frame it as “protecting children” or “combating obscenity.” But don’t fall for that. At the end of the day, those who seek to ban books just want to eliminate ideas and stories they don’t like from the public consciousness.
That’s not conducive to protecting children and fostering a healthy society.
That’s a tactic for fascists, authoritarians, dictators, and general assholes.
Now, those tactics were certainly damaging in the past. Until very recently, books were the primary source of important information. If people didn’t have access to books, then they didn’t have access to knowledge, stories, and new ideas. Finding or preserving banned books used to take a concerted effort and many brave individuals put their lives at risk to further those efforts.
However, what makes modern book bans especially stupid is the simple fact that the internet exists. Libraries and book stores are no longer the lone repositories of knowledge and stories. Anyone with a smartphone can access more knowledge in five seconds than an entire university of academics could 50 years ago.
At this point, trying to ban books is akin to trying to censor telegrams. All they achieve now is raising the profile of the books they’re trying to ban. Hell, the book banning advocates might as well be free advertising because sales of banned books tend to spike whenever they bitch and moan about certain titles.
So, in addition to being a dick move and a tactic used primarily by fascists, it’s completely counterproductive. It wastes time, money, and energy, and it has the opposite of the intended effect. With that in mind, I have just one last message for those who still think banning books is a worthwhile endeavor.