
An AI-Generated Comic Was Denied Copyright Protection (And Why That’s A Big Deal)

Every now and then, a story slips under the radar, overshadowed by a larger but closely related story. The larger story makes more headlines and attracts more attention, but the smaller one might end up having a far more lasting impact.

That seems to be happening a lot with news involving artificial intelligence and the various AI tools that have emerged in recent years. I’ve already talked about plenty of them, giving my opinion on the rise of ChatGPT and showing off some AI-generated artwork I made. There’s so much going on in this field that it’s hard to keep up with, let alone discuss.

But recently, one of those little stories caught my attention. It involves the same AI-generated art I mentioned earlier, as well as comic books, something I’ve shared my passion for in many forms. And it’s a story that I don’t think is getting enough attention.

It has to do with Zarya of the Dawn, a comic created by Kris Kashtanova. It’s not published by Marvel, DC, Dark Horse, Image, or any other mainstream comic publisher. You can actually download it right now for free. But what makes this comic different isn’t the story, writing, or style. It’s how it was made.

This comic was written by a person.

However, all the artwork inside was created with AI art-generation tools, most notably Midjourney.

That, in and of itself, is quite remarkable. The visuals within this book are certainly eye-catching. They might not rank on the same level as a Jim Lee or a Jack Kirby, but it’s a solid visual spectacle that brings the story to life.

For people like me, who cannot draw and don’t have the money to pay artists to depict the stories we want to tell, this is truly remarkable. I would go so far as to say it’s genuinely exciting. It shows just what’s possible with these tools. A writer with no drawing skills was able to produce this comic using only text prompts fed into an AI art-generation tool. And the end result is stunning.

But this is where the story takes a turn. When Kashtanova attempted to copyright this comic, the US Copyright Office issued a surprising decision that might very well set a major precedent moving forward. Because the comic used AI to create the artwork, it could not be granted copyright protection. This was the exact statement, according to Ars Technica:

“We conclude that Ms. Kashtanova is the author of the Work’s text as well as the selection, coordination, and arrangement of the Work’s written and visual elements. That authorship is protected by copyright. However, as discussed below, the images in the Work that were generated by the Midjourney technology are not the product of human authorship.”

That last sentence is the section with the biggest implication. This is the US Copyright Office stating outright that images and artwork created by AI can’t get copyright protection. That means that any piece of AI art you create, for whatever reason, can’t be owned by you in any legal sense. Because technically, you didn’t make it. The program made it for you.

Without getting too deep into the legal issues, I don’t think enough people realize the ramifications this might have for the future of the comics industry and for the art industry as a whole. On the comics side, there are actually two sides to consider.

On one side, this technology will allow ordinary people with little to no art skills to produce comics with quality artwork. People who never once had the skills or means to make comics could suddenly start producing them on their own, without a publisher or a skilled artist.

That means many great comics that wouldn’t have otherwise been made can be made. Great stories that once only existed with words could be brought to life through beautiful renderings.

But on the other side, the absence of copyright protection is an issue. Yes, these comics could bring to life amazing stories. However, the creators won’t be able to monetize their work, nor would they be able to stop others from using it for their own ends.

That means that, in theory, you or I could create a beautiful comic with this technology. It could find a massive audience and become a beloved story with countless fans. Then, a big company like Disney or Warner Brothers could come in, take the story and the depictions, and basically turn it into their own entertainment product. And since they have more resources and better lawyers, the creators likely wouldn’t get a penny of the profits.

This story also doesn’t account for how those same companies might use this technology to further undercut their workers and creators. Comic companies already have a not-so-great reputation for screwing over writers and artists who create iconic characters. Just look at what happened between Jack Kirby and Marvel for a hint of those issues.

If these same companies can use this same AI technology to produce more comics while not having to pay their artists or writers as much, they will do it. They’re a business. They’ll jump at any chance to pay less to get more. It’s cold, callous, and uncreative. But that’s the world we live in.

Add other tools like ChatGPT into the mix and it’s entirely possible that an AI could create an entire comic from scratch. And everything within it, from the art to the story to the characters, could not be copyrighted in any way. At a certain point, the AI might get so good that it would be hard to tell if there was ever a human creator to begin with.

These are all strange scenarios, equal parts exciting and distressing. We’re already seeing so much change as a result of these new tools, but I don’t think we’ve even seen a fraction of what’s possible. As AI technology improves, art generation and storytelling will change a great deal. The comics industry is more vulnerable than most, as Zarya of the Dawn just proved.

It’s hard to know what this will lead to. But whatever happens, it all started with this story and the precedent it set.


Filed under AI Art, Artificial Intelligence, ChatGPT, superhero comics, technology, writing

Big Tech, AI Research, And Ethics Concerns: Why We Should All Worry

In general, I root for technology and technological progress. Overall, I believe it has been a net benefit for humanity. It’s one of the major reasons why we’ve made so much progress as a global society in the past 100 years.

I’ve sung the praises of technology in the past, speculated on its potential, and highlighted individuals who have used it to save millions of lives. For the most part, I focus on the positives and encourage other people to have a less pessimistic view of technology and the change it invites.

That said, there is another side to that coin, and I try not to ignore it. Like anything, technology has a dark side. It can be used to hurt just as much as it can be used to help, if not more so. You could argue that we couldn’t have killed each other at such a staggering rate in World War II without technology.

It’s not hyperbole to say that certain technology could be the death of us all. In fact, we’ve come distressingly close to destroying ourselves before, namely with nuclear weapons. There’s no question that kind of technology is dangerous.

However, artificial intelligence could be far more dangerous than any nuclear bomb. I’ve talked about it before and I’ll likely bring it up again. This technology just has too much potential, for better and for worse.

That’s why when people who are actually researching it have concerns, we should take notice. One such individual spoke out recently, specifically someone who worked for Google, an organization with deep pockets and a keen interest in Artificial Intelligence.

According to a report from the Associated Press, a scholar named Timnit Gebru expressed serious concerns about Google’s AI research, specifically with how the company handles the ethics of that research. For a company as big and powerful as Google, that’s no trivial concern. This is what the report had to say.

AP News: Google AI researcher’s exit sparks ethics, bias concerns

Prominent artificial intelligence scholar Timnit Gebru helped improve Google’s public image as a company that elevates Black computer scientists and questions harmful uses of AI technology.

But internally, Gebru, a leader in the field of AI ethics, was not shy about voicing doubts about those commitments — until she was pushed out of the company this week in a dispute over a research paper examining the societal dangers of an emerging branch of AI.

Gebru announced on Twitter she was fired. Google told employees she resigned. More than 1,200 Google employees have signed on to an open letter calling the incident “unprecedented research censorship” and faulting the company for racism and defensiveness.

The furor over Gebru’s abrupt departure is the latest incident raising questions about whether Google has strayed so far away from its original “Don’t Be Evil” motto that the company now routinely ousts employees who dare to challenge management. The exit of Gebru, who is Black, also raised further doubts about diversity and inclusion at a company where Black women account for just 1.6% of the workforce.

And it’s exposed concerns beyond Google about whether showy efforts at ethical AI — ranging from a White House executive order this week to ethics review teams set up throughout the tech industry — are of little use when their conclusions might threaten profits or national interests.

That last sentence is the most relevant part. It’s also the greatest cause for concern. I suspect Ms. Gebru is more concerned than most because the implications are clear.

When a tool as powerful as advanced AI is developed, who gets to determine how it’s used? Who gets to program the ethical framework by which it operates? Who gets to decide how the benefits are conferred and the harms are reduced?

Moreover, how do you even go about programming an AI with the right kind of ethics?

That’s a very relative question, and one we can’t avoid if we’re going to keep developing this technology. I’ve tried to answer it, but I’m hardly an expert. Ms. Gebru was definitely in a better position to answer it than I am, or than most other people with a passing interest in this field.

Then she was fired and began expressing her concerns publicly. The fact that Google isn’t facing much in the way of repercussions for this should be concerning. It may also be a sign of the larger challenges we’re facing.

Google, like many other organizations researching advanced AI, is a profit-seeking tech company. They’re not some utopian technocrats. They’re a business that is obligated to keep its investors happy. Advanced AI will help them do that, but what kind of consequences will that invite?

If profit is the primary motivation of an advanced AI, then what happens when it encounters a situation where profit comes at the cost of lives? There are already human-run companies that make those decisions, and people die because of them. An advanced AI will only make it many times worse.

Once an artificial intelligence system is as smart as a human, it’s going to be capable in ways we don’t expect and can’t control. If its ethics and goals aren’t aligned with ours, then what’s to stop it from wiping humanity out in the name of profit?

It’s a distressing thought. It’s probably a thought that has crossed Ms. Gebru’s mind more than once. She may know how close or far we are to that point, but the fact that this is already a conflict should worry us all.

We’ve already become so numb to the greed and excesses of big business. Tech companies may present themselves as teams of future-building visionaries intent on making the world a better place, but the profit motive is still there. Like it or not, profit is still a hell of a motive.

Eventually, artificial intelligence will get to a point where it will either adopt our ethics or choose to formulate its own, which may or may not align with ours. When that happens, no amount of profit may be worth the risk.

Now, we’re still a ways off from an artificial intelligence system on that level, but it’s still quite possible that there are people alive today who will grow up to see it. When that time comes, we need to be damn sure these systems have solid ethical frameworks in place.

If they don’t, we really don’t stand a chance. We’re a society whose members still kill each other over what we think happens when we die, without seeing the irony. Even a marginally advanced AI will have no issue wiping us out if we make doing so profitable.


Filed under Artificial Intelligence, technology