Creatives are fighting back against AI with lawsuits

Will legal action force AI companies to change how they train their programs?

Without AI regulations, artists have turned to lawsuits to push back

Since last fall, experts have touted generative artificial intelligence as both a groundbreaking innovation in tech and a potential threat to humanity. The rapid growth of this nascent technology has outpaced the law, and AI has remained largely unregulated. Some creatives have grown impatient waiting for governments to step in and are banding together to push back against AI companies with a flurry of lawsuits.

A growing number of visual artists, writers, musicians and filmmakers argue that generative-AI companies illegally train their systems with their copyrighted work. They are rebelling against the tide of generative AI in court and with mass petitions. "The data rebellion that we're seeing across the country is society's way of pushing back against this idea that Big Tech is simply entitled to take any and all information from any source whatsoever and make it their own," Ryan Clarkson, the founder of a law firm behind two class-action lawsuits against Google and OpenAI, told The New York Times.

How have creatives challenged AI companies?

AI companies face at least 10 lawsuits from individuals and corporations accusing them of illegally scraping their data to train their systems, the Times reported. Last fall, a group of programmers filed a class-action lawsuit against Microsoft and OpenAI, the company behind ChatGPT, alleging that the companies violated the copyrights on their code. Getty Images sued Stability AI in January, alleging that the company used copyrighted images to train its text-to-image generator.

Recently, a group of writers followed suit. Comedian Sarah Silverman joined authors Christopher Golden and Richard Kadrey in suing OpenAI and Meta, alleging that the companies used pirated copies of their books to train their chatbots. The authors claim the companies scraped their copyrighted data from "shadow library" websites like Bibliotik, Library Genesis and Z-Library. Authors Mona Awad and Paul Tremblay filed a similar lawsuit against OpenAI around the same time.

For artists and writers acting on their own, lawsuits can be both time-consuming and expensive. For some, "fighting back against AI systems has meant rethinking where they publish," the Times added. After a group of fan-fiction writers found that ChatGPT was mimicking the style of their stories posted on the fan-fiction database Archive of Our Own, they removed their content and "wrote subversive content to mislead the AI scrapers."

Over 9,000 authors signed an open letter from The Authors Guild that called out the "inherent injustice" in generative-AI companies "exploiting" their work without consent or compensation. The authors include Margaret Atwood, James Patterson and Jodi Picoult.

How could this shape the future of generative AI?

The spate of lawsuits and proposed regulations could "pose the biggest barrier yet" to the integration of AI tools, The Washington Post stated. "Eventually, AI companies will rely on more curated data sets for training. The current practice of scraping unfiltered data from the internet will seem archaic," Margaret Mitchell, the chief ethics scientist at AI startup Hugging Face, told the Post. The system needs to change, and it's "unfortunate that it needs to change via lawsuits, but that is often how tech operates," Mitchell added. She also said she wouldn't be surprised if the lawsuits or new regulations forced OpenAI to delete one of its models by the end of the year.

The New York Times pointed out that the "data protests may have little effect in the long run." The "deep-pocketed tech giants" already hold vast amounts of "proprietary information and have the resources to license more," the outlet added. However, "as the era of easy-to-scrape content comes to a close," smaller companies might have difficulty getting enough data to train their systems, the Times opined.

Dr. Rumman Chowdhury, a Responsible AI Fellow at Harvard University, told NPR that there's a lot of talk about regulations, but nothing concrete has come out yet. Whatever happens, the path forward won't be easy. "Some of it will be litigated, some of it will be regulated, and some of it people will literally just have to shout until we're heard," she said.
