Major litigation involving artificial intelligence (AI) is developing rapidly, centered on copyright infringement, defamation, and liability arising from the data used to train AI models and the outputs those models produce. Notable cases include lawsuits filed by artists, authors, and news publishers against prominent AI developers such as OpenAI, Stability AI, and Microsoft.
Copyright and intellectual property lawsuits
These cases challenge the unauthorized use of copyrighted material to train generative AI models and the creation of derivative works. The central debate often revolves around the legal concept of "fair use".
Authors Guild v. OpenAI: A class-action lawsuit filed by the Authors Guild and multiple authors, including George R. R. Martin, alleging that OpenAI used their copyrighted books to train large language models (LLMs) without permission or compensation.
Getty Images v. Stability AI: A legal battle in which Getty Images accused Stability AI of copying millions of its copyrighted photographs without a license, and of reproducing Getty's watermarks in some AI-generated outputs. A UK court largely rejected Getty's copyright claims in 2025, but parallel litigation in the United States continues.
Andersen v. Stability AI: A class-action lawsuit brought by artists Sarah Andersen, Kelly McKernan, and Karla Ortiz against Stability AI, Midjourney, and DeviantArt. It alleges that the companies used copyrighted images to train their AI models and that the resulting outputs are infringing derivative works.
The New York Times v. OpenAI: The New York Times sued OpenAI and Microsoft, alleging the unauthorized use of millions of its copyrighted articles to train AI models. The complaint contends that AI-generated responses can reproduce Times content verbatim and compete directly with the publication.
Thomson Reuters v. ROSS Intelligence: In this case, Thomson Reuters alleged that legal research startup ROSS Intelligence copied its proprietary legal summaries (headnotes) to build a competing AI-powered service. In 2025, a federal court sided with Thomson Reuters, finding that the use was not fair use.
Defamation and liability cases
These lawsuits address situations where AI models have produced false information (known as "hallucinations") that is allegedly defamatory.
Walters v. OpenAI: In this early AI defamation lawsuit, Georgia radio host Mark Walters sued OpenAI after a ChatGPT-generated response falsely stated that he had embezzled funds. In 2025, a Georgia state court granted summary judgment to OpenAI, finding that Walters could not establish the elements of a defamation claim under Georgia law.
Clark v. OpenAI: This case, also filed against OpenAI, involves allegations that ChatGPT falsely claimed a lawyer was disbarred. The case is being closely watched as courts determine the extent to which AI developers can be held liable for false statements generated by their models.
Antitrust and market power
Some litigation has accused large tech companies of using their market dominance to unfairly benefit their AI products.
Chegg, Inc. v. Google LLC: Educational publisher Chegg filed an antitrust lawsuit alleging that Google used Chegg's proprietary educational content, without compensation, in AI-generated search results that divert traffic from the original sources. The suit claims these practices unjustly enrich Google, harm competition, and threaten the production of high-quality educational content.
Other notable AI-related cases
Martinez-Conde v. Apple: In October 2025, two professors sued Apple, alleging that the company used copyrighted books without authorization to train its AI models, echoing claims made in suits against other AI developers.
Intel v. VLSI: While not an AI lawsuit as such, the long-running patent battle between Intel and VLSI Technology highlights the increasing complexity of intellectual property disputes in the tech sector, including patents that may cover AI-related components.