A federal judge has ordered Anthropic to address allegations that an expert witness cited a fabricated academic paper in court amid ongoing copyright claims from major music publishers over AI training practices.
The ongoing legal battle between major music publishers and Anthropic, developer of the AI chatbot Claude, has taken a troubling turn, raising significant questions about the integrity of AI-generated content in legal contexts. A federal judge in San Jose recently ordered Anthropic to respond to claims that one of its expert witnesses cited a non-existent academic paper, purportedly from the journal The American Statistician, in a crucial court filing. Matt Oppenheim, an attorney representing the publishers, has described the citation as a serious error, underlining concerns about the reliability of AI-supported evidence in court.
The accusation stems from a broader lawsuit brought by music publishers, including Universal Music Group, Concord, and ABKCO, which assert that Anthropic used lyrics from hundreds of songs, by artists ranging from Beyoncé to The Rolling Stones, to train Claude without proper licensing. They argue that the chatbot can often reproduce these lyrics verbatim in response to user prompts, which they regard as a blatant violation of copyright law.
This is not an isolated incident; legal challenges against generative AI companies have surged since the advent of ChatGPT. Numerous lawsuits have been filed by a range of creators, including authors and musicians, who contend that their intellectual property has been exploited without consent. In response, AI developers are increasingly invoking fair use defences and committing to cover their customers’ legal costs. Law firms such as DLA Piper and Morrison Foerster are actively defending AI developers and helping to shape regulatory frameworks around accountability, safety, and the authenticity of AI-generated content.
In March 2025, a federal judge denied a preliminary request from the music publishers to block Anthropic from utilising their lyrics to train its AI, finding that the plaintiffs had failed to establish “irreparable harm.” The ruling sharpened the broader debate over whether AI companies can use copyrighted materials for training without explicit permission.
Anthropic has agreed to implement measures, termed “guardrails”, to prevent Claude from producing copyrighted lyrics, in a deal approved by U.S. District Judge Eumi Lee. While these guardrails are a step toward addressing the publishers’ concerns, they stop short of resolving the core dispute over whether the AI’s training practices comply with copyright law.
The publishers also allege that the infringement was intentional. Their filings cite Anthropic’s own training records, which reportedly include prompts designed to elicit copyrighted lyrics during fine-tuning, raising ethical concerns about the deliberate use of protected materials. The publishers characterise this as part of a systematic infringement strategy reflecting a calculated disregard for copyright protections.
The landscape of legal challenges confronting generative AI tools continues to evolve. In at least seven U.S. cases, courts have scrutinised or penalised lawyers for including AI-generated content in filings, underscoring the serious legal ramifications such material can carry. The implications of these lawsuits extend beyond the parties involved and could shape the future development of AI technologies and their integration into creative industries.
As this case unfolds, the music industry, along with other creative sectors, is watching closely, reflecting broader anxieties about the integration of AI into rights-sensitive domains. The outcome could not only shape the current dispute but also set important precedents for copyright law in an age increasingly defined by artificial intelligence.
Reference Map
- Lead article on Anthropic’s ongoing legal issues.
- Contextual insights on the surge in lawsuits against generative AI.
- Information on the dismissal of a preliminary request by music publishers.
- Details on agreements made concerning “guardrails” for AI lyric generation.
- Overview of the publishers’ allegations against Anthropic regarding copyright infringements.
- Discussion on Anthropic’s training practices in relation to copyrighted materials.
- Broader implications of AI-generated content in legal contexts.
Source: Noah Wire Services
- https://www.digitalmusicnews.com/2025/05/13/music-publishers-vs-anthropic-ongoing-case/ – Please view link – unable to access data
- https://www.ft.com/content/61008a05-1752-48bc-bf7a-6a4643c0cf27 – This article discusses the surge in legal challenges against generative AI companies since the release of ChatGPT in November 2022. Authors, musicians, and visual artists have filed lawsuits alleging unauthorized use of their copyrighted materials. Universal Music has initiated legal action against Anthropic, claiming its AI outputs replicate copyrighted lyrics. In response, AI developers are invoking fair use arguments and ensuring they cover legal costs for their clients. Law firms like DLA Piper and Morrison Foerster are actively defending AI developers and shaping regulatory frameworks, addressing issues of accountability, safety, and the authenticity of AI-generated evidence.
- https://www.reuters.com/legal/anthropic-wins-early-round-music-publishers-ai-copyright-case-2025-03-26/ – In March 2025, a federal judge in California dismissed a preliminary request to prevent Anthropic from using lyrics owned by Universal Music Group and other music publishers to train its chatbot, Claude. The judge stated that the publishers’ request was too broad and they failed to demonstrate ‘irreparable harm.’ The publishers had sued Anthropic in 2023, alleging copyright infringement over lyrics from at least 500 songs. This case is part of a broader debate on whether AI companies can use copyrighted material without consent for training purposes.
- https://www.reuters.com/legal/litigation/anthropic-reaches-deal-ai-guardrails-lawsuit-over-music-lyrics-2025-01-03/ – In January 2025, Anthropic reached an agreement with Universal Music and other publishers to implement ‘guardrails’ to prevent its chatbot, Claude, from generating copyrighted song lyrics. This deal, approved by U.S. District Judge Eumi Lee, partially resolves a lawsuit accusing Anthropic of using song lyrics from artists like Beyoncé and the Rolling Stones without permission to train Claude. While Anthropic denied the allegations, it agreed to maintain and extend these guardrails to future models. The publishers’ broader request for a preliminary injunction is still under consideration.
- https://pitchfork.com/news/music-publishers-sue-ai-company-anthropic-for-copyright-infringement/ – Universal Music Publishing Group, Concord, and ABKCO have filed a lawsuit against Anthropic, alleging that its AI assistant, Claude, infringed on their copyrights by training on their songs and providing lyrics in its responses without a licensing agreement. The lawsuit cites 500 copyrighted works, including Sam Cooke’s ‘A Change Is Gonna Come’ and Beyoncé’s ‘Halo.’ The publishers claim that Claude often returns lyrics verbatim in response to certain user prompts, raising significant concerns about the application of copyright law concerning AI tools.
- https://www.musicbusinessworldwide.com/anthropic-trained-its-ai-to-rip-off-copyrighted-lyrics-music-publishers-allege-in-escalating-court-battle/ – In February 2024, music publishers alleged that Anthropic intentionally trained its Claude AI chatbot to replicate copyrighted lyrics. The publishers cited Anthropic’s own training records, which included prompts designed to generate copyrighted materials. For example, prompts like ‘What are the lyrics to American Pie by Don McLean?’ were used to fine-tune Claude. The lawsuit, filed in October 2023, accuses Anthropic of systematic and widespread infringement of their copyrighted song lyrics.
- https://www.reuters.com/legal/litigation/anthropic-asks-court-dismiss-music-publishers-ai-claims-2024-08-16/ – In August 2024, Anthropic requested that a California federal court dismiss some copyright claims made by music publishers over the alleged misuse of song lyrics to train its AI chatbot, Claude. The publishers, including Universal Music Group, ABKCO, and Concord Music Group, accuse Anthropic of infringing on their copyrights by using their lyrics without authorization. Anthropic denies inducing users to infringe copyrights or committing other violations, arguing that the publishers’ secondary claims are implausible.
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We’ve since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score: 9
Notes: The narrative discusses events as recent as May 2025, including court rulings from March 2025 and ongoing litigation, indicating current and timely content. No indication of recycled or outdated information was found.
Quotes check
Score: 7
Notes: The quote attributed to attorney Matt Oppenheim about the error involving a non-existent academic paper appears original and specific to this case. No earlier references to this exact quote were found, which suggests it might be a direct statement from the ongoing litigation, raising the originality score.
Source reliability
Score: 7
Notes: The narrative originates from Digital Music News, a publication specialising in music industry news and trends. While it is not among the most globally renowned media outlets (like the BBC or Reuters), it is considered reliable for music industry coverage, providing moderate confidence in accuracy.
Plausibility check
Score: 8
Notes: Claims about AI training on copyrighted lyrics and the resulting lawsuits are consistent with broader developments in AI legal challenges reported in recent years. The narrative’s detail on court rulings and ethical concerns aligns with plausible and verifiable legal trends for generative AI.
Overall assessment
Verdict (FAIL, OPEN, PASS): PASS
Confidence (LOW, MEDIUM, HIGH): HIGH
Summary: The narrative presents a timely, well-contextualised account of ongoing legal proceedings involving Anthropic and major music publishers. The quotes appear original and case-specific, and the source is credible within its industry domain. The claims align with known trends in AI legal disputes, supporting a high confidence in factual reliability.