The ongoing legal battle between major music publishers and Anthropic, developer of the AI chatbot Claude, has taken a troubling turn, raising significant questions about the integrity of AI-generated content in legal contexts. A federal judge in San Jose recently ordered Anthropic to respond to claims that one of its expert witnesses cited a non-existent academic paper in a crucial court filing. The citation allegedly pointed to an article purportedly published in the journal American Statistician, and Matt Oppenheim, an attorney representing the publishers, has described it as a serious error, underlining concerns about the reliability of AI-assisted evidence in court.

The accusation stems from a broader lawsuit brought by music publishers, including Universal Music Group, Concord, and ABKCO, who assert that Anthropic used lyrics from hundreds of songs, by artists ranging from Beyoncé to The Rolling Stones, to train Claude without proper licensing. They argue that the chatbot can often reproduce these lyrics verbatim in response to user prompts, which they regard as a blatant violation of copyright law.

This is not an isolated incident: legal challenges against generative AI companies have surged since the advent of ChatGPT, with numerous lawsuits from creators, including authors and musicians, contending that their intellectual property has been exploited without consent. In response, AI developers are increasingly mounting fair use defences and setting aside financial resources to navigate the burgeoning legal landscape. Law firms such as DLA Piper and Morrison Foerster, meanwhile, are actively shaping the regulatory environment around accountability and safety concerns regarding AI-influenced content.

In March 2025, a federal judge dismissed an initial request from the music publishers aimed at preventing Anthropic from utilising their lyrics for training its AI, asserting that the plaintiffs had failed to establish “irreparable harm.” This ruling opened the door for more complex discussions about the intersection of AI training methodologies and copyright law, igniting a debate about whether AI companies can indeed harness copyrighted materials without explicit permission.

Anthropic has agreed to implement measures, termed "guardrails", to prevent Claude from producing copyrighted lyrics, an arrangement sanctioned by U.S. District Judge Eumi Lee, but the larger implications of such moves remain contentious. While a step toward addressing the issue, the guardrails stop short of resolving the core dispute over whether the AI's training practices comply with copyright regulations.

Critics have pointed out that the allegations against Anthropic include claims of intentionality in its training processes. Reports suggest that the company’s AI was refined using direct prompts aimed at generating copyrighted lyrics, raising significant ethical concerns about the deliberate use of protected materials. The music publishers assert that searching for song lyrics through the AI was part of a systematic infringement strategy, suggesting a calculated disregard for copyright protections.

The landscape of legal challenges confronting generative AI tools continues to evolve. In at least seven cases in the United States, courts have scrutinised or penalised lawyers for integrating AI-generated content into filings, underscoring the substantial legal ramifications such content can carry. The implications of these lawsuits extend beyond the parties involved, potentially affecting the future development of AI technologies and their integration into creative industries.

As this case unfolds, the music industry and other creative sectors are watching closely, reflecting broader anxieties about the integration of AI into rights-sensitive domains. The outcome could not only shape the current dispute but also set important precedents for copyright law in an age increasingly defined by artificial intelligence.

Source: Noah Wire Services