Renowned musicians including Elton John, Billie Eilish, Paul McCartney, Radiohead, Sting, and Dua Lipa have collectively raised objections to the use of their music in training artificial intelligence (AI) large language models (LLMs). These concerns are part of a broader protest among creators over how AI systems are being developed and the implications for artistic rights and royalties.

LLMs are advanced AI tools that learn to generate content by processing vast amounts of data collected from the internet. This data is processed by a neural network design known as the transformer architecture, which enables the model to learn patterns and produce human-like outputs. In the case of music, recordings or scores typically undergo an additional transformation into symbolic representations, such as sequences of note-event tokens, for the AI to interpret. Musicians and industry advocates are increasingly alarmed at the use of their copyrighted works in this training process without authorisation, which they argue diminishes their control over their creations and threatens their income streams.
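To make the idea of symbolic music representation concrete, the sketch below encodes a short melody as a sequence of tokens. The token scheme (`NOTE_ON<…>`, `DUR<…>`, `BOS`/`EOS`) is hypothetical and for illustration only; real systems use a variety of MIDI-derived encodings.

```python
# A minimal, hypothetical encoding of a melody as symbolic tokens,
# loosely inspired by MIDI-style event vocabularies.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_tokens(pitch: int, duration_beats: float) -> list[str]:
    """Encode one note as (note-on, duration) tokens.

    pitch: MIDI note number (60 = middle C, i.e. C4).
    """
    name = NOTE_NAMES[pitch % 12]
    octave = pitch // 12 - 1
    return [f"NOTE_ON<{name}{octave}>", f"DUR<{duration_beats}>"]

def encode_melody(notes: list[tuple[int, float]]) -> list[str]:
    """Wrap a note sequence in begin/end markers, one token stream per tune."""
    tokens = ["BOS"]
    for pitch, dur in notes:
        tokens.extend(note_to_tokens(pitch, dur))
    tokens.append("EOS")
    return tokens

# Opening notes of "Twinkle Twinkle Little Star": C C G G A A G
melody = [(60, 1.0), (60, 1.0), (67, 1.0), (67, 1.0),
          (69, 1.0), (69, 1.0), (67, 2.0)]
print(encode_melody(melody))
```

Once music is flattened into token sequences like this, it can be fed to the same next-token-prediction training loop used for text, which is why copyrighted recordings and scores become training data in the first place.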

A notable demonstration of opposition came from over 1,000 British musicians, among them esteemed artists such as Kate Bush, Cat Stevens, and Annie Lennox, who released a silent album entitled “Is This What We Want?”. The album’s track titles collectively conveyed a message protesting the UK government’s proposed legislation on AI and copyright, which the artists fear could legalise the unauthorised use of music by AI companies. Beyond this, hundreds of music professionals have signed open letters condemning what they describe as “predatory” and “irresponsible” exploitation of their work to train AI models. These efforts emphasise that such practices threaten the value of human creativity and disrupt the established music ecosystem.

Several complex copyright questions underpin this debate. Key issues include whether the act of training AI models on copyrighted music constitutes infringement. Some jurisdictions, such as Singapore and the European Union, have enacted laws that explicitly permit this to encourage AI investment, while others, including the UK and Hong Kong, are considering ‘opt-out’ exceptions that would allow rights holders to exclude their works from AI training datasets. However, implementing these systems effectively is challenging, particularly since most works are already publicly available without machine-readable opt-out metadata.
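The practical difficulty with opt-out schemes can be illustrated with a small sketch of how a training pipeline might filter works by rights metadata. The field name `ai_training_opt_out` and the `Work` record are hypothetical, invented for this example; real proposals differ, and the key point holds regardless: works that carry no tag at all default to being included.

```python
# Hypothetical opt-out filtering on the training-pipeline side,
# assuming each work carries machine-readable rights metadata.

from dataclasses import dataclass, field

@dataclass
class Work:
    title: str
    rights_holder: str
    metadata: dict = field(default_factory=dict)

def may_train_on(work: Work) -> bool:
    """Exclude a work only if its metadata explicitly reserves it.

    Untagged works default to *included* — the weakness critics point
    to, since most existing works online carry no such metadata.
    """
    return not work.metadata.get("ai_training_opt_out", False)

catalog = [
    Work("Song A", "Label X", {"ai_training_opt_out": True}),
    Work("Song B", "Label Y"),  # no tag at all -> swept into training
]
usable = [w.title for w in catalog if may_train_on(w)]
print(usable)  # -> ['Song B']
```

This default-inclusion behaviour is why the article notes that opt-out systems are hard to implement effectively: the burden falls on rights holders to tag works retroactively, and untagged back catalogues remain exposed.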

Another area of contention lies in the interpretation of ‘fair use’, a doctrine with broader application in the United States that has sparked numerous legal disputes concerning AI training material. Central to these discussions is the Berne Convention’s three-step test, which assesses whether unauthorised uses conflict with normal exploitation of the work and unreasonably prejudice the author’s legitimate interests. Determining the applicability of this test to AI training is likely to require judicial assessment.

Moreover, musicians have expressed alarm about the outputs generated by AI tools. AI-produced music often mirrors existing styles or replicates distinctive voices, raising concerns about originality, the eroding value of human artistry, and the potential loss of royalty income. Instances of AI-generated deepfake songs—such as an AI cover of Taylor Swift’s “That’s So True (Piano Version)”—illustrate the sophistication of these technologies and amplify worries about intellectual property rights.

Legal challenges have already been mounted across various regions. Major record companies including Universal Music Group, Sony Music Entertainment, and Warner Records have initiated copyright infringement lawsuits targeting AI music firms like Suno and Udio. These cases allege that the AI companies trained their models on copyrighted sound recordings without permission, seeking substantial damages on the basis that this activity constitutes widespread unlicensed copying outside the scope of fair use. The AI companies counter that such use qualifies as fair use or permissible intermediate copying under US law, and argue that record labels are trying to monopolise the market and stifle innovation. Evidence submitted by record labels has pointed to AI-generated songs bearing striking, sometimes near-identical, similarity to copyrighted tracks.

In response to these challenges, some musicians have adopted technical measures to disrupt AI training efforts. One approach involves embedding subtle and inaudible modifications into recordings that do not affect human listeners but confuse AI models attempting to learn from them. American electronic musician Benn Jordan, also known as The Flashbulb, advocates for this method through a tool called “Poisonify.” Industry groups are also calling for greater transparency from AI developers about their training datasets and for the creation of licensing frameworks to ensure creators receive fair compensation, potentially extending traditional music licensing models—including performance, mechanical, synchronisation, and communication licences—into the AI era.
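The principle behind such countermeasures can be sketched very simply: add a deterministic, very low-amplitude perturbation that a human listener would not notice but that shifts the raw numbers a model ingests. This is an illustration of the general idea only; the bounded random noise below is not how Poisonify works, which relies on far more sophisticated, model-aware techniques.

```python
# Illustrative sketch: bounded, near-inaudible perturbation of audio
# samples. NOT the actual Poisonify method — just the basic principle
# of changing what a model "sees" without changing what a human hears.

import math
import random

def perturb(samples: list[float], epsilon: float = 1e-3, seed: int = 0) -> list[float]:
    """Add noise bounded by +/- epsilon to each sample (values in [-1, 1])."""
    rng = random.Random(seed)
    return [max(-1.0, min(1.0, s + rng.uniform(-epsilon, epsilon)))
            for s in samples]

# A short 440 Hz tone at a 44.1 kHz sample rate.
rate = 44100
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate // 100)]
poisoned = perturb(tone)

peak_change = max(abs(a - b) for a, b in zip(tone, poisoned))
print(f"max per-sample change: {peak_change:.6f}")  # always <= epsilon
```

Because every sample moves by at most `epsilon` (here 0.001 on a [-1, 1] scale), the perturbation sits well below typical audibility, yet the file a scraper downloads is no longer numerically identical to the original recording.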

A distinct yet related legal issue concerns the copyright status of AI-generated content itself. Since AI lacks human authorship, works produced solely by AI may fall outside the scope of copyright law, affecting the commercial value and protection of such creations. The widespread adoption of cost-effective AI-generated content raises questions about the future role and sustainability of human musicians.

Throughout the 21st century, musicians have navigated significant challenges, including the advent of peer-to-peer file-sharing platforms such as Napster and The Pirate Bay, the rise of streaming services, and ongoing struggles over royalty fairness. The emergence of AI in creative domains represents the latest and arguably most complex battleground for artists seeking to protect their rights and livelihoods amid rapid technological change.

The information discussed is drawn from an analysis published by Mondaq and is intended to provide an overview without offering legal advice. Musicians and stakeholders facing specific concerns are advised to seek specialised counsel tailored to their individual circumstances.

Source: Noah Wire Services