Fri. Dec 6th, 2024

Why Identifying AI-Created Songs Is Crucial for Market Transparency (Guest Column)

A few weeks back, a member of the team at my company, Ircam Amplify, signed up for one of the many AI music generators available online and entered a brief prompt for a song. Within minutes, a new track was generated and promptly uploaded to a distribution platform. In just a couple of hours, that song, in which no human creativity played a part, was available on various streaming platforms. We diligently took action to remove the track from all of them, but the experiment highlighted a significant point.

It is now that simple! My aim here is not to pass judgment on whether AI-generated music is a good or a bad thing — from that perspective, we are neutral — but we think it is important to emphasize that, while the process is easy and cost-effective, there are absolutely no safeguards currently in place to ensure that consumers know if the music they are listening to is AI-generated. Consequently, they cannot make an informed choice about whether they want to listen to such music. 

With AI-generated songs inundating digital platforms, streaming services require vast technological resources to manage the volume of tracks, diverting attention away from the promotion of music created by “human” artists and diluting the royalty pool. 

Like it or not, AI is here to stay, and more and more songs will find their way onto streaming platforms given how quick and easy the process is. We already know that there are AI-generated music “farms” flooding streaming platforms; over 25 million tracks were recently removed by Deezer, and it is reasonable to speculate that a significant proportion of these were AI-generated. 

In the interest of transparency, consumers surely deserve to know whether the music they are tuning into is the genuine product of human creativity or derived from computer algorithms. But how can AI-generated tracks be easily distinguished? Solutions already exist. At Ircam Amplify, we offer a series of audio tools, from spatial sound to vocal separation, covering the full audio supply chain. One of the latest technologies we have launched is an AI-generated music detector designed to help rights holders, as well as platforms, identify tracks that are AI-generated. Through a series of benchmarks, we have been able to determine the "fingerprints" of AI models and apply them to their output to identify tracks coming from AI-music factories.

The purpose of any solution should be to support the whole music ecosystem by providing a technical answer to a real problem while contributing to a more fluid and transparent digital music market. 

Discussions around transparency and AI are gaining traction all around the world. From Tokyo to Washington, D.C., from Brussels to London, policymakers are considering new legislation that would require platforms to identify AI-generated content. That is the second recommendation in the recent report “Artificial Intelligence and the Music Industry — Master or Servant?” published by the UK Parliament. 

Consumers are also demanding it. A recent UK Music survey of more than 2,000 people, conducted by Whitestone Insight, revealed that more than four out of five respondents (83%) agree that if AI technology has been used to create a song, it should be clearly labeled as such.

Similarly, a survey conducted by Goldmedia in 2023 on behalf of rights societies GEMA and SACEM found that 89% of the collective management organizations’ members expressed a desire for AI-generated music tracks and other works to be clearly identified. 

These overwhelming numbers tell us that concerns about AI are prevalent within creative circles and are also shared by consumers. There are multiple calls for the ethical use of AI, mostly originating from rights holders — artists, record labels, music publishers, collective management organizations, etc. — and transparency is usually at the core of these initiatives. 

Simply put, if there’s AI in the recipe, then it should be flagged. If we can collectively find a way to ensure that AI-generated music is identified, then we will have made serious progress towards transparency. 

Nathalie Birocheau is CEO of Ircam Amplify. A certified engineer (Centrale-Supélec) and former strategy consultant, she has led several major cultural and media projects, notably within la Maison de la Radio. She became deputy director of France Info in 2016, where she led the creation of the global news brand franceinfo.
