Take a chance on AI — but protect the musicians

"My songwriting skills were ‘trained’ on Beatles tracks that I paid for. Tech tools should follow this principle" Björn Ulvaeus

This article first appeared in the Financial Times on 4th November 2023. © 2023 The Financial Times Ltd. All rights reserved.

(The writer is a member of Abba, founder of Pophouse Entertainment and president of CISAC)

Expectations were sky-high. It was November 22 1968 when Benny Andersson carefully removed the plastic sleeve from the Beatles’ “White Album”. We were in his small apartment in central Stockholm and we had just gone out and bought it. He placed it on the turntable and we listened reverently — to every note, every word, every instrument, every voice, every sound. We absorbed everything, and when we had finished, we started again. As did millions of others around the world.

The Beatles inspired more young people to start writing songs than any other band in history. It is by listening again and again to the songs you love and admire that you learn. If one day you’re lucky enough to write a hit yourself, you can be certain that all those songs helped you along. They’ve been lingering in your subconscious in some form ever since you heard them for the first time. You could say that you’ve been “trained” by them.

If you actually then make a living as a songwriter, you should humbly acknowledge that you stand on the shoulders of others. Apart from admiration and gratitude, what the Beatles got from me in monetary terms were royalties from the records I bought. But now there’s another kind of songwriter in town, even more keen on learning. Deep learning. With a neural network loosely imitating mine but not quite there yet.

Already, a few artificial intelligence models can generate music prompted by paragraphs of text, and they’re getting more and more advanced. I was introduced to YouTube’s Music AI Incubator recently and was blown away. Mostly, it was astonishing for what it will be capable of in the future — and it made me realise how urgent it is to find answers to the emerging rights issues now. How should original creators be properly remunerated for use of their works to train AI? Should creators have the right to refuse? Who, if anyone, owns the AI output?

Agnetha in ‘Abba Voyage’ © ABBA Voyage

The AI model has usually been trained on a multitude of individual works, each consisting of many different parts: sound, style, genre, voice, instrumentation, melody, lyrics and more. Some parts are better protected by copyright law than others. For instance, it is not clear whether a celebrated singer, their voice recreated by AI, can control or prevent the exploitation of works which use that recreation.

Protecting song rights (melody and lyrics) needs urgent attention. In almost all cases, training an AI model on unlicensed material is copyright infringement — unless the user can argue that an exception applies. Such exceptions generally allow for non-commercial use: this is not going to be the case for a significant amount of AI music. Who, then, should own it?

Copyright is the creator’s right to prevent their work from being, or to allow it to be, copied. Since the output may not technically contain any of the material that was originally protected, we’re in uncharted territory. As an artist, I think the overarching principle here must be freedom of expression. This may be controversial, but I’m beginning to think we must be open to seeing the prompter, however simple the prompt, as the creator and therefore the author of the output.

Entirely computer-generated output should not, of course, receive copyright protection. But human input should be protected — and my view is that humans will be involved most of the time.

AI tools in the right hands could result in amazing new artistic expression, and the creator should have as much freedom as possible. I almost imagine the technology as an extension of my mind, giving me access to a world beyond my own musical experiences. The creator should not be boxed in by complicated rules about how much AI they have used, or by having to declare exactly who they were inspired by.

Instead I imagine a user entering a series of prompts, trying different styles, inspired by various composers and lyricists, perhaps even using parts of the generated melody and the lyrics. (It’s important to understand that these aren’t copies; they are original compositions.)

In tandem with the user’s own input, this process could result in a decent — or even great — song. So how do you assign a percentage to the various contributions? Even if it were technically possible to trace the origins, the amount of metadata needed to administer payments would be staggering.

When it comes to that, the music industry is in disarray as it is. Possible remuneration solutions are everywhere. One idea is a subscription model. As a songwriter, I would contemplate letting the Incubator (and other AI models) train on my works if I could subscribe to a professional version — and if a portion of this subscription went back to the creative community through publishers and collecting societies. This is clearly doable and should be considered.

The change coming in music, as in society as a whole, is monumental. No one knows what lies ahead. The tech companies will push to monetise and scale AI models rapidly, even as we only begin to understand their uses. But we need original creators to be both protected and fairly remunerated.