Will AI destroy the music industry?

Do beautiful songs require human creativity?

(Image: digitally generated sound wave. Credit: Yaroslav Kushta/Getty Images)

"For as long as humans have been making music, we've been trying to find ways for music to make itself," classical music critic Michael Andor Brodeur wrote for The Washington Post in February. First, there were the Aeolian harps of ancient Greece, which sang and hummed using just the wind's breeze. Next came mechanical organs, which used "the force of flowing water to draw breath into their bellows." Now, 300 years and innumerable "clunky crank-powered, disc-driven," and "pneumatic musical diversions later," we find ourselves on the precipice of a new era of self-generated tuneage: that which is created by AI.

It's a curious and layered shift, one preceded by the dawn of TikTok and the rise of bedroom pop, whereby any teenager with a computer and a dream can self-publish hymns from the comfort of their home. And though it still requires a degree of human intervention (at least for now), all this newfangled software promises to make composition even easier — take the viral hit "Heart on My Sleeve," a purported Drake and The Weeknd collab built using AI-generated versions of the stars' voices. Nearly indistinguishable from something either artist would release himself, the track prompted condemnation from the pair's label, Universal Music Group, which eventually cried copyright infringement and strongarmed streaming platforms into its removal. The debate goes beyond pop radio, too: "Royalty-free music generators can be used now to compose a rap beat, a commercial jingle or a film score, cutting into an already fragile economy for working musicians," says Joe Coscarelli at The New York Times. While some proponents — including those who own and build the tools required to create AI melodies — view this next chapter as a chance to make music-making more accessible, critics fear such software will cheapen the art form and render all human-generated hooks obsolete. How dangerous a game are we playing?

What are the commentators saying?

Singer-songwriter John Legend is under no illusion that AI's time in the sun will be short-lived. But he is also of the mind that laws should be changed to better protect artists. "AI's going to be a part of our lives, and it's going to be something that kind of augments our own intelligence, our own collective intelligence. And that's fine," he said in April. "But when it comes to using someone's likeness, using their intellectual property, I believe our rights should still be protected." Fellow musician Grimes, however, is embracing the threat openly, in hopes it will pay dividends (literally) down the line. "I'll split 50 percent royalties on any successful AI-generated song that uses my voice," she tweeted in the wake of "Heart on My Sleeve." "Same deal as I would with any artist I collab with. Feel free to use my voice without penalty. I have no label and no legal bindings."


Other industry insiders worry AI not only removes a degree of human connection from a song's lyrics but will also prove detrimental to the songwriting process itself. "My fear is that we're losing being in a room together; talking about a song; talking about a drumbeat; talking about a feeling," producer Garth Richardson told Kelly Bishop, in a piece for Vice. "With AI creation it's completely gone – it's like COVID-19 times ten." Still, current software offerings seem a long way off from successfully mimicking the nuances of human composition; maybe that gap is proof the problem isn't, at least not yet, as bad as it seems. "Can a machine ever truly feel such torment to the extent that it will create the kind of song that saves lives? I don't think so," Bishop says. "AI isn't going to replace living artists that bleed for their art, the type whose work derives from the horror of the human condition because robots can't feel pain – yet."

And believe it or not, there could also be some pros here. "We're making it so that you don't have to spend 10,000 hours and thousands of dollars buying equipment to share and express your ideas," Drew Silverstein, CEO of Amper Music, an AI songwriting tool, told CNN in 2018. Indeed, AI can be a great tool for up-and-coming artists who "can't afford expensive beat lease licenses or a music producer/sound engineer," added Tao Romera, COO of Soundraw, a company that uses AI to generate royalty-free music, speaking with Complex. "So rather than damaging the artists and the music industry, I think it's going to have a very positive impact on it by allowing many more people to become artists and democratizing music creation."

What's next?

Rob Abelow, founder of the blog Where Music's Going, is predicting a few areas of noticeable change, song quality and consumer focus among them. "There will be a flood of mediocre, indistinguishable, unoriginal music. Emphasis on context- and mood-based music, ambient, sleep, lo-fi, instrumental," he told Complex. "'Good enough' music, which has become plentiful in the last 5-10 years, will now go from the long-tail to the generative AI."

There is also the possibility that copyright rules are adapted to fit the changing landscape, considering "robots have traditionally been exempt" from liability under the "fair use" doctrine, the Times notes. Any new rules would likely cover how copyrighted works are used during machine learning, with issues of liability hinging on intent. If a machine is copying art or material solely to learn, that could qualify as fair use, or perhaps "fair learning," a new standard for the AI era suggested by Stanford Law School's Mark Lemley. But "if the intent is to reproduce the work, it will not," the Times summarizes. (A group of artists is currently tied up in a lawsuit with Stability AI, an image-generating software company, over this very issue.)

Ultimately, there could be a way to do this right, Abelow writes on his blog. We're not there yet, but if "we make it easy for users & platforms to use legally sourced, licensed models that compensate original artists, we create a win-win."


Brigid Kennedy

Brigid is a staff writer at The Week and a graduate of Syracuse University's S.I. Newhouse School of Public Communications. Her passions include improv comedy, David Fincher films, and breakfast food. She lives in New York.