The Spotify AI Epidemic: How Artificial Intelligence is Changing the Music Industry
Written by Alex Fisher
Thumbnail & Banner Photo by Possessed Photography on Unsplash.com
If you’re like me, you’ll often find yourself listening to music while walking between classes, during study time, or simply to help you relax. Music can be extremely powerful: it can help you relieve stress and anxiety, process all sorts of emotions, or even help treat brain injuries. It is also one of humanity’s oldest art forms. Music has an incredibly rich history, with rudimentary instruments having existed for more than forty thousand years.
What happens when you take the human element out of music, however? What does this ancient human art become when it is generated by an artificial intelligence?
You might listen to your music on Spotify. After all, it is a very convenient and relatively cheap way (especially with the student discount) of listening to music, and it even has some integrated social features that the Journal has touched upon in a previous article. However, if you’re an avid listener of the platform’s personalized Release Radar and Discover Weekly playlists, then lately you may have noticed a surge in short, similar-sounding songs with robotic-sounding singers. You’re not alone in this: listeners from across the internet have been reporting an uptick in artificially generated music flooding their Discover Weekly and Release Radar playlists, crowding out new music created by real, human artists. So what exactly is going on here?
The popular music streaming app is no stranger to artificial intelligence. The platform introduced the ‘DJ’ AI back in February of 2023 and added the ‘AI Playlist’ feature in mid-2024. More recently, Spotify came under fire for the use of AI in the annual Spotify Wrapped. This criticism was especially pointed, as it came shortly after the company laid off seventeen percent of its workforce. Gustav Söderström, co-president of the company, has even said that AI-generated music is welcome on the platform so long as it obeys copyright laws. That caveat is questionable at best: these models are often trained on copyrighted material without permission, and there have been several cases of famous musicians being artificially imitated without the approval of the artists or their record labels.
Given Spotify’s stance on artificial intelligence, it comes as no surprise that music created by generative AI has become increasingly common on the platform, causing real artists to lose income to machine-made music. Online guides such as this one even give simple instructions on how to generate music with AI and upload it to various streaming platforms. These AI-made songs have become so numerous that in May 2023, tens of thousands of artificially generated songs were removed from Spotify due to artificial streaming—and even that figure represented a mere seven percent of the output of just one AI music service.
While the time it takes to write, record, and produce a song varies greatly, we can make a general assumption to determine how much work this would be. If we assume a typical song takes six hours to make, this seven percent equates to anywhere from tens of thousands to hundreds of thousands of hours of work that could have been spent by real people. While on the surface this may sound like a good thing—allowing people to save time when creating music—in reality, it can practically destroy the creative process. Ethical concerns, the ‘quantity-over-quality’ mindset, and the lack of emotional depth all act to take the art of music and turn it into nothing more than an algorithm that can be spat out by typing a few words into a machine.
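The estimate above can be sketched as a quick back-of-envelope calculation. The only inputs are the figures already stated: “tens of thousands” of removed songs (taken here as a 10,000–100,000 range, an assumption about what that phrase covers) and the assumed six hours of work per song.

```python
# Back-of-envelope estimate of the human labour the removed
# AI-generated songs would have represented.
# Assumptions: 10,000-100,000 songs removed ("tens of thousands"),
# and roughly 6 hours to write, record, and produce one song.

HOURS_PER_SONG = 6

def displaced_hours(songs_removed: int, hours_per_song: int = HOURS_PER_SONG) -> int:
    """Total hours of human work an equivalent number of songs would take."""
    return songs_removed * hours_per_song

low = displaced_hours(10_000)    # lower bound
high = displaced_hours(100_000)  # upper bound
print(f"{low:,} to {high:,} hours")  # → 60,000 to 600,000 hours
```

The result, 60,000 to 600,000 hours, matches the “tens of thousands to hundreds of thousands of hours” range in the text.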
Another quandary that must be addressed is the environmental impact of this generated content. This is a topic that the Journal has discussed before in greater detail, but to summarize, training a single AI model just once can generate five times the carbon emissions of an average car over its entire lifetime. Considering that these models will occasionally need to be retrained with more up-to-date data, their environmental impact is startling. With over twenty AI music generation services available, this equates to thousands of tonnes of carbon dioxide emissions—and that’s only the environmental cost of training these models once, not of continuing to run and retrain them.
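For readers who want to see where “thousands of tonnes” comes from, here is a rough sketch. The 57-tonne figure for an average car’s lifetime emissions is an assumption on my part (a commonly cited estimate that includes fuel); the five-times multiplier and the count of twenty services come from the article itself.

```python
# Rough sketch of the "thousands of tonnes" estimate.
# Assumptions: an average car emits ~57 tonnes of CO2 over its lifetime
# (a commonly cited estimate, fuel included); training one model once
# emits ~5x that; and each of ~20 AI music services trains at least
# one model.

CAR_LIFETIME_TONNES = 57   # assumed average-car lifetime CO2, in tonnes
TRAINING_MULTIPLIER = 5    # from the article: 5x a car's lifetime emissions
NUM_SERVICES = 20          # "over twenty" AI music generation services

per_model = CAR_LIFETIME_TONNES * TRAINING_MULTIPLIER  # tonnes per training run
total = per_model * NUM_SERVICES                       # tonnes across all services
print(f"~{total:,} tonnes of CO2 for initial training alone")
```

Under these assumptions, a single training run accounts for roughly 285 tonnes, and twenty services together land in the low thousands of tonnes—consistent with the claim, and before any retraining or day-to-day operation is counted.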
The ethical concerns over artificially generated music don’t end there, however. Between May and July of 2024, the music rights management organization APRA AMCOS released a study on AI music within the Australian and New Zealand music industry. The study presents a comprehensive breakdown of the market size and usage of generative AI tools, and the overall results are striking. While just over half of the artists surveyed believe these tools can assist in the creative process, over eighty percent of respondents expressed concern that AI music could cut into their income to the point that creating music may no longer be a sustainable career.
Aboriginal Australian artists are at an even greater risk when it comes to AI-generated music. Of the Aboriginal Australian musicians that responded to the APRA AMCOS survey, between sixty-seven and eighty-four percent expressed concerns over losing cultural rights. Furthermore, upwards of eighty-nine percent of these respondents believed that AI music may cause an increase in cultural appropriation. This establishes a worrying precedent for the future of traditional and cultural music.
Depending on the dataset used to train a given generative artificial intelligence, biases and stereotypes within the generated music may present another concern. If the training data contains these biases in any significant capacity, racial or cultural stereotypes may surface in the final product. While the dataset or the generated music can be screened so that such biases are removed from the model’s output, it remains an open question whether this screening will be done at all—and how thoroughly—to ensure these negative and harmful influences are properly removed.
One of the arguments that supporters of AI-generated music often raise is that it democratizes music-making. While making music production more accessible to the average person is undoubtedly a good thing, and the precedent of technology increasing access to music—including through artificial intelligence—has already been established, the nature of AI music casts doubt on the artistic integrity of the generated content. Whether music created by artificial intelligence can be considered ‘real’ music—and whether AI-generated art in general can be thought of as true art—is still debated, with excellent points coming from both sides of the argument.
Regardless of whether or not music generated by artificial intelligence is ‘real’ music, the potential for biases to be perpetuated or artists to lose money—especially traditional and Aboriginal musicians—is very real. The position that Spotify seems to have taken, embracing AI over human workers and artists, sets a concerning precedent given their position as an industry leader in music streaming. Will the future of music be defined by computer-generated algorithms and artificial singers? Or will it remain a truly human art form, continuing to build on nearly forty thousand years of history?
What are your thoughts on AI-generated music and its impact on the music industry? Let us know in the comments on our social media pages.