It sounds like a bad Black Mirror episode, or a joke.
A band of four shaggy-haired guys released two consecutive albums of generic psych-rock songs. The tracks appeared on third-party playlists with hundreds of thousands of followers and in Spotify users' Discover Weekly feeds. But the band wasn't real, and its songs had racked up millions of streams in a matter of weeks. It was a "synthetic music project" powered by artificial intelligence.
The controversy around The Velvet Sundown spiraled almost as quickly as it gained momentum. Someone posing as a member of the band spoke to publications including Rolling Stone about its use of AI, then admitted he had made the whole thing up to troll journalists. Later, the official Velvet Sundown page updated its Spotify biography to acknowledge that all of its music is composed and voiced by artificial intelligence.
“This isn’t a trick — it’s a mirror,” the statement read. “An ongoing artistic provocation designed to challenge the boundaries of authorship, identity, and the future of music itself in the age of AI.”
Like every technological development before it, artificial intelligence has generated both excitement and anxiety about its potential to change the music business. Its practical applications range from full-blown deception, as in the case of The Velvet Sundown, to helping human artists restore audio quality, as the surviving members of The Beatles did with John Lennon’s old vocal tracks on the Grammy-winning single “Now and Then.”
With 696 million users in more than 180 markets, Spotify is the world’s most widely used streaming service. In interviews and podcasts, Spotify CEO Daniel Ek has expressed hope that artificial intelligence will help the company’s algorithm better match users with what they’re looking for. He hopes to deliver “that magical thing that you didn’t even know that you liked better than you can do yourself,” as he told The New York Post in May. (Spotify introduced an AI DJ in 2023 that offers a mix of commentary and recommendations; the platform also offers an AI tool for translating podcasts into other languages.)
Ek has also stated unequivocally that AI should support human creators rather than replace them. Yet unlike other digital giants such as YouTube, Meta, and TikTok, Spotify has not moved to label AI-generated content. So why doesn’t the world’s biggest streaming service tell listeners whether what they’re hearing was created with artificial intelligence? And what problems does that pose for fans and artists alike?
When asked whether the company had considered implementing a detection or tagging system for AI-generated music, or what challenges doing so might pose, a Spotify representative neither confirmed nor denied the idea.
“The tools that musicians use to create are not regulated by Spotify. We think artists and producers should have control,” a Spotify representative said in a written comment to NPR. “We aggressively strive to prevent fraud, impersonation, and spam, and our platform policies center on how music is presented to listeners. Content that violates the rights of artists, misleads listeners, or misuses the platform will be removed or punished.”
Generative AI and ghost artists
In 2023, after Universal Music Group alleged copyright violations, Spotify and other services took down a song that used AI to mimic the voices of Drake and The Weeknd without the artists’ consent. The Velvet Sundown’s profile, however, remains active, and a new album was posted on July 14. Because the page isn’t impersonating an established artist, it isn’t technically breaking any guidelines. But if one of its songs surfaced on a user’s Discover Weekly playlist (one of Spotify’s automated playlists, which rack up millions of plays each week), nothing would indicate that the voice they’re hearing isn’t a real person’s.
Journalist Liz Pelly, author of Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, says streaming services have struggled with transparency for nearly a decade, and that consumers deserve a clearer sense of what they’re consuming and where it comes from.
“In order for users of these services to make informed decisions and in order to encourage a greater sense of media literacy on streaming, I do think that it’s really important that services are doing everything they can to accurately label this material,” adds Pelly. “Whether it’s a track that is on a streaming service that is fully made using generative AI, or it’s a track that is being recommended to a user because of some sort of preexisting commercial deal that allows the streaming service to pay a lower royalty rate.”
Ek has praised AI for making music production easier and lowering the barrier to entry into the industry. But AI-generated music could also slash licensing fees and streaming services’ overall payout costs. According to Pelly, Spotify has a history of seeking out the cheapest possible content to offer its subscribers. In her reporting, she found that Spotify already fills out its playlists with background music produced in bulk by production companies. The rise of AI-generated music, she says, fits that pattern for tech firms looking to boost streams and cut costs.
“Spotify prioritizes listener satisfaction, and there is a demand for music to suit certain occasions or activities, including mood or background music,” a Spotify representative told NPR in response to questions about this practice and its financial ramifications. “This type of content represents just a tiny percentage of the music on our platform. This music is licensed by rightsholders, just like any other music on Spotify, and the terms of each deal differ. Spotify has no control over how musicians present their work, including whether they choose to release their songs under a band name, their real name, or a pseudonym.”
One platform is already doing it
In June, Deezer introduced the first AI identification and tagging system implemented by a major music streaming service. Founded in Paris in 2007, the platform has been closely tracking the technological advances that allow AI models to generate increasingly realistic-sounding music.
According to Manuel Moussallam, head of research at Deezer, the tool took his team two and a half years to develop. The company also published a study acknowledging that the tool can be circumvented: it mainly targets waveform-based generators and can only identify music produced by specific tools.
“We started seeing [AI] content on the platform, and we were wondering if it corresponds to some kind of new musical scene, like a niche genre,” says Moussallam. “Or if there were also some kind of generational effect like are young people going to switch to this kind of music?”
That hasn’t been the case so far, he says. According to the tool, almost 20% of the music uploaded to Deezer each day (close to 30,000 tracks) is AI-generated. But much of it is simply spam, Moussallam says. To determine how many people were organically streaming this content, Deezer removed detected AI-generated tracks from both algorithmic and editorially curated playlists. It found that almost 70% of the streams were fraudulent: people had created fake artists and used bots to generate fake streams in order to collect royalties. When fraudulent streams are detected, Deezer stops paying royalties on them. According to the company, the revenue dilution attributable to AI-generated music — that is, streams from real people listening to this content — is less than 1%.
“The only thing that we didn’t really find is some kind of emergence of organic, consensual consumption of this content,” Moussallam says. “It’s very amazing. The number of AI-generated tracks has significantly increased, but the number of actual users streaming this content has not increased.”
He says AI-generated content like The Velvet Sundown sees a brief bump in listenership during periods of media attention, then quickly fades as the novelty wears off.
Who’s responsible?
It’s important to remember that not all AI use is overtly harmful, says Hany Farid, a professor of digital forensics at the University of California, Berkeley. Artists can use AI in numerous ways to enhance or augment their work, but transparency is essential when using AI, both within the music industry and beyond it.
“When I go to the grocery store, I can get a wide variety of foods. Some of it I would consider healthy, some of it not. We’re going to label food to tell you how healthy or unhealthy it is, as well as how much sugar, sodium, and fat it contains,” Farid explains. “It’s not a value judgment. We’re not telling you what you can or cannot purchase. We’re just telling you.”
Extending the grocery analogy, Farid says the manufacturer of the products, not the store, bears responsibility for the labels. Similarly, on streaming and social platforms, he says, the burden of disclosing AI use should ideally fall on whoever uploads a song or image. But because tech companies rely on user-generated content to sell ads against, and because more content means more ad money, there is little incentive to enforce disclosure from users or for the industry to police itself. As with cigarette warnings or food labels, Farid says, the solution may come down to government regulation.
“There’s responsibility from the government to the platforms, to the creators, to the consumers, to the tech industry,” says Farid. “For example, you could say, somebody created music, but they used [an AI software tool]. Why isn’t that tool adding a watermark in there? There’s responsibility up and down the stack here.”
AI models advance at such a pace, Farid says, that it’s difficult to give people guidance on how to identify deepfakes or other AI-generated content. But when it comes to listening to music, he and Pelly suggest going back to basics.
“If music listeners are concerned with not accidentally finding themselves in a situation where they are listening to or supporting generative AI music, I would say the most direct thing to do is go straight to the source,” says Pelly, “whether that be buying music directly from independent artists and independent record labels, getting recommendations not through these anonymous algorithmic news feeds, and investing in the networks of music culture that exist outside of the centers of power and the tech industry.”
Copyright 2025 NPR