[ExI] Spotify’s Attempt to Fight AI Slop Music Fails

BillK pharos at gmail.com
Fri Sep 26 19:21:55 UTC 2025


On Fri, 26 Sept 2025 at 20:08, BillK <pharos at gmail.com> wrote:

> Spotify’s Attempt to Fight AI Slop Falls on Its Face
> ---------------------------
>
> They say that AI is not supposed to be able to produce music that
> people like, because AI doesn't have emotions or understand human
> experiences.
>
> Really?
> So I asked Gemini AI to research this.
> The research report astounded me with its understanding of the problem.
> I'll put the report in a follow-up post.
> BillK
>
---------------------------------------

The Paradox of AI 'Slop' Music: A Disruption Analysis of Art, Economics,
and Authenticity

Executive Summary: The AI Slop Conundrum

The rapid proliferation of AI-generated music presents a significant
paradox to the modern music industry. On one hand, there is a legitimate
and growing concern that this content, pejoratively termed "AI slop" or
simply "slop," amounts to an overwhelming volume of low-quality media that
threatens the livelihoods of human artists by siphoning royalties [1, 2].
On the other
hand, this same content is demonstrably achieving millions of views and
streams on major platforms, raising the question of whether it is now "good
enough" for many listeners.

This report analyzes the core economic, technological, and cultural forces
driving this dynamic. The investigation reveals that the economic threat is
not a matter of a few viral hits but a systematic dilution of the royalty
pool enabled by a new form of digital fraud. The popularity of AI music,
meanwhile, is not a testament to its artistic quality but a function of
sophisticated algorithmic promotion and its utility as a source of cheap,
royalty-free background content for a new generation of digital creators.

The analysis concludes that the "good enough" metric for AI music is not
based on traditional artistic merit, such as emotional depth or creative
originality, but rather on its technical proficiency and functional
utility. This challenges the very definition of a "hit song" and the value
of human-created art. While this disruption echoes historical technological
shifts in the music industry—from the radio to the MP3—the unique ability
of AI to mimic and, in some cases, autonomously create art presents an
unprecedented challenge to the concepts of authorship and human creativity
itself. The path forward for the industry will require a combination of new
legal frameworks, a redefinition of the artist's role, and a strategic
emphasis on the ineffable human qualities that AI cannot replicate.

Introduction: The New Digital Disruption

The digital music ecosystem, once heralded as the great democratizer for
artists, now faces an existential challenge from within. The advent of
generative artificial intelligence has unleashed a new creative force that
can produce music at an unprecedented speed and scale. This has led to a
central paradox: how can AI-generated music, which is frequently dismissed
by critics and artists as "slop," be simultaneously a financial threat to
human musicians and a widely popular phenomenon that garners millions of
streams and views on platforms like Spotify and YouTube? This report delves
into this question by examining the financial mechanisms, the drivers of
popularity, and the qualitative distinctions between human and AI-generated
music.

At the core of this discussion are two key terms that require precise
definition. The first, "AI slop," is a term for low-quality media generated
by AI, characterized by an inherent lack of effort and an overwhelming
volume [1, 2]. The term carries a pejorative connotation, evoking the same
sense of annoyance and lack of value as digital "spam" [1]. The second
term, generative AI in music, refers to autonomous systems that synthesize
vast, pre-existing musical datasets to make compositional decisions [3, 4].
These systems, often built on deep learning and neural networks, can create
entirely new musical compositions, variations, and harmonies from simple
text prompts, and they are capable of doing so without direct human input [3,
5, 6]. The juxtaposition of these two concepts—the automated generation of
low-quality "spam" and the undeniable popularity it is achieving—forms the
basis of this comprehensive analysis.

Economic and Financial Impact: Diluting the Royalty Pool

A primary concern among human artists is the perceived dilution of the
finite royalty pool by AI-generated content. This concern is not unfounded;
it is rooted in the very structure of the music streaming economy.

The Market Share Model Explained

The vast majority of major streaming platforms, including Spotify, Apple
Music, and Amazon Music, operate on a Market Share Payment System (MSPS) [7,
8]. This model works by pooling all revenue from subscriptions and
advertising and then distributing that revenue to rights holders based on
their proportion of the total streams for a given period [7]. For example,
if an artist's streams account for 2% of the platform's total, they are
allocated 2% of the total revenue pool [8]. This model's design is critical
to understanding the threat posed by AI.
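
The arithmetic behind this model is simple enough to sketch. The short
Python example below is a simplified illustration of the pro-rata principle
with invented figures and names; it is not any platform's actual accounting.

# Simplified sketch of a Market Share Payment System (pro-rata) payout.
# All figures and names are hypothetical.

def msps_payouts(revenue_pool, streams_by_rights_holder):
    """Split the revenue pool in proportion to each rights holder's streams."""
    total_streams = sum(streams_by_rights_holder.values())
    return {
        holder: revenue_pool * streams / total_streams
        for holder, streams in streams_by_rights_holder.items()
    }

payouts = msps_payouts(
    revenue_pool=1_000_000.00,               # pooled subscription + ad revenue
    streams_by_rights_holder={
        "Artist A": 2_000_000,               # 2% of all streams
        "Rest of catalog": 98_000_000,       # 98% of all streams
    },
)
print(payouts)   # {'Artist A': 20000.0, 'Rest of catalog': 980000.0}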

The Mechanism of Royalty Dilution

The Market Share Payment System creates a direct and exploitable
vulnerability for bad actors. AI tools have made it trivially easy for
fraudsters to create "mass uploads of artificial music" [9]. Spotify's own
data illustrates the scale of this problem: the company removed 75 million
"spam tracks" in a single year, a volume that rivals its entire catalog of
100 million legitimate songs [9]. This flooding of the market, which
includes everything from meditation instrumentals to vocal impersonations
of famous artists, introduces an unprecedented level of competition for
human-created catalogs [9, 10].

A new paradigm of streaming fraud has emerged to exploit this system.
Rather than attempting to get millions of streams on a single track, which
would raise an obvious red flag, scammers use AI to generate hundreds of
thousands of songs [11]. They then use bot farms to stream each of these
tracks just a few thousand times—just enough to evade detection and
generate royalties from each song [11]. Since a stream longer than 30
seconds is all that is required to generate a royalty [9], this
high-volume, low-engagement strategy is a highly efficient way to divert
funds from the shared royalty pool [11]. This technological enablement of
fraud at scale is a fundamental shift in how the industry is being
exploited.
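
The cumulative effect of this strategy follows directly from the pro-rata
arithmetic. The sketch below continues the simplified model above; the pool
size, legitimate stream count, and size of the fraudulent catalog are all
invented numbers chosen only to show the scale involved.

# Hypothetical illustration of high-volume, low-engagement streaming fraud
# under a pro-rata payout model. Every number here is invented.

revenue_pool = 10_000_000_000.00     # assumed annual payout pool
legit_streams = 1_000_000_000_000    # assumed legitimate streams in the same period

fraud_tracks = 300_000               # AI-generated tracks uploaded by one operation
streams_per_track = 3_000            # bots keep each track inconspicuously low
fraud_streams = fraud_tracks * streams_per_track

fraud_share = fraud_streams / (legit_streams + fraud_streams)
print(f"Fraudulent streams: {fraud_streams:,}")
print(f"Share of pool diverted: {fraud_share:.2%}")
print(f"Revenue diverted: ${revenue_pool * fraud_share:,.0f}")

Under these assumed numbers, no single track stands out, yet the operation
as a whole diverts roughly nine million dollars from the shared pool.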

Combating Fraud and Spam

The music industry is actively responding to this threat. Major labels,
most notably Universal Music Group, have filed lawsuits against AI
platforms, petitioning streaming services to block them from using their
copyrighted songs for training purposes [12]. UMG successfully had a
deepfake song featuring AI-made vocals of Drake and The Weeknd pulled from
streaming services, citing "infringing content created with generative AI" [9,
13].

Streaming platforms are also adapting their business models. Spotify has
implemented a music spam filter to identify fraudulent uploaders and
prevent their tracks from being recommended by its algorithm [9]. The
company also introduced a new rule in 2023 requiring a track to be streamed
more than 1,000 times before it generates a payment, a direct response to
the new micro-transaction fraud model [9].
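
A minimal sketch of such a threshold rule, using invented catalog data and
simplified logic in the spirit of the 1,000-stream minimum (not Spotify's
actual implementation), shows how it removes the long tail of barely
streamed uploads from the payable pool and raises the per-track bar a fraud
operation has to clear.

# Hypothetical sketch of a minimum-stream eligibility rule.
# Threshold handling and catalog data are invented for illustration.

MIN_STREAMS = 1_000

def payable_tracks(stream_counts, min_streams=MIN_STREAMS):
    """Keep only tracks whose stream count exceeds the minimum threshold."""
    return {track: n for track, n in stream_counts.items() if n > min_streams}

catalog = {
    "human_single": 250_000,     # a modest legitimate release
    "spam_track_0001": 400,      # mass-uploaded tracks, each streamed a little
    "spam_track_0002": 650,
    "spam_track_0003": 980,
}

print(payable_tracks(catalog))   # only 'human_single' remains payable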

While Spotify officially claims that engagement with AI-generated music is
"minimal" and does not have a "meaningful" impact on human artists' revenue
[9], its own countermeasures tell a different story. The removal of 75
million spam tracks and the necessity of changing royalty payment rules
demonstrate that the problem is substantial and is forcing the company to
adapt its core business practices [9]. This public stance of downplaying
the issue while simultaneously taking monumental action confirms that
AI-driven fraud is a significant and ongoing concern that threatens the
integrity of the streaming ecosystem.

The Popularity Paradox: Unpacking "Millions of Views"

The central premise of the question posed above—that AI music is popular—is a
verifiable fact. However, a deeper analysis reveals that this popularity is
not a measure of artistic achievement but a result of several
interconnected factors that subvert traditional notions of success.

The Algorithmic Advantage

AI music's high view counts are often a manufactured outcome of
sophisticated digital promotion. AI algorithms, which have long been a core
part of music promotion and discovery, are now being used to specifically
boost AI-generated content [14, 15]. These algorithms analyze vast amounts
of listener data—including song choices, play frequency, and search
history—to generate highly customized and personalized playlist
recommendations [14]. AI-generated tracks can be optimized for these
algorithms to increase their "popularity score," which helps them land on
influential playlists like "Discover Weekly" [16].
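
What such a "popularity score" might look like can be sketched with a
deliberately simple example; the signal names, weights, and formula below
are illustrative assumptions, not any platform's actual ranking algorithm.

# A toy "popularity score" computed from engagement signals.
# Signals, weights, and the formula are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class TrackSignals:
    plays: int          # total plays in the scoring window
    completions: int    # plays that reached the end of the track
    saves: int          # listeners who saved the track to a library or playlist
    skips: int          # plays abandoned early

def popularity_score(s: TrackSignals) -> float:
    """Combine engagement signals into a single ranking score."""
    completion_rate = s.completions / s.plays if s.plays else 0.0
    skip_rate = s.skips / s.plays if s.plays else 0.0
    return (0.5 * s.plays ** 0.5        # diminishing returns on raw volume
            + 2.0 * s.saves             # saves signal deliberate interest
            + 100.0 * completion_rate   # reward tracks people finish
            - 100.0 * skip_rate)        # penalize tracks people skip

print(popularity_score(TrackSignals(plays=10_000, completions=7_000,
                                    saves=350, skips=1_200)))   # 808.0

A track engineered to maximize signals like these can climb playlist
rankings regardless of whether anyone finds it artistically memorable.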

Additionally, some high-volume AI music channels on platforms like YouTube
gain millions of subscribers not through organic viral hits but by
leveraging paid promotion. These channels heavily spend on YouTube's
"Promote" feature, which places their videos in user recommendations as
advertisements, effectively paying for their audience and their high
subscriber counts [17]. The high view count is therefore a reflection of a
shrewd marketing strategy, not a spontaneous display of consumer affection.

The Utility of AI Music

Another significant driver of AI music's popularity is its utility. For a
new generation of content creators—from YouTubers to podcasters to video
game developers—AI music provides an accessible and affordable solution to
a major logistical problem: securing royalty-free soundtracks [18, 19].
Platforms like Soundful and Beatoven.ai specifically market their services
as a way to generate unique, royalty-free background music for videos,
livestreams, and games at the click of a button [18, 19, 20]. This
convenience and cost-effectiveness appeal directly to creators who want to
avoid copyright strikes and high licensing fees, thereby creating a new,
distinct market for AI-generated music [18, 19]. This shift threatens to
reduce predictable revenue streams for traditional stock music libraries
and human composers who create music for film and digital media [10].

The Consumer's Perspective: Is it "Good Enough"?

For many consumers, the origin of the music is irrelevant. A large portion
of the audience is a "silent majority of passive consumers" who have no
anti-AI bias and care more about the functional quality of the content than
how it was made [17]. A listener might be impressed by a song's quality and
want to subscribe to a channel without even realizing it's AI [17]. This is
particularly true for younger audiences, such as Gen Z, who value novelty,
remixability, and constant availability over the traditional artistry that
has defined music for decades [21].

The novelty factor is a key psychological driver of AI music's appeal.
Because AI can mix different styles and beats in unexpected ways, it
introduces an element of surprise that can trigger a dopamine release in
the listener's brain [22]. AI music also thrives in "functional listening
contexts," serving as background music for activities like studying,
gaming, or serving as a soundbed for TikTok videos where the primary focus
is not the music itself [21].

The high view counts of AI music, therefore, do not equate to a triumph of
artistic merit. Instead, they are a direct consequence of algorithmic
optimization, the demand for cheap and utilitarian background music, and a
new paradigm of consumer behavior where music serves a functional purpose
rather than an emotional or artistic one. The fact that a song is "popular"
on a technical level is no longer a guaranteed reflection of its creative
value.

Qualitative Analysis: Defining "Good Enough"

The central question of whether AI music is "good enough" for many
listeners requires a qualitative analysis that goes beyond stream counts. A
detailed examination of AI-generated music reveals both its technical
prowess and its fundamental limitations, which are often subconsciously
perceived by the listener.

Distinguishing Human from Machine

While AI music has become incredibly sophisticated, it still exhibits
certain characteristics that can signal its non-human origin. Listeners,
often without conscious effort, can detect a track's reliance on repetitive
loops, unnaturally smooth or abrupt transitions, or a lack of a coherent
"storytelling arc" with a satisfying emotional conclusion [23]. The lyrics,
in particular, are an easy giveaway. While an AI can generate rhyming
phrases, it struggles with emotional coherence and deeper meaning, often
producing lines that sound like they were pulled from a random quote
generator [23].

The most significant qualitative gap is emotional depth. A human musician
creates music based on personal experiences, emotions, and stories, imbuing
their work with a unique sense of authenticity and soul [24]. While AI can
replicate the technical elements of sound, it cannot replicate the lived
human experience. This often results in music that sounds "soulless" or
"mechanical" [21, 24].

The Listener's Verdict

Recent research reveals a fascinating disconnect between perception and
reality. A study on professional musicians found that while AI-generated
music was generally considered to be of lower quality, knowing the
composer's identity did not produce a meaningful difference in their
perception of the pieces [25]. Similarly, a 2025 study found that a
significant majority of listeners (82%) could not reliably tell whether a
song was created by a human or an AI in a blind test [21]. However, the
research also found a strong preference for music perceived to be human-made
[25]. The preference was "significantly higher" for music believed to be
composed by a human, even if the music was actually created by an AI [25].
This indicates that authenticity is a qualitative measure of its own,
separate from technical proficiency.

AI-generated music is entering the "uncanny valley" of sound, where it is
technically impressive and sounds "realistic" enough to fool many listeners
[26]. However, it lacks the subtle imperfections, emotional nuance, and
creative risk-taking that define great human art [24, 27]. The value of the
music is no longer an objective measure of the sound itself but a
subjective assessment tied to the notion of human creativity. A technically
flawless track may be devalued by an audience if they discover it was
created by a machine, raising the question of how human artists will prove
their work is "real" and therefore "valuable" [27].

AI's reliance on retrospective learning also creates a risk of creative
stagnation. Since AI models are trained on existing data, they are
inherently backward-looking. A heavy reliance on AI could lead to a
feedback loop where new art is merely a pastiche of old art, limiting the
diversity of the cultural soundscape and promoting a "sameness" in sound
that lacks bold, forward-thinking innovation [27, 28, 29].

AI-Generated vs. Human-Made Music: A Qualitative Comparison
*Compositional Style*
  AI-generated: often relies on loops; transitions can be too smooth or
  abrupt [23].
  Human-made: follows familiar storytelling arcs with a sense of build-up
  and emotional resolution [23].

*Lyrical Content*
  AI-generated: struggles with deeper meaning and emotional coherence; may
  sound like phrases from a random generator [23].
  Human-made: conveys authentic emotion and personal experience; tells a
  story [23, 24].

*Emotional Depth*
  AI-generated: lacks authentic emotion; can sound flat or mechanical [23, 24].
  Human-made: conveys a wide range of emotions and nuances; has "soul" and
  "flavor" [23, 24].

*Originality*
  AI-generated: recompositions of existing data; can lead to a stagnation
  of creativity [27].
  Human-made: breaks rules and takes creative risks; brings unique
  perspectives [24].

*Perceived Value*
  AI-generated: can be devalued when the creator is known to be AI [27].
  Human-made: perceived as more "authentic" and emotionally resonant [21, 25].

Historical Context: Lessons from Past Disruptions

The current debate surrounding AI is not a new phenomenon; it is the latest
iteration of a recurring pattern of technological disruption in the music
industry. By understanding how the industry navigated past challenges, it
is possible to chart a course for the future.

The Recurring Pattern of Resistance

Each major technological shift in the music industry has been met with
initial resistance, often centered on concerns about control, intellectual
property, and artistic authenticity. The radio revolution of the 1920s was
initially resisted by record labels who feared losing control of their
content, yet radio ultimately became a critical driver of record sales [23,
30]. The debate over whether synthesizers were "real instruments" and their
users "real musicians" in the 1980s mirrors the current discussion around
AI [31, 32]. This technology, once seen as a "cheat," went on to create
entirely new genres [31].

The sampling controversy of the 1980s and 1990s presents a particularly
striking parallel to the current AI training debate [23, 30]. The argument
that sampling was merely "learning from existing music" is a direct
precursor to the "fair use" claims made by AI companies today [8]. The
legal battles over sampling led to new frameworks and licensing models that
did not eradicate the technology but instead incorporated it into the
creative process [33, 34]. Similarly, the MP3 revolution and the rise of
piracy in the 1990s, which caused a dramatic decline in revenue [35],
forced the industry to completely transform its business model, leading to
the paid digital downloads and streaming services we use today [23].

History of Technological Disruption in the Music Industry
*1920s: Radio*
  Resistance: labels feared loss of distribution control [23].
  Impact and resolution: became a critical driver of sales and shaped
  public taste [23].

*1980s: Synthesizers*
  Resistance: users were not seen as "real musicians"; the technology was
  dismissed as a "cheat" [31, 32].
  Impact and resolution: created entirely new genres and became a core part
  of music production [31].

*1980s-90s: Sampling*
  Resistance: accusations of "artistic theft" and copyright infringement
  [23, 33].
  Impact and resolution: established new creative practices and legal
  frameworks; became a core part of genres like hip-hop [23, 34].

*1990s: MP3s and Piracy*
  Resistance: caused a dramatic decline in revenue and intense copyright
  debates [23, 35].
  Impact and resolution: forced the industry to transform its business
  model; led to paid digital downloads and streaming [23].

*2010s: Streaming Services*
  Resistance: sparked revenue debates and dissatisfaction with payout rates
  [23, 36].
  Impact and resolution: democratized access for artists; became the
  dominant revenue model [23].

While the historical pattern is clear, AI presents a unique and
unprecedented challenge. Previous disruptions were centered on new
distribution or creation tools. AI, however, is a technology that can
autonomously mimic the human creative process itself, challenging the very
definition of "authorship" and "creativity" [24, 27, 37]. The current
lawsuits are not an attempt to kill AI but to establish a new legal and
economic framework for its existence [13].

Looking Ahead: The Future of Human and AI Collaboration

The path forward for the music industry will likely involve a combination
of new legal standards, creative innovation, and a redefinition of the
human artist's role. The key is not to view AI as a replacement but as a
new tool to be mastered.

The Evolution of Copyright and Authorship

The legal landscape is still being defined, but the direction is becoming
clearer. The US Copyright Office has taken a firm position that, for now,
AI-generated works require "human authorship" to be eligible for copyright
protection [13]. This shifts the focus to what constitutes "sufficient"
human input. The ongoing legal battles, including the possibility of class
action lawsuits [13], will ultimately determine new rules for the use of
copyrighted works for AI training. This suggests that the industry's
response will not be about eradicating the technology but about
establishing new monetization strategies and compensation models for the
use of artists' intellectual property [10].

AI as a Creative Catalyst

AI's role extends far beyond the generation of "slop." A growing number of
professional artists are already using AI as a tool for co-composition,
sound design, and inspiration [38]. AI can be used to assist with mundane
tasks like mixing and mastering, allowing artists to focus on the human
elements of their work [5, 28]. It can also help break creative blocks by
generating random ideas and new melodic combinations that a human might not
have considered [5, 26, 28].

AI also enables entirely new forms of artistic expression. It allows for
the creation of "uncanny" new sounds and the ability to seamlessly
translate lyrics into multiple languages for a global audience [38].
Historic examples, such as The Beatles' use of AI to restore John Lennon's
voice for a new track, demonstrate that the technology can bring new life
to music that would have been impossible to create otherwise [38]. The most
forward-thinking artists are not fighting the technology; they are finding
ways to use it to retain their artistic agency and expand their creative
horizons [38].

Redefining Artistry and Livelihoods

For human artists, the future will involve a strategic adaptation to a new
technological landscape. This includes a necessary focus on diversifying
revenue streams beyond streaming royalties, emphasizing avenues that AI
cannot replicate, such as live performances, unique merchandise, and direct
fan engagement [35, 39]. Artists must also actively leverage the emotional
and historical value of their work, emphasizing the authenticity and human
artistry that AI-generated tracks cannot provide [10].

The most profound shift may be in the redefinition of the artist's role
itself. Instead of being displaced, the human artist may evolve into a
"curator" or "trainer" of their own AI models [38]. Artists are already
beginning to create datasets from their own music for AI to experiment
with, which allows them to retain control over their sound and likeness [38].
This model could lead to new forms of fan interaction, such as allowing
fans to create music using an artist's AI-modeled voice [38]. The debate
over whether a synthesizer is a "real instrument" provides a lens for the
future: the question is not whether AI is an instrument, but how human
guidance and intent can turn a machine into a collaborator rather than a
replacement. The most successful artists will be those who master this new
technology as a tool, much like their predecessors embraced electric
guitars and synthesizers, thereby ensuring that the future of music is not
just "human-powered," but "human-guided."

--------------------------