Deezer 75000 AI Tracks Per Day Streaming Fraud: 44% of Uploads, 85% Are Bots
Will Lisil

The Numbers That Shocked the Industry
On April 20, 2026, Deezer published a data update that landed like a bombshell across the music business. The French streaming platform revealed it now receives nearly 75,000 fully AI-generated tracks every single day, representing 44% of all daily uploads. That is more than 2 million AI-generated tracks per month, and a 650% climb from 10,000 daily AI uploads in January 2025. The Deezer 75000 AI tracks per day streaming fraud data marks the most dramatic escalation yet in a crisis that is reshaping how the music industry thinks about AI, royalties, and platform responsibility.
But the headline figure was not the AI upload volume. The real shock came from what Deezer found when it looked at who was actually listening to those tracks. 85% of all streams detected on AI-generated music were fraudulent — played by automated bots rather than human ears, and designed to siphon royalty payments away from legitimate artists. The money flows directly into the pockets of fraud operators. "Generating fake streams continues to be the main purpose for uploading AI-generated music," Thibault Roucou, Deezer's director of royalties and reporting, told Music Week in the investigation published on April 22.
Deezer's data provides the most detailed public picture of an industry crisis that has escalated from concern to emergency. The platform's AI detection tool, launched in January 2025 and continuously improved, has now detected and tagged more than 13.4 million AI-generated tracks. These tracks are removed from algorithmic recommendations and editorial playlists, and their fraudulent streams are demonetised. Despite the upload deluge, AI-generated music still accounts for only 1-3% of total streams on Deezer — because the platform's countermeasures are working. The problem is not that people are listening to AI music in large numbers. The problem is that fraudsters are uploading it at industrial scale to game the royalty system. The Deezer 75000 AI tracks per day streaming fraud revelation is the most comprehensive public data the industry has ever had on the intersection of AI-generated music and royalty manipulation.
The 650% Explosion — How We Got to 75,000 Per Day
The growth trajectory tells its own story. In January 2025, Deezer first disclosed that roughly 10,000 AI-generated tracks were hitting its platform daily, accounting for about 10% of uploads. By April 2025, that number had doubled to 20,000. By September 2025, it hit 30,000 daily — 28% of uploads. January 2026: 60,000 daily, 39% of uploads. And now, April 2026: 75,000 AI tracks per day, 44% of everything uploaded to the platform.
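The arithmetic behind the headline figures is easy to verify. A quick sketch, using the milestone numbers reported above (the month-to-uploads mapping is taken from the article; the 30-day month is an approximation):

```python
# Daily AI-generated uploads to Deezer at each disclosed milestone.
milestones = {
    "Jan 2025": 10_000,
    "Apr 2025": 20_000,
    "Sep 2025": 30_000,
    "Jan 2026": 60_000,
    "Apr 2026": 75_000,
}

baseline = milestones["Jan 2025"]
latest = milestones["Apr 2026"]

# Percentage growth from the first disclosure to the latest one.
growth_pct = (latest - baseline) / baseline * 100
print(f"Growth since Jan 2025: {growth_pct:.0f}%")  # Growth since Jan 2025: 650%

# Approximate monthly volume at the current daily rate (30-day month assumed).
monthly_uploads = latest * 30
print(f"~{monthly_uploads:,} uploads per month")  # ~2,250,000 uploads per month
```

Both results line up with the article's claims: a 650% increase, and well over 2 million AI-generated tracks per month.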
This acceleration is powered by two forces working in tandem. The first is the increasing accessibility and quality of AI music generation tools like Suno and Udio, which let anyone produce complete, listenable tracks with text prompts. Deezer's chief innovation officer Aurélien Hérault told Billboard that part of the growth reflects improved detection. "Our data got better," he said. But the other part is real: more people are using these tools to generate music at scale. The second force is the economic incentive. A single fraud operator can upload thousands of tracks and run them through bot networks that produce millions of streams per day. The potential payout — drawn from the same royalty pool that funds real artists — is substantial.
Streaming fraud across Deezer's entire catalogue accounted for 8% of all streams in 2025. But on AI-generated content specifically, the fraud rate is 85%. The distinction matters. AI music is not inherently fraudulent. It is disproportionately used for fraud. The reason is simple: it can be produced at near-zero marginal cost and at volume. That makes it the perfect feedstock for the bot-streaming economy.
The $8 Million Conviction — Justice Department Lands First AI Streaming Fraud Prosecution
On March 19, 2026, the U.S. Department of Justice secured a guilty plea in what it described as the first criminal prosecution of AI-based streaming fraud in the United States. Michael Smith, a 54-year-old from Cornelius, North Carolina, admitted to conspiracy to commit wire fraud. Prosecutors showed he used artificial intelligence to generate hundreds of thousands of songs, then used automated bot programs to stream them billions of times on Amazon Music, Apple Music, Spotify, and YouTube Music.
The scheme ran from 2017 through 2024. At its peak, Smith was generating about 661,440 streams per day through a network of bot accounts — producing over $1.2 million in annual fraudulent royalties. In total, Smith siphoned more than $8 million from the royalty pool that was meant for genuine artists and songwriters. To avoid detection, he spread his automated streams across thousands of different tracks rather than concentrating on a single song. For years, streaming platforms' fraud-detection algorithms were not equipped to catch this tactic at scale.
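A rough sanity check shows how the reported daily stream count maps to the annual payout. The per-stream royalty rate below is a hypothetical industry-ballpark figure chosen for illustration, not a number from the court filings:

```python
# Scale check on the figures reported in the Smith case.
streams_per_day = 661_440  # peak daily bot streams, per the DOJ
assumed_rate = 0.005       # ASSUMED average royalty per stream in USD (illustrative)

annual_streams = streams_per_day * 365
annual_royalties = annual_streams * assumed_rate

print(f"{annual_streams:,} streams per year")      # 241,425,600 streams per year
print(f"~${annual_royalties:,.0f} in royalties")   # ~$1,207,128 in royalties
```

At roughly half a cent per stream, 661,440 daily streams lands almost exactly on the "over $1.2 million per year" the prosecutors described, which suggests the reported numbers are internally consistent.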
"Michael Smith generated thousands of fake songs using artificial intelligence and then streamed those fake songs billions of times," U.S. Attorney Jay Clayton said in the DOJ's press release. "Although the songs and listeners were fake, the millions of dollars Smith stole was real. Millions of dollars in royalties that Smith diverted from real, deserving artists." Smith agreed to forfeit over $8 million. He faces a maximum sentence of five years, with sentencing set for July 2026.
The case established a critical precedent. Using AI to manufacture music for the purpose of defrauding streaming royalty systems is now a federal crime. It also revealed how industrial-scale streaming fraud operates. The mechanics are now clear. Cloud-based bot accounts. Distributed stream patterns. AI as the raw material generator. And a royalty system that pays out before confirming whether anyone actually wanted to hear the music.
The Global Enforcement Picture — Brazil, IFPI, and Operation Authêntica
While the U.S. case targeted an individual operator, the global enforcement effort is increasingly structural. In Brazil, Operation Authêntica has now secured three court orders shutting down streaming fraud services. The task force, launched in 2023 and led by CyberGaeco and the Consumer Protection Prosecutor's Office of São Paulo, is supported by IFPI and Pro-Música Brasil.
The latest ruling, delivered in April 2026, permanently blocked the domain of Boom de Seguidores, a service that openly sold artificial plays on Spotify, SoundCloud, and YouTube Music. It also sold fake likes, followers, and comments on social media platforms. The São Paulo court confirmed that selling artificial engagement services constitutes misleading advertising and is illegal under Brazilian law. Two previous rulings had already shut down similar operations: Seguidores in July 2025 and Turbine Digital in October 2025.
"Under Operation Authêntica, the courts have consistently confirmed that services that enable streaming fraud mislead consumers and are unlawful," said Melissa Morgia, IFPI's global chief content protection officer. "This illicit business model commercialises fraud and, in the music context, ultimately diverts royalties from legitimate creators."
The significance of Brazil as a battleground is not incidental. Brazil is now the eighth-largest recorded music market globally, and its rapid digital growth has made it a prime target for fraud operators. The Brazilian approach treats the sale of fake streams as illegal commercial activity, a significant shift from treating it as a mere terms-of-service violation. It parallels similar enforcement actions in the US, and other jurisdictions are watching closely.
How Platforms Are Fighting Back — Detection, Demonetisation, and Deterrence
Deezer's approach combines three layers: detection, removal, and demonetisation. Its patent-pending AI detection tool identifies tracks generated by models including Suno and Udio, with the capability to add detection for any similar tool. Once detected, AI tracks are tagged explicitly (Deezer remains the only major streaming platform to label AI-generated music transparently), excluded from algorithmic recommendations and editorial playlists, and have their fraudulent streams demonetised. Deezer has also stopped storing hi-res versions of AI tracks and has made its detection technology available for licensing to other platforms and industry partners.
The pattern matches the first criminal streaming fraud conviction, in which fake streams diverted millions from real artists, and it connects directly to the broader crisis of AI-generated music draining the royalty pool that independent artists depend on: 44% of new tracks are synthetic, and 85% of their streams come from bots.
Apple Music, meanwhile, took a different but complementary approach. VP Oliver Schusser confirmed to The Hollywood Reporter that Apple Music demonetised up to 2 billion fraudulent streams in 2025, representing about $17 million in royalties that would otherwise have been taken from legitimate artists. Apple's fraud rate sits at less than 0.5% of total streams, a testament to validation systems that Schusser described as checking "every single play on Apple Music." In February 2026, Apple doubled the financial penalties for distributors whose catalogues generate fraudulent streams, raising fines from 5-25% of revenues to 10-50%.
Spotify, the market leader, removed 75 million AI-generated "spam" tracks in a single crackdown in September 2025 and has since implemented disclosure requirements for AI-generated music on the platform. Yet unlike Deezer, Spotify has not published detailed fraud-rate data, leaving the industry to rely on Deezer's figures as the most transparent benchmark. Bandcamp took the most drastic step: in January 2026, it banned AI-generated music entirely from its platform.
The platform-level crackdowns on streaming fraud echo the broader legal scrutiny now unfolding. From the Texas payola investigation into streaming platforms to criminal convictions for AI-powered royalty theft, regulators are no longer treating manipulation as a terms-of-service issue.
The challenge is clear. The economic incentive to commit fraud remains powerful. A CISAC and PMP Strategy study projects that nearly 25% of music creators' revenues are at risk by 2028, potentially amounting to €4 billion. When the fraud pays better than the music, the fraud will keep coming. The Deezer 75000 AI tracks per day streaming fraud data makes one thing clear: the scale of the problem far exceeds what any single platform can solve alone.
Why This Matters for Artists — and What Comes Next
To understand why streaming fraud is an existential threat to working musicians, you have to understand how the money flows. Streaming services collect subscription fees and advertising revenue into a single pool each month. That pool is then divided proportionally: if your music accounts for 0.01% of total streams, you receive 0.01% of the pool (minus the platform's cut).
This is a zero-sum game. Every stream generated by a bot on a fraudulent AI track takes a share of the pool. That share would otherwise have gone to a real artist with real listeners. The 2 billion fraudulent streams Apple Music demonetised in 2025 represent streams that, had they gone undetected, would have extracted an estimated $17 million — money that belongs to creators, not criminals. Deezer's 85% fraud rate on AI music tells a clear story. For every 100 streams on an AI-generated track, 85 are not human. And while Deezer demonetises those streams, the sheer volume of uploads — 75,000 tracks every day — tells you the effort is relentless.
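The pro-rata mechanics described above can be sketched in a few lines. All numbers here are illustrative, not figures from any platform; the point is the structure: the pool is fixed, so bot streams shrink every legitimate artist's share.

```python
# Minimal sketch of the pro-rata ("big pool") royalty model.
# Illustrative numbers only; no platform's actual figures.
POOL = 1_000_000.0  # monthly royalty pool in USD (hypothetical)

def payout(artist_streams: int, total_streams: int, pool: float = POOL) -> float:
    """Pro-rata share: the artist's fraction of all streams times the pool."""
    return pool * artist_streams / total_streams

honest_total = 200_000_000  # streams from real listeners (hypothetical)
artist_streams = 20_000     # one artist's real streams (hypothetical)

before = payout(artist_streams, honest_total)

# A fraud operator injects bot streams. The pool does not grow,
# so the same real streams now earn a smaller slice.
bot_streams = 2_000_000
after = payout(artist_streams, honest_total + bot_streams)

print(f"Before fraud: ${before:.2f}")  # Before fraud: $100.00
print(f"After fraud:  ${after:.2f}")   # After fraud:  $99.01
```

The artist's listeners did not change; only the denominator did. Multiply that small per-artist loss across every artist on a platform and the diverted total is exactly the pool share the bots captured.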
For independent artists who earn their living from streaming, this is not an abstract concern. As the Texas payola investigation has shown, manipulation of streaming platforms takes many forms. When even 1% of total streams are fraudulent — and the actual figure may be higher on platforms without Deezer-level detection — that is 1% less income for every legitimate artist. For an artist earning $30,000 a year from streaming, that's $300 gone. Scale it up across hundreds of thousands of artists, and the collective loss is staggering.
At TipTop.music, we take a different view on AI music from most platforms, and a sharply different view on fraud. We believe AI-generated music deserves the same respect as any other creation method — what matters is the creative intent behind it, not which tools were used. AI musicians, human musicians, sample-based producers, and hybrid creators all put work into composition, arrangement, and production. The real problem is not how music is made; it is whether the people playing it are real. Fraud — bots, fake accounts, automated streaming farms — is theft regardless of what kind of music it targets. That is why TipTop's model makes fraud economically impossible: every single play costs 1 cent. Every tip goes directly to the artist. There is no way to artificially inflate play counts without paying real money into the system. No bots can tip; no fake account can generate revenue without spending real credits.
The streaming fraud crisis is now at an inflection point: the industry has moved from denial to acknowledgment, and from acknowledgment to action. But the stakes could not be higher, and the scale of the problem demands structural change, not just better detection.
Several developments are worth watching. First, detection technology as shared infrastructure: Deezer's decision to license its AI detection tool could create an industry-standard detection layer, much as Content ID became the standard for copyright management on YouTube. Widespread adoption would make it harder for fraud operators to hop between platforms with weaker detection. Second, legal enforcement as deterrent: the Michael Smith conviction and Brazil's Operation Authêntica rulings establish that streaming fraud carries real legal consequences — not just account termination. Third, penalty structures that change behaviour: Apple Music's doubled fines for distributors mean the companies that act as the pipeline for fraudulent uploads now have a financial incentive to police their catalogues.
But the most powerful solution may also be the simplest: a royalty model where fraud cannot pay. When every play has a verifiable cost attached — when there is no way to generate millions of streams without spending money — the economics of fraud collapse. This is not a theoretical idea. It is the model TipTop.music runs on today: 67% of every tip goes directly to the artist, plays are verified by actual payment, and bot networks have no way to participate. The streaming fraud crisis proves that the current pro-rata royalty model is vulnerable: all the subscription money goes into one pot and is divided by total plays, which makes manipulation at scale possible. A per-play cost changes the arithmetic entirely.
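To see why a per-play cost collapses the fraud economics, compare the two cash flows. The sketch below uses the two figures the article states (1 cent per play, 67% to the artist); the fraud scenario assumes the fraudster bots their own track, which is the only way botting could recover any money in this model:

```python
# Pay-per-play model along the lines described above.
PRICE_PER_PLAY = 0.01  # USD per play (per the article)
ARTIST_SHARE = 0.67    # fraction of each tip reaching the artist (per the article)

def artist_income(plays: int) -> float:
    """What the artist receives for a given number of paid plays."""
    return plays * PRICE_PER_PLAY * ARTIST_SHARE

def fraud_profit(bot_plays: int) -> float:
    """A fraudster botting their own track pays the full price per play
    but only gets the artist share back: a guaranteed loss."""
    return artist_income(bot_plays) - bot_plays * PRICE_PER_PLAY

print(f"10,000 real plays pay the artist ${artist_income(10_000):.2f}")
print(f"10,000 bot plays net the fraudster ${fraud_profit(10_000):.2f}")
```

With 10,000 real plays the artist earns $67.00; with 10,000 bot plays the fraudster loses $33.00, since every fake play must be funded with real money first. In the pro-rata model, by contrast, bot streams cost the fraudster almost nothing and pay out from other people's subscriptions.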
Deezer's data on the Deezer 75000 AI tracks per day streaming fraud crisis has given the industry the clearest picture yet of what it is up against. The numbers: 75,000 AI tracks per day, 44% of all uploads, 85% fraud rate, $8 million stolen in a single conviction, and a projected €4 billion at risk by 2028. The question now is whether the industry will settle for better detection — or build a system where fraud is never worth attempting in the first place.
Support Real Music — Every Play Is a Tip
Streaming fraud hurts every artist who earns a living from real listeners. At TipTop.music, every play costs 1 cent and 67% goes directly to the artist — no bot can manipulate the system because every play is a verified tip. Want to be part of a platform where artists get paid for every real listener?
Frequently asked questions
How many AI-generated tracks does Deezer receive daily?
Deezer now receives nearly 75,000 fully AI-generated tracks per day as of April 2026, representing 44% of all daily uploads to the platform. That amounts to more than 2 million AI-generated tracks uploaded per month.
What percentage of AI track streams are fraudulent?
According to Deezer data published in Music Week and the Deezer Newsroom, up to 85% of streams detected on AI-generated tracks are fraudulent — meaning they are played by bots rather than real human listeners. These fraudulent streams are demonetised by Deezer before royalties are paid.
What was the Michael Smith streaming fraud case?
Michael Smith, a 54-year-old from North Carolina, pleaded guilty in March 2026 to conspiracy to commit wire fraud for operating a scheme that used AI to generate hundreds of thousands of songs and automated bots to stream them billions of times across multiple platforms. The scheme netted over $8 million in fraudulent royalties over several years.
How is the music industry fighting streaming fraud?
Deezer has made its patent-pending AI detection tool available for licensing, Apple Music demonetised 2 billion fraudulent streams in 2025 and doubled penalties for distributors whose tracks are caught, and IFPI's Operation Authêntica in Brazil has secured three court orders shutting down streaming fraud services.
How does streaming fraud hurt real artists?
Streaming services pay royalties from a fixed pool of subscription and advertising revenue, distributed proportionally by total streams. When fraudsters use bots to inflate play counts on AI-generated tracks, they divert money from that pool — meaning less income for legitimate artists whose music was streamed by real listeners.