TL;DR
Deezer said today that 44% of all songs uploaded to the platform are now AI-generated, about 75,000 tracks a day and more than 2 million a month. That share has more than quadrupled since January 2025, when the company first turned on its AI detection tool. Listener demand hasn’t followed: fully AI-generated tracks account for only 1–3% of total streams, and Deezer says 85% of those streams are fraudulent and already demonetized. The flood is real, but so far the platform is eating most of the cost.
Why this is today’s story
Every few months somebody restates the “AI music is coming” headline and attaches a vague number to it. Deezer’s update (TechCrunch, April 20, 2026) is different in two ways. First, the number isn’t a forecast — it’s the platform’s own measurement of tracks hitting its pipeline right now, derived from the AI-detection model Deezer built in-house. Second, the trend line is steep enough that projecting another year forward puts AI-generated uploads past the human ones on a single platform. That hasn’t happened in recorded music before.
The company has been reporting these figures quarterly since September 2025 (Deezer newsroom, September 2025). Stacked together, they map out how fast the shift is moving:
| Month | Daily AI uploads | Share of new uploads |
|---|---|---|
| January 2025 | ~10,000 | ~10% |
| September 2025 | ~30,000 | ~28% |
| November 2025 | ~50,000 | ~33% |
| January 2026 | ~60,000 | ~39% |
| April 2026 | ~75,000 | 44% |
The step from November to April is less than six months, and in that window the AI share of new uploads climbed eleven percentage points, against a baseline of human-made uploads that hasn’t slowed down either. Something is generating a lot of audio.
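The crossover the trend line implies can be sanity-checked with a few lines of arithmetic. The reported counts and shares come from the table above; the flat human-upload volume and the linear growth rate are assumptions for the sketch, not Deezer data:

```python
# Figures from Deezer's reported trend table; the projection itself is an assumption.
ai_daily = 75_000   # AI uploads per day, April 2026
ai_share = 0.44     # AI share of new uploads, April 2026

# Implied totals, assuming the count and the share describe the same period
total_daily = ai_daily / ai_share      # ~170,000 uploads/day overall
human_daily = total_daily - ai_daily   # ~95,000 human uploads/day

# Recent AI growth: ~50k/day (Nov 2025) -> 75k/day (Apr 2026), five months
ai_growth_per_month = (75_000 - 50_000) / 5  # +5,000 daily uploads per month

# Months until AI uploads pass human uploads, holding human volume flat
months_to_parity = (human_daily - ai_daily) / ai_growth_per_month
print(round(total_daily), round(human_daily), round(months_to_parity, 1))
```

Under these assumptions the crossover lands only a few months out; slower AI growth or rising human volume pushes it later, but still comfortably inside the one-year horizon the article projects.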
Why the uploads are real, even if the demand isn’t
Running an AI music pipeline in 2026 is trivial. Suno and Udio will produce a minute of plausible guitar-and-vocals for a few cents of compute. A Python script wrapping either API, plus an aggregator account like DistroKid or TuneCore, turns that into a searchable catalog on every major streamer. The cost structure of uploading has collapsed; the cost structure of listening hasn’t.
That mismatch explains why Deezer’s consumption number is so small. Fully AI-generated tracks make up only 1–3% of streams despite representing almost half of catalog growth. People aren’t choosing this music; it gets played because it lands in long-tail playlists, ambient radio loops, and stream-farm rotations.
Deezer’s second number is about the farming. Of the streams that AI tracks do earn, the company says 85% are detected as fraudulent and demonetized. That ratio is specific to AI-generated content.
The bulk of AI-music playback Deezer sees is bots listening to bot-made music. The pipeline looks like this:
- Someone points Suno or Udio at a prompt loop.
- The output gets uploaded via an aggregator.
- Stream farms run bought listens against those tracks.
- Royalties get diverted from the per-stream pool that would otherwise go to human artists.
The economics only work while the fraud detection is imperfect. Deezer has been pushing that threshold down since the detection tool went live in January 2025.
How Deezer knows
The detection side of this story is why the numbers exist at all. Deezer started tagging AI-generated tracks at the catalog level in June 2025, which made it the first major streaming platform to publish a metadata label on a per-track basis. The classifier it uses was trained on known AI-generation model outputs (Suno, Udio, and several open-weight models) and runs over every new upload before the track goes live.
Through 2025 Deezer tagged more than 13.4 million AI tracks. The company started licensing the detection tool to outside customers in January 2026 (Deezer newsroom, January 2026), and by the end of March the Hungarian collecting society EJI became the first rights-management organization to use it for royalty-distribution decisions. The tagging data is the basis for the demonetization of fraudulent AI streams: a track flagged as AI-generated AND hit by suspicious listening patterns gets routed off the royalty-share calculation.
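The routing rule described above (flagged as AI-generated AND hit by suspicious listening) reduces to a simple filter. The `Stream` shape and field names below are hypothetical; Deezer has not published its internal data model, so this is only a sketch of the stated logic:

```python
from dataclasses import dataclass

@dataclass
class Stream:
    track_id: str
    ai_flagged: bool   # classifier tagged the track as fully AI-generated
    suspicious: bool   # listening pattern flagged by fraud detection

def monetizable(streams):
    # Demonetize only when BOTH flags are set, mirroring the rule in the text;
    # in this sketch, either flag alone leaves the stream in the royalty pool.
    return [s for s in streams if not (s.ai_flagged and s.suspicious)]

streams = [
    Stream("a", ai_flagged=True, suspicious=True),    # routed off the pool
    Stream("b", ai_flagged=True, suspicious=False),   # AI but organic: still paid
    Stream("c", ai_flagged=False, suspicious=True),   # human track; handled by a separate fraud path
]
print(len(monetizable(streams)))
```

The AND is the interesting design choice: an AI tag alone doesn’t cost anyone money, and an organic listen to an AI track still pays out, which is what lets Deezer host labeled AI music while cutting off the farms.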
There’s a catch in the detection part worth naming. An Ipsos survey Deezer ran in late 2025 found 97% of listeners couldn’t reliably distinguish fully AI-generated tracks from human-made ones. The classifier is doing all the defensive work here. If it ever starts producing false negatives at scale, the fraud pipeline becomes invisible.
What this does to the royalty pool
Streaming royalties are a fixed pie. Spotify, Apple Music, and Deezer all operate on a pro-rata model: the subscription revenue for a given month gets pooled, then divided based on each track’s share of total streams. Every stream dilutes the per-stream payout slightly. If a few million bot-listens against AI-generated tracks get monetized before detection catches them, those royalties come directly out of what human artists would have earned.
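The dilution is easy to make concrete. The pool size and stream counts below are invented round numbers, not Deezer figures; only the pro-rata mechanism itself comes from the text:

```python
# Toy pro-rata pool; all numbers are assumed, not platform data.
pool = 1_000_000.0            # one month's subscription revenue in the pool
human_streams = 100_000_000   # organic streams that month
bot_streams = 5_000_000       # bot streams against AI tracks, if undetected

per_stream_clean = pool / human_streams                     # payout with no fraud
per_stream_diluted = pool / (human_streams + bot_streams)   # payout with fraud in the pool

diverted = bot_streams * per_stream_diluted  # royalties paid to the fraud pipeline
print(round(per_stream_clean, 6), round(per_stream_diluted, 6), round(diverted, 2))
```

Every monetized bot stream is paid at the diluted rate, and that money comes straight out of the human side of a fixed pool; that is exactly what the 85% demonetization rate is defending.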
Deezer’s 85% fraud-detection number is what keeps that math under control on its platform. The unanswered question is what happens on platforms without a production detection model. Spotify has talked publicly about AI-content transparency as a goal but, as of today, has not published a directly comparable upload figure. Apple Music, Amazon Music, and YouTube Music have not published one either. If the Deezer trend is representative (and the aggregator pipelines feeding every service are the same, so there is little reason to expect otherwise), then somewhere between a quarter and half of new uploads across every streaming service are now synthetic, and most platforms don’t publicly flag any of them.
What changes from here
A few things follow from Deezer’s numbers if they keep moving at this rate.
Per-song detection becomes a baseline platform feature rather than a differentiator. Deezer’s head start on the classifier is why it can report today’s number; competitors will need their own, and sooner rather than later, because regulators are already watching. The EU AI Act’s transparency obligations for general-purpose AI output have been in force since August 2025, with high-risk provisions scheduled for August 2026, and detection-then-labeling is the easiest compliance story a streamer can tell.
The business model for pure-AI music catalogs also breaks if the fraud route gets closed. Nobody uploads 75,000 tracks a day expecting organic listens to cover the compute and aggregator fees. If platforms harden their fraud detection in parallel, the upload number should start to flatten. Watch the next quarterly Deezer update for that signal.
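A back-of-envelope cost check shows why organic listening alone can’t fund this volume. Only the 75,000/day figure comes from Deezer; the per-track cost is an assumption pegged to the “few cents of compute” mentioned earlier:

```python
# All inputs assumed except the upload count, which is Deezer's reported figure.
tracks_per_day = 75_000
cost_per_track = 0.05    # assumed: a few cents of generation compute per track

daily_cost = tracks_per_day * cost_per_track   # dollars/day across all uploaders
monthly_cost = daily_cost * 30
print(daily_cost, monthly_cost)
```

Call it a few thousand dollars a day, six figures a month, across the whole upload flood. With AI tracks earning 1–3% of streams and 85% of those demonetized, fraud payouts are the only revenue stream plausibly large enough to cover it.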
Human artists get loud about this next. The Recording Academy, major labels, and artist-advocacy groups have spent two years pushing for AI disclosure rules; the rights-vs-AI fight is the same one that killed OpenAI’s Sora–Disney partnership last month, only in audio rather than video. Deezer’s number gives them a concrete headline to campaign on: at 75,000 uploads a day, every major-label release has to compete, in the catalog, with roughly fifty synthetic tracks arriving every minute. Opinion columns will pick that up fast, and congressional hearings will follow.
FAQ
What is Deezer counting as “AI-generated”?
Tracks where the audio itself was produced by a generative model, typically Suno, Udio, or an open-weight music model. Deezer’s classifier works on the audio signal itself rather than relying on uploader metadata, so a human-written song with an AI-generated instrumental would still count. Instrument-level use of AI tools in a human-produced track generally does not.
Does the 44% number include human artists using AI tools for parts of a track?
No. Deezer is reporting “fully AI-generated” content, where the core of the audio came out of a generative model. Partial AI-assisted production (vocal cleanup, stem separation, mastering) isn’t in the number.
How does Deezer detect AI music?
A proprietary classifier trained on known AI-model outputs. Deezer hasn’t published the architecture, but the company said the model is updated as new generators launch. Detection runs on every uploaded track before it goes live on the platform.
Why is only 1–3% of streams coming from AI content if 44% of uploads are AI?
Because listeners aren’t choosing it. The uploads are dominated by bulk catalog-fill and fraud-pipeline tracks that would never get human discovery. Most of the plays AI tracks do get come from stream farms running bots against them, and Deezer flags 85% of those as fraudulent.
Are artists losing money because of this?
Deezer says its demonetization of fraudulent AI streams protects the royalty pool on its platform. On streamers without active detection, the answer is almost certainly yes, though nobody has published a number.
Sources
- TechCrunch — Deezer says 44% of songs uploaded to its platform daily are AI-generated — primary news coverage published today
- Music Business Worldwide — 75,000 AI-generated tracks now flood Deezer daily — industry reporting with additional context
- Deezer newsroom — 28% of all delivered music is fully AI-generated (Sept 2025) — prior quarterly report, used for the trend table
- Deezer newsroom — licensing AI detection tool externally (Jan 2026) — official announcement of the detection-tool licensing program
- Deezer + Ipsos survey — 97% can’t tell AI music from human (Nov 2025) — listener-perception study
- Engadget — AI-made songs make up 44 percent of daily uploads — independent corroboration
Bottom line
The 44% headline is going to stick around for a few days, and it should. Deezer is the first streamer with both the classifier and the willingness to publish real numbers, so this is the first time anyone has had to stare at how much of the catalog flood is synthetic. Look past it though, and the 85% fraud-demonetization figure is the one worth watching. It suggests the fraud side of the equation is what actually pays for all this upload volume, and that a more aggressive detection regime across the industry would collapse the economics of running a bulk AI-track factory almost overnight. Without equivalent disclosures from Spotify, Apple Music, Amazon Music, and YouTube Music, the rest of the catalog remains unmeasured.