Deezer published numbers this week that let you see what the AI music deployment curve actually looks like at a major streaming platform, and the shape is not subtle. 75,000 AI-generated tracks land on the platform each day, up from 60,000 in January. That is 44% of all new uploads, up from 39%. Actual listener consumption of that flood is small: 1 to 3% of total streams. But what is being consumed is mostly fraud.
The detection side has become production infrastructure. Deezer reports that 85% of streams to AI-generated tracks are flagged as fraudulent and demonetized, which means organized stream-farm operations are using synthetic music as the vehicle for royalty fraud at scale. The platform now auto-removes 100%-AI tracks from algorithmic recommendations and editorial playlists, and the detection classifier itself has been productized: Deezer has licensed it commercially since January 2025 and expanded the rollout in March 2026 through its Deezer for Business unit. CEO Alexis Lanternier: "AI-generated music is now far from a marginal phenomenon and as daily deliveries keep increasing, we hope the whole music ecosystem will join us in taking action."
The asymmetry is the interesting part. Generation is cheap (75K/day is roughly one track per second of uploads), listening is tiny (1-3% of consumption), and fraud takes 85% of the tiny slice anyway. That means the attacker economics assume stream-farm bots will do the listening, and the content is optimized to pass fraud detection rather than to earn human ears. Deezer's response is two-layer — filter the content off algorithmic surfaces so it cannot get accidental human listens, and license the detection classifier to other platforms that do not want to build their own. The second half is the business model. When every streaming service has the same problem, detection becomes an inter-platform utility.
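The arithmetic behind that asymmetry is worth running. A back-of-envelope check on the published figures (the split of fraudulent vs. legitimate AI listening is derived from them, not something Deezer reported directly):

```python
# Figures from Deezer's announcement; the legit/fraud split below is derived.
ai_uploads_per_day = 75_000
ai_share_of_streams = (0.01, 0.03)  # 1-3% of total streams go to AI tracks
fraud_share_of_ai_streams = 0.85    # 85% of those streams flagged fraudulent

# Sanity-check the "roughly one track per second" claim.
uploads_per_second = ai_uploads_per_day / 86_400
print(f"AI uploads per second: {uploads_per_second:.2f}")  # ~0.87

# How much of ALL platform listening is fraudulent AI streaming,
# and how much is genuine human listening to AI tracks?
for share in ai_share_of_streams:
    fraud = share * fraud_share_of_ai_streams
    legit = share * (1 - fraud_share_of_ai_streams)
    print(f"AI at {share:.0%} of streams -> "
          f"fraudulent {fraud:.2%}, legitimate {legit:.2%}")
```

So even at the high end, genuine human listening to AI tracks is under half a percent of all streams, while fraudulent AI streams run roughly 0.85% to 2.55% of the total: the bot traffic outweighs the human audience by more than five to one.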
If you are building any content platform that accepts user uploads and pays for engagement, the Deezer numbers are the template to plan against. The baseline assumption should be that a plurality of new content will be AI-generated, that the economic pressure on your detection system is going to come from organized fraud rather than individual creators, and that the fraud-detection problem is separable from the "is this AI?" problem and probably more important. Deezer's decision to productize their classifier is a signal worth reading. AI-detection at streaming scale is a real standalone product category now, and if you are not Deezer, you may end up their customer rather than building your own.
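The separability argument can be made concrete. A minimal sketch of a moderation decision that keeps the two classifiers independent, mirroring the policy Deezer describes (AI detection gates algorithmic reach, fraud detection gates payout). All names and thresholds here are invented for illustration; Deezer has not published its architecture:

```python
from dataclasses import dataclass

@dataclass
class Track:
    ai_score: float     # hypothetical classifier: P(track is fully AI-generated)
    fraud_score: float  # hypothetical classifier: P(streams are bot-farmed)

def moderate(track: Track,
             ai_threshold: float = 0.9,
             fraud_threshold: float = 0.5) -> dict:
    """Two independent decisions: being AI costs you algorithmic reach,
    being fraudulent costs you the money. Thresholds are illustrative."""
    is_ai = track.ai_score >= ai_threshold
    is_fraud = track.fraud_score >= fraud_threshold
    return {
        "keep_on_platform": True,        # AI content is hosted, not deleted
        "algorithmic_reach": not is_ai,  # pulled from recs/playlists if AI
        "monetized": not is_fraud,       # demonetized only on a fraud signal
    }

# A human-made track with farmed streams loses money but keeps reach;
# an AI track with organic listeners keeps money but loses reach.
print(moderate(Track(ai_score=0.1, fraud_score=0.95)))
print(moderate(Track(ai_score=0.98, fraud_score=0.1)))
```

The point of the two separate thresholds is exactly the separability claim above: you can demonetize fraud without ever answering "is this AI?", and vice versa, so the two systems can be built, tuned, and bought independently.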
