Google's Lyria 3 Pro just jumped from generating 30-second audio clips to full three-minute songs, putting it in direct competition with Suno and Udio. The new model can create structured tracks with intros, choruses, and bridges based on text prompts, reference photos, or videos. More importantly, Google is embedding Lyria across its entire ecosystem — from Gemini chat to Vertex AI for enterprise customers to Google Vids for office workers.

This isn't just about longer songs. It's Google's play to own AI-generated content creation from consumer to enterprise. While Suno and Udio built standalone music generation tools, Google is making AI music creation a feature across every product it touches. The Vertex AI integration means enterprises can now build music generation directly into their applications without dealing with third-party APIs. That's the real competitive moat here.

Google is attempting to address the obvious copyright concerns with SynthID watermarking and content checking against existing material, claiming Lyria "takes [artist names] as broad inspiration" rather than mimicking specific artists. That's corporate-speak for "we're definitely training on copyrighted material but trying not to reproduce it exactly." And anyone who's used these tools knows you can get pretty close to recognizable styles with the right prompts.

For developers, the Vertex AI integration is significant — you can now add music generation to applications without managing separate API relationships. But the three-minute limit still makes this more of a demo tool than production music software. Real musicians need longer compositions, better control, and higher-quality audio. Google is playing catch-up on features while banking on distribution advantages.
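To make the integration point concrete, here's a minimal sketch of what calling a music model through Vertex AI's generic `:predict` REST endpoint looks like. The model id (`lyria-002`), instance fields, and parameters below are assumptions for illustration — check the Vertex AI Model Garden documentation for the actual request schema before building on this.

```python
# Sketch: constructing a Vertex AI :predict request for a music model.
# Model id and payload fields are ASSUMED for illustration, not confirmed
# against Google's published schema.
import json

PROJECT = "my-project"    # hypothetical GCP project id
LOCATION = "us-central1"
MODEL = "lyria-002"       # assumed model id

def build_predict_request(prompt: str, negative_prompt: str = "") -> tuple[str, str]:
    """Return (endpoint URL, JSON body) for a Vertex AI :predict call."""
    url = (
        f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/"
        f"{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}:predict"
    )
    body = json.dumps({
        "instances": [{"prompt": prompt, "negative_prompt": negative_prompt}],
        "parameters": {"sample_count": 1},
    })
    return url, body

url, body = build_predict_request("upbeat synthwave with a driving bassline")
```

The upside for enterprises is that this is the same authenticated endpoint shape they already use for Gemini and Imagen on Vertex AI — one billing relationship, one auth flow — rather than a separate vendor API.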