Kintsugi, a California startup that spent seven years developing AI to detect depression and anxiety from speech patterns, is shutting down after failing to secure FDA clearance through the De Novo pathway. The company analyzed how people spoke rather than what they said, claiming its models could detect subtle mental health indicators that humans might miss. CEO Grace Chang said much of the company's time was spent "teaching the regulator about AI," a telling admission of how unprepared the FDA remains for AI medical devices.
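
To make the approach concrete: speech-based screening systems generally extract acoustic features (spectral shape, energy, noisiness) from a recording and feed summary statistics to a classifier. Here's a minimal sketch of that pattern, assuming generic librosa features and a logistic-regression stand-in; it is not Kintsugi's actual pipeline, and every feature and modeling choice below is an assumption for the sake of illustration.

```python
# Purely illustrative: a generic "how it's said, not what's said" pipeline,
# NOT Kintsugi's actual model. Feature and model choices are assumptions.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def acoustic_features(waveform, sr):
    """Summarize prosodic/spectral properties of a speech clip,
    ignoring the words themselves."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=13)  # spectral shape
    zcr = librosa.feature.zero_crossing_rate(waveform)         # noisiness proxy
    rms = librosa.feature.rms(y=waveform)                      # loudness
    # Collapse each time series to fixed-size summary statistics.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        np.array([zcr.mean(), zcr.std(), rms.mean(), rms.std()]),
    ])

# Toy training loop on synthetic noise (a stand-in for labeled clinical audio).
rng = np.random.default_rng(0)
sr = 16_000
X = np.stack([acoustic_features(rng.normal(size=2 * sr).astype(np.float32), sr)
              for _ in range(40)])
y = rng.integers(0, 2, size=40)  # 1 = screened positive, 0 = negative
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(positive) for one clip:", clf.predict_proba(X[:1])[0, 1])
```

Production systems layer far more on top (speaker normalization, clinically labeled training data, calibration against established instruments), which is a large part of why clinical validation takes years.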

This failure highlights a fundamental mismatch between AI development timelines and medical device regulation. The De Novo pathway was designed for novel, low-risk devices, but its framework assumes traditional hardware, not machine learning models whose behavior can shift with each round of retraining. While Kintsugi's peer-reviewed research showed results comparable to standard depression screening tools like the PHQ-9, "comparable" isn't enough when you're asking clinicians to replace established methods with black-box algorithms.
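
It's worth being concrete about what that baseline is. The PHQ-9 is a nine-item questionnaire: each answer scores 0 to 3, the total runs 0 to 27, and standard published cutoffs map the total to a severity band. A minimal sketch (the cutoffs are the established ones from Kroenke et al., 2001; the code itself is just illustrative):

```python
# Minimal sketch of PHQ-9 scoring. Severity cutoffs follow the standard
# published bands (Kroenke et al., 2001); the code is illustrative only.
def phq9_score(answers):
    """answers: nine responses, each 0-3 ('not at all' .. 'nearly every day')."""
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 requires nine responses scored 0-3")
    total = sum(answers)
    bands = [(4, "minimal"), (9, "mild"), (14, "moderate"),
             (19, "moderately severe"), (27, "severe")]
    severity = next(label for cutoff, label in bands if total <= cutoff)
    return total, severity

print(phq9_score([1, 2, 1, 0, 3, 2, 1, 0, 2]))  # (12, 'moderate')
```

A clinician can audit every point of that score by hand. That transparency is the implicit bar a black-box speech model has to clear, not just statistical equivalence.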

The regulatory gap extends beyond depression detection. Clinical evidence requirements for AI diagnostic tools demand "robust clinical or performance evaluations using representative data," a bar high enough that over 90% of AI medical models never reach clinical practice. Meanwhile, unregulated AI chatbots are already causing documented psychological harm: the Guardian reported cases of AI-induced delusions leading to hospitalization and suicide attempts. And the incumbent is hard to displace on practicality alone: paper-based screeners like the PHQ-9 are administered daily in clinics worldwide, even in settings "with no electricity and limited staff."

For AI builders, Kintsugi's open-sourcing of its technology offers a rare look under the hood of production mental health AI. But the broader lesson is clear: if you're building medical AI, budget for years of regulatory education and clinical validation, not just model performance. The FDA isn't just slow; it's fundamentally unprepared for AI.