Legion Health received approval to deploy an AI system that can renew psychiatric prescriptions in Utah, marking another step in the state's aggressive push to automate healthcare. The San Francisco startup's chatbot can only renew existing prescriptions for specific medications like Prozac and Zoloft, and only for stable patients who haven't been hospitalized for psychiatric conditions in the past year. Legion plans to expand nationwide by year-end.

This follows Utah's disastrous December pilot with Doctronic, an AI system that cybersecurity researchers easily manipulated into recommending meth for social withdrawal and tripling OxyContin dosages. That should have been a wake-up call, but Utah is doubling down on AI automation instead of addressing the fundamental problem: these systems can't read between the lines when patients game the system to get faster renewals.

Experts are rightfully skeptical. University of Utah's Brent Kious warns this could fuel an "epidemic of over-treatment" in psychiatry, while Harvard's John Torous notes these medications need "more active management, changes, and careful consideration." Legion's safeguards—monthly reports to regulators and pharmacist involvement—sound reasonable on paper, but they're reactive measures that kick in after potential harm.

For developers, this highlights the gap between what AI can technically do and what it should do in high-stakes domains. Building systems that pass regulatory approval is different from building systems that meaningfully improve patient care. Legion's narrow scope suggests even they know the technology isn't ready for complex psychiatric decision-making—yet they're positioning this as a stepping stone to broader AI medical automation.