Several California patients filed a class-action lawsuit against Sutter Health and MemorialCare this week, alleging the health systems used Abridge AI's transcription tool to record medical conversations without proper consent. The complaint, filed in San Francisco federal court, claims patients weren't clearly informed their conversations would be "recorded by an artificial intelligence platform, transmitted outside the clinical setting, or processed through third-party systems." Abridge, valued at $5.3 billion as of June 2025, has been rapidly deployed across major health systems including Kaiser Permanente and Mayo Clinic.
This lawsuit exposes a critical gap in how healthcare AI is being deployed at scale. While AI scribes promise to reduce physician burnout by automating clinical documentation (and nearly a third of practices now use them), the consent process appears to be lagging behind the technology rollout. The fact that Abridge processes sensitive medical data "outside the clinical setting" raises questions about data sovereignty that go beyond simple privacy notices.
What's particularly telling is the contrast between institutional adoption and patient awareness. One physician at Cleveland Clinic described how AI scribing lets him focus on "old-fashioned medicine" without typing, and the technology clearly delivers value for overworked doctors. But if patients don't understand their conversations are being processed by third-party AI systems, potentially stored on external servers, that's a fundamental consent problem that no amount of clinical efficiency can justify.
For healthcare AI builders, this case is a wake-up call: technical capability doesn't excuse poor consent practices. If your AI processes protected health information, especially through external APIs, you need crystal-clear disclosure about data flow, storage, and processing. The alternative is expensive litigation and damaged trust in medical AI.
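One way to make that principle concrete in code is to treat consent as a hard gate rather than a checkbox logged after the fact. The sketch below is hypothetical (the `ConsentRecord` fields, `ConsentError`, and `transcribe_visit` function are illustrative names, not any vendor's actual API): it refuses to transmit audio to an external transcription service unless each specific disclosure named in the complaint, such as AI recording, third-party processing, and off-site storage, has been made and the patient has affirmatively consented.

```python
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    """Per-visit record of which disclosures were made (hypothetical schema)."""
    patient_id: str
    ai_recording_disclosed: bool = False
    third_party_processing_disclosed: bool = False
    external_storage_disclosed: bool = False
    consented: bool = False


class ConsentError(Exception):
    """Raised when PHI would leave the clinical setting without valid consent."""


def require_consent(record: ConsentRecord) -> None:
    # Collect every disclosure that was never presented to the patient.
    missing = [name for name, ok in [
        ("AI recording", record.ai_recording_disclosed),
        ("third-party processing", record.third_party_processing_disclosed),
        ("external storage", record.external_storage_disclosed),
    ] if not ok]
    if missing or not record.consented:
        raise ConsentError(
            f"blocked: missing disclosures={missing}, consented={record.consented}"
        )


def transcribe_visit(audio: bytes, consent: ConsentRecord) -> str:
    # Gate runs BEFORE any PHI is transmitted off-site.
    require_consent(consent)
    # send_to_external_api(audio)  # hypothetical external call would go here
    return "transcript"
```

The design point is that the check fails closed: an incomplete disclosure raises an exception before any network call, so a missing consent step can't silently become a privacy incident discovered only in litigation.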
