Medvi, the AI-powered telehealth company caught using deepfaked patient testimonials and fake doctor profiles, has issued what it calls a "response to external speculation" after the New York Times got roasted for publishing a fawning profile that ignored the company's documented fraud. The statement follows widespread criticism of both Medvi's practices and the NYT's failure to mention either the FDA warning issued to Medvi in February 2026 or the company's use of AI-generated before-and-after photos and fake medical practitioners.

This isn't just another AI ethics story: it's a perfect example of how AI tools are being weaponized for healthcare fraud at scale. As I covered back in May, Medvi operates as a marketing wrapper for GLP-1 prescriptions, using AI to generate fake patient stories, fake doctor profiles, and even mangled pharmaceutical logos. The company claims to be on track for $2 billion in revenue with just two employees, but that "efficiency" comes from automating deception, not from innovation.

The broader tech and health communities immediately called out the NYT piece for missing red flags that had been hiding in plain sight on social media. Multiple outlets, including Business Insider and Techdirt, piled on with additional reporting on Medvi's fake doctor accounts and misleading drug marketing. The FDA warning specifically cited Medvi for featuring images of branded pill bottles for drugs it doesn't actually manufacture and for claiming FDA approval for compounds that aren't approved.

For developers building AI tools, this is your canary in the coal mine. When your technology makes it trivially easy to generate fake medical credentials and patient testimonials at scale, you're not disrupting healthcare; you're enabling fraud. The fact that a major newspaper fell for it should tell you everything about how convincing AI-generated deception has become.