The New York Times ran a glowing profile Thursday of Medvi, an AI-powered pharmaceutical startup supposedly on track for $1.8 billion in sales with just two employees. The piece, which featured praise from OpenAI's Sam Altman, positioned founder Matthew Gallagher as a visionary who built a GLP-1 prescription empire using artificial intelligence. But the Times missed a crucial detail: Futurism exposed Medvi as a fraud factory eight months ago.

Futurism's investigation revealed Medvi uses AI-generated fake doctor profiles, stolen before-and-after photos with AI-swapped faces, and misleading testimonials to sell weight-loss drugs. One "success story" featured a Reddit user's 2017 weight loss journey—achieved by quitting alcohol, not GLP-1s—with an AI-altered face. A doctor Medvi claimed as a partner demanded his removal from their site, saying he had no involvement. The FDA has issued warnings about the company's practices.

This isn't just sloppy journalism—it's a dangerous pattern. Major outlets increasingly treat AI-powered businesses as inherently innovative without examining their actual practices. When the Times legitimizes companies using AI to manufacture fake medical credentials and patient testimonials, it normalizes fraud as disruption. The real story isn't about AI efficiency; it's about how generative AI enables sophisticated deception at scale in regulated industries like healthcare.

For developers building AI tools: this is exactly why responsible AI practices matter. Every deepfake detector, every content authentication system, every ethical guardrail you build helps prevent AI from becoming a fraud multiplier. The technology that can automate legitimate businesses can just as easily automate scams—and apparently fool business journalists in the process.
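To make the content-authentication idea concrete, here is a minimal sketch of how a provenance tag can bind a creator's key to an exact piece of content, so that any alteration (like an AI face swap on a before-and-after photo) breaks verification. This is an illustration using Python's stdlib `hmac`, not any specific standard; real provenance systems (e.g. C2PA-style signed manifests) use asymmetric key pairs rather than a shared secret, and the key and byte strings below are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content creator. In a real
# provenance system this would be an asymmetric key pair, not a shared secret.
SIGNING_KEY = b"creator-secret-key"

def sign_content(content: bytes) -> str:
    """Produce a tag binding the key holder to this exact content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the tag; any alteration of the content invalidates it."""
    return hmac.compare_digest(sign_content(content), tag)

original = b"before/after photo bytes"   # stand-in for real image data
tag = sign_content(original)

assert verify_content(original, tag)                   # untouched content passes
assert not verify_content(b"face-swapped bytes", tag)  # altered content fails
```

The point is not the specific primitive but the property: a tampered asset cannot carry a valid tag forward, which is exactly the guarantee AI-swapped testimonial photos would fail.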