The Midas Project's Model Republic published an investigation today documenting that The Wire by Acutus, a news-style website launched December 29, 2025, runs on AI-generated content masquerading as conventional journalism, and has now also been caught using AI agents to email real human policy experts in the persona of fake reporters. Their analysis of the site using the Pangram AI detector found that 97% of articles are either fully or partially AI-generated. More damaging: Acutus's publicly accessible source code revealed prompt fields including "background information for the AI to use when generating questions" and "suggested questions for the AI interviewer to ask," exposing the operation's architecture as an automated interview-and-publish pipeline rather than a newsroom. The concrete victim is Nathan Calvin, vice president of the AI advocacy group Encode, who received an email from "reporter@acutuswire.com" signed by a "Michael Chen" inviting him to answer a written Q&A about an AI bill. Web searches found no record of any reporter named Michael Chen at any publication. Calvin became suspicious and reported the email; The Midas Project picked up the trail.
The technical reality of how the operation was caught is itself the most useful detail for builders. Two failure modes intersect. First, the Acutus source code was deployed without removing the agent prompt-configuration UI, which meant anyone inspecting the page could see the literal field labels for "background for AI" and "suggested questions for AI interviewer," confirming that the editorial workflow runs through a model rather than a human. Second, the published article output is high enough volume and uniform enough in style that an AI text detector (Pangram) flagged 97% of it; while AI text detection has well-known false-positive issues, the agreement between the source-code evidence and the detector output makes the conclusion difficult to escape. The agent persona "Michael Chen" was plausible enough to pass a first read but not backed by enough due diligence to survive a basic byline search, which is the failure mode any agent-generated outreach inherits when the operators do not maintain a long-tail web presence for the fake personas. None of this is technically sophisticated; what matters is the demonstration that an end-to-end AI-driven astroturf newsroom can be assembled cheaply enough that someone is doing it, and that the operational security required to keep it hidden is more work than the operators put in.
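For a sense of how thin the first check is, here is a minimal sketch of the kind of source-inspection pass that surfaces an exposed prompt-configuration UI. This is an illustration, not The Midas Project's method; the URL and marker phrases are placeholders, and the `requests` library is assumed to be available.

```python
# Illustration only: scan a page's delivered source for exposed
# prompt-configuration strings, the class of mistake that gave Acutus away.
# The URL and marker phrases are placeholders, not the actual Acutus values.
import requests

PROMPT_MARKERS = [
    "background information for the ai",
    "suggested questions for the ai interviewer",
    "ai interviewer",
    "system prompt",
]

def exposed_prompt_config(url: str) -> list[str]:
    """Return any marker phrases found verbatim in the delivered page source."""
    html = requests.get(url, timeout=10).text.lower()
    return [marker for marker in PROMPT_MARKERS if marker in html]

if __name__ == "__main__":
    hits = exposed_prompt_config("https://example.com/some-article")  # placeholder URL
    print("exposed agent-prompt fields:", hits) if hits else print("nothing found")
```

The same idea applies to bundled JavaScript: field labels written for an internal agent UI tend to survive minification as plain string literals.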
The broader implication is the political-funding chain Model Republic traced, which is why this story matters beyond any one fake journalist. Acutus content has been promoted on social media by Patrick Hynes, president of the PR firm Novus Public Affairs. Novus works with Targeted Victory, whose CEO co-founded the Leading the Future super PAC. Leading the Future is the $125M-plus pro-AI super PAC network targeting the 2026 midterms and opposing state-level AI regulation in favour of a national-only framework; it was co-founded with Andreessen Horowitz, and OpenAI president Greg Brockman and his wife donated $50M to it. The chain from Acutus's fake-journalist outreach to a super PAC backed by an OpenAI principal is two handshakes, not direct, and Model Republic is careful to frame the connection as suggestive rather than proven. But it is the first documented case of an AI-generated fake newsroom soliciting comment from real AI-policy advocates while being promoted by people in the PR orbit of the largest pro-AI political spending vehicle, and that is a structurally important data point regardless of whether the formal accountability chain holds up. OpenAI did not respond to Futurism's request for comment by publication.
For builders, the actionable read is in two parts. First, on the threat-modelling side: agentic outreach in your inbox is now plausible, and the detection signal is the long tail. If a reporter byline cannot be cross-referenced to any prior published work, if the publication launched within the last six months, if the email signature matches a generic template, and if the question list reads like an LLM prompt expansion of a single topic, those are now real flags; a rough checklist sketch follows below. Encode's Calvin caught it on the third signal. Second, on the building side: the source-code exposure that gave Acutus away is exactly the kind of operational mistake that makes early AI astroturfing detectable. The next iteration of these operations will not leave prompt UIs in production builds, will mint long-tail histories for fake personas before deploying them, and will introduce stylistic noise to defeat detectors. Independent of how this specific case resolves, the labour cost of running a fake AI newsroom is now low enough that the supply will only grow, and the load-bearing question is whether platforms (Substack, X, search) and email providers will treat this as a moderation problem worth solving. Right now the answer is no. The Calvin example is one detected outreach; the undetected count is unknown.
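Here is a hedged sketch of those inbox-side flags as a checklist. The field names, thresholds, and the question-list proxy are illustrative assumptions rather than a vetted detection pipeline, and the byline and launch-date checks still need a human or a search step to fill in.

```python
# Hedged sketch of the inbox-side flags described above. Field names,
# thresholds, and the question-list proxy are illustrative assumptions,
# not a vetted detection pipeline.
from dataclasses import dataclass
from datetime import date

@dataclass
class Outreach:
    byline_prior_articles: int   # hits from searching the reporter's name for prior bylines
    publication_launch: date     # when the outlet first appeared
    signature_is_generic: bool   # boilerplate sign-off, no desk phone, no named editor
    questions: list[str]         # the written Q&A the sender wants answered

def outreach_flags(msg: Outreach, today: date | None = None) -> list[str]:
    today = today or date.today()
    flags = []
    if msg.byline_prior_articles == 0:
        flags.append("byline has no prior published work")
    if (today - msg.publication_launch).days < 183:
        flags.append("publication launched within the last six months")
    if msg.signature_is_generic:
        flags.append("signature matches a generic template")
    # Crude proxy for "prompt expansion of a single topic": many questions
    # that all open the same way.
    openers = {q.split()[0].lower() for q in msg.questions if q.split()}
    if len(msg.questions) >= 5 and len(openers) <= 2:
        flags.append("question list reads like a single-topic prompt expansion")
    return flags

# Example with the launch date from the article; the other values are assumed
# for illustration, not taken from the Calvin email.
print(outreach_flags(Outreach(0, date(2025, 12, 29), True,
                              ["What does the bill change?"] * 5)))
```

None of these signals is decisive on its own; as with the Acutus case itself, it is agreement between independent signals that carries the conclusion.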
