Anthropic launched Claude Code with desktop control capabilities, letting the AI click, scroll, and navigate your screen to complete tasks on macOS. Available now for Claude Pro and Max subscribers as a "research preview," the feature can open files, browse the web, and run development tools when API connections aren't available. Users can also trigger these actions remotely through Claude's Dispatch tool, provided the target machine stays powered on.

This puts Anthropic in direct competition with recent releases from Perplexity, Manus, and Nvidia that offer similar computer-control features. But Anthropic's approach stands out for its blunt honesty about the risks. While competitors focus on capabilities, Anthropic explicitly warns that desktop control "takes much longer and is more error-prone" than API-based alternatives and "won't always work perfectly."

What's particularly concerning is Anthropic's admission that its safety training "isn't perfect" and "isn't absolute," meaning Claude might occasionally bypass restrictions on financial transactions, file modifications, or handling sensitive data. The company warns that Claude will see everything on your screen, including personal information, and recommends avoiding sensitive work during this preview phase. It has blocked access to trading platforms and cryptocurrency apps by default, but acknowledges these safeguards aren't foolproof.

For developers, this presents a classic early-adopter dilemma: powerful automation capabilities versus significant security risks. That Anthropic, a company usually cautious about AI safety, is shipping this with such explicit warnings suggests either market pressure to compete with other desktop AI agents, or confidence that developers will use it responsibly despite the acknowledged flaws. Either way, treat this as beta software with access to everything on your machine.