Reuters broke an exclusive this week that Meta is rolling out the Model Capability Initiative (MCI) to US employees. The tool captures mouse movements, clicks, keystrokes, and periodic screenshots on work-related apps and websites. The disclosure landed in a memo to a Meta Superintelligence Labs team channel. Meta's framing: the tool helps its models learn basic computer-use behaviors, from navigating dropdown menus to using keyboard shortcuts, so future agents can perform white-collar tasks autonomously.
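To make the capture concrete: a tool like this plausibly reduces each interaction to a timestamped event record, so a single dropdown-menu selection becomes a short replayable trace. This is a hypothetical sketch of such a schema; the event names, fields, and `CaptureEvent` class are illustrative assumptions, not anything Meta has disclosed about MCI's internals.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json
import time

class EventType(Enum):
    MOUSE_MOVE = "mouse_move"
    CLICK = "click"
    KEYSTROKE = "keystroke"
    SCREENSHOT = "screenshot"

@dataclass
class CaptureEvent:
    event_type: EventType      # what kind of interaction was observed
    timestamp: float           # wall-clock time of the event
    app: str                   # foreground application, e.g. "browser"
    detail: dict               # event-specific payload (coordinates, key, image ref)

    def to_json(self) -> str:
        # Serialize to one JSON line per event, as a logging pipeline might.
        d = asdict(self)
        d["event_type"] = self.event_type.value
        return json.dumps(d)

# A "pick an option from a dropdown" interaction as a trace of three events:
trace = [
    CaptureEvent(EventType.CLICK, time.time(), "browser",
                 {"x": 412, "y": 188, "target": "dropdown"}),
    CaptureEvent(EventType.KEYSTROKE, time.time(), "browser",
                 {"key": "ArrowDown"}),
    CaptureEvent(EventType.KEYSTROKE, time.time(), "browser",
                 {"key": "Enter"}),
]
```

Traces of this shape, paired with the periodic screenshots, are the kind of interactive supervision signal that does not exist on the public web.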

The pattern echoes something already covered in these pages. Earlier this week the MIT Technology Review reported on Colleague Skill, a GitHub tool by a Shanghai AI Lab engineer that scrapes Lark and DingTalk chat histories to distill coworkers into agent-replayable workflow manuals. That story was bottom-up: individual engineers distilling their peers, and a backlash tool (anti-distillation) from another engineer that rewrites workflow documents into generic, non-actionable language before they can be absorbed. MCI is the top-down corporate version of the same data-capture problem. Same substrate (workplace computer activity), different governance (officially sanctioned, centrally administered). Meta spokesperson Stone told Reuters the data from MCI will not be used for performance assessments or for any purpose other than model training.
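The anti-distillation idea can be sketched in a few lines: strip the actionable specifics (commands, URLs, file paths, keyboard shortcuts) out of a workflow note so an agent replaying it gets no concrete steps. This is a toy illustration under simple assumptions; the patterns and the `scrub` function are mine, not the actual tool's implementation.

```python
import re

# Placeholder that replaces every actionable token.
GENERIC = "[internal step]"

# Illustrative patterns for "actionable specifics" in a workflow note.
PATTERNS = [
    re.compile(r"`[^`]+`"),                               # inline commands / identifiers
    re.compile(r"https?://\S+"),                          # URLs
    re.compile(r"(?:[A-Za-z]:)?(?:/[\w.-]+){2,}"),        # file paths
    re.compile(r"\b(?:Ctrl|Cmd|Alt|Shift)(?:\+\w+)+\b"),  # keyboard shortcuts
]

def scrub(doc: str) -> str:
    # Rewrite a workflow document into generic, non-replayable language.
    for pat in PATTERNS:
        doc = pat.sub(GENERIC, doc)
    return doc

note = "Run `make deploy`, then open https://wiki.example.com/runbook and press Ctrl+Shift+P."
print(scrub(note))
# → Run [internal step], then open [internal step] and press [internal step].
```

Even this crude version shows why the two patterns co-evolve: a scrubbed document still reads as documentation to a human, but is useless as training or replay material for an agent.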

Two things worth registering. First, the training-data scarcity signal. Competing with OpenAI and Anthropic on shipping autonomous agent products means competing on interactive computer-use training data, and that data is not on the public internet. The honest answer for how to get it is "our employees, doing their jobs, instrumented." Every frontier lab is doing some version of this; Meta's version is just unusually explicit, with a disclosed tool name and a memo. Second, the privacy-versus-capability trade being made here is not new, but the scale is. MCI-style instrumentation at Meta headcount produces a training set that competitors will struggle to match unless they have both the workforce and the internal legal posture to do the same. Whether the guardrails Stone cited hold (no performance review, no other purpose, screen-content safeguards for sensitive material) is the part that gets tested in practice, not in memos.

Two observations if you run an AI-forward organization. One, the MCI pattern is going to pressure every lab that does not have it. If you work at or compete with a company building agent products, expect the "should we instrument our own employees" conversation to be on the agenda within two quarters if it is not already. Two, the employee-side response matters. Koki Xu's anti-distillation tool in the Colleague Skill story demonstrated that workers treat these capture pipelines as adversarial input the moment they are deployed. If you ship MCI-style tooling internally, plan for your own workforce to build countermeasures. The capture pattern is real, the resistance pattern is real, and they will co-evolve.