Apple quietly threatened to remove xAI's Grok app from its App Store in January over rampant nonconsensual sexual deepfakes, according to a letter obtained by NBC News. The tech giant told senators it "contacted the teams behind both X and Grok" after receiving complaints about users easily generating sexualized images of real people, including apparent minors. Apple determined X had "substantially resolved its violations" but warned Grok developers that "additional changes to remedy the violation would be required, or the app could be removed." Only after extended back-and-forth did Apple approve Grok's continued presence.

This behind-the-scenes drama exposes the fundamental weakness in app store content moderation. Apple, which profits from having these apps in its walled garden, chose private pressure over public accountability, allowing both apps to remain live throughout the entire enforcement process. The company that regularly flexes its App Store authority with an "iron fist" suddenly went soft when facing Musk's properties. Meanwhile, the "fixes" rolled out in real time were largely performative: limiting Grok to paying subscribers and adding easily circumvented photo-blocking features.

What's most telling is Apple's silence. Despite clear, flagrant violations of its own guidelines, the kind that typically result in swift app removals, Apple handled this with kid gloves. The drawn-out private negotiations help explain why Grok's moderation changes appeared so haphazard and ineffective. Google Play, which also profits from hosting these apps, remained equally silent on the matter.

For developers building AI tools, this sets a dangerous precedent: if you're big enough and connected enough, App Store enforcement becomes negotiable rather than absolute. The message is clear: platform rules apply differently depending on who you are and how much revenue you generate.