Superhuman CEO Shishir Mehrotra doubled down on his company's decision to create AI clones of professional writers without permission, telling The Verge's Nilay Patel that using writers' names was "attribution," not impersonation. The company's "Expert Review" feature, launched last August, let users get writing feedback from AI versions of journalists like Patel and Julia Angwin — complete with checkmarks suggesting official endorsement. Superhuman killed the feature in March after explosive backlash and a class-action lawsuit led by Angwin.
This controversy cuts to the heart of AI's thorniest problem: consent. Mehrotra's distinction between "attribution" and "impersonation" is corporate doublespeak that misses the point entirely. When you create commercial AI products using someone's name and professional reputation without permission, you're not attributing; you're exploiting. And the fact that the feature stayed buried for months before anyone noticed makes things worse, not better: a company confident that a feature is legitimate doesn't hide it.
What's most telling is Mehrotra's defensive pivot when Patel pressed him on compensation. The CEO insisted the lawsuit's claims are "without merit" while apologizing for "under-delivering" to experts. You can't have it both ways. Either you wronged these writers or you didn't, and if you're apologizing, you clearly know which it is.
For developers building AI products, this case draws a clear line: using real people's names and professional identities for commercial AI features without explicit consent isn't just ethically questionable, it's legally risky. The safest approach is obvious: ask first, or build systems that don't trade on individual reputations at all.
