A class action lawsuit filed Tuesday alleges that Perplexity's AI search engine shares complete chat transcripts with Google and Meta through embedded ad trackers, even for users who explicitly enable "Incognito Mode." The plaintiff, identified only as John Doe, discovered that his conversations about family finances, tax management, and investment decisions were transmitted alongside personally identifiable information, including email addresses. According to the complaint, this data sharing affects all users regardless of subscription status, with non-subscribers particularly exposed because their initial prompts generate URLs that give third parties direct access to entire conversations.

This lawsuit exposes a fundamental trust problem in AI consumer products. While companies like OpenAI and Anthropic have faced scrutiny over training data practices, Perplexity's alleged behavior represents something more invasive — real-time surveillance of user queries for advertising purposes. The case highlights how AI companies are monetizing user interactions in ways that traditional search engines never could, given the conversational and deeply personal nature of AI chat sessions. Users naturally share more sensitive information with AI assistants than they would type into Google search.

What makes this particularly damning is the alleged deception around "Incognito Mode." If the lawsuit's technical findings hold up, Perplexity marketed a privacy feature that provided no actual privacy protection. The complaint alleges that even paid subscribers using this mode had their conversations and identifiers shared with Meta and Google. That isn't just poor privacy practice; it's potentially fraudulent marketing of a security feature that doesn't work.

For developers building AI applications, this case should be a wake-up call about third-party integrations and analytics. Every tracking pixel, analytics script, and A/B testing tool you embed is a potential liability. If you're building AI products, audit your data flows now: map every third-party endpoint your client and server code can reach, and verify exactly what leaves with each request. Users trust AI assistants with deeply personal information; violating that trust isn't just unethical, it's apparently also legally actionable.
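One defensive pattern worth considering is routing all analytics through a single choke point that enforces privacy mode and redacts identifiers before anything leaves the app, rather than letting embedded trackers see raw chat content. Here is a minimal sketch of that idea; the `trackEvent` helper, the event shape, and the collector endpoint are all hypothetical, and a real audit would cover far more than email addresses:

```typescript
// Hypothetical analytics gate: every event passes through one function,
// so privacy-mode checks and PII redaction are enforced in a single place.

interface AnalyticsEvent {
  name: string;
  properties: Record<string, string>;
}

interface SessionContext {
  userId: string;
  incognito: boolean; // user opted into a private/incognito mode
}

// Naive email pattern; real redaction should also cover phone numbers,
// names, account IDs, and anything else a prompt might contain.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;

function redact(value: string): string {
  return value.replace(EMAIL_RE, "[redacted-email]");
}

function sanitize(event: AnalyticsEvent): AnalyticsEvent {
  const properties: Record<string, string> = {};
  for (const [key, value] of Object.entries(event.properties)) {
    properties[key] = redact(value);
  }
  return { name: event.name, properties };
}

// Single outbound path: if the session is private, nothing leaves the app.
async function trackEvent(
  session: SessionContext,
  event: AnalyticsEvent,
): Promise<void> {
  if (session.incognito) {
    return; // drop the event entirely; don't "anonymize and send anyway"
  }
  const payload = sanitize(event);
  // Placeholder endpoint; substitute your actual collector.
  await fetch("https://analytics.example.com/collect", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Usage: the raw prompt text never reaches the tracker verbatim.
void trackEvent(
  { userId: "u_123", incognito: false },
  {
    name: "prompt_submitted",
    properties: { summary: "question from jane@example.com about taxes" },
  },
);
```

The key design choice in this sketch is that incognito sessions emit nothing at all, which is exactly the behavior the lawsuit alleges Perplexity's mode failed to deliver. It also means a third-party SDK can never see more than the sanitized payload, because no other code path talks to the network.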