
Differential Privacy

DP
A mathematical framework that guarantees individual privacy in aggregate data analysis and model training. With differential privacy, adding or removing any single individual's data changes the output distribution by at most a small, bounded amount. This means you can learn useful patterns from a dataset without revealing information about any specific person in it.

Why it matters

As AI trains on increasingly personal data (health records, financial transactions, messages), differential privacy provides the strongest known guarantee that individual data can't be extracted from the model. It's used by Apple (keyboard predictions), Google (Chrome usage analytics), and the US Census Bureau. For AI, it addresses the concern that LLMs might memorize and reproduce private training data.

Deep Dive

The formal guarantee: a mechanism M is ε-differentially private if for any two datasets D and D' that differ in one record, and any set of outputs S: P[M(D) ∈ S] ≤ e^ε · P[M(D') ∈ S]. Intuitively: the output looks essentially the same whether or not any specific individual's data is included. The privacy parameter ε controls the privacy-utility trade-off — smaller ε means stronger privacy but noisier (less useful) outputs.
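The classic way to satisfy this guarantee for a numeric query is the Laplace mechanism: add Laplace noise with scale sensitivity/ε, where sensitivity is the most the query's answer can change when one record is added or removed. A minimal sketch (the function name and parameters are illustrative, not from a specific library):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value plus Laplace noise with scale sensitivity/epsilon.

    A counting query ("how many people have X?") has sensitivity 1:
    adding or removing one person changes the count by at most 1.
    """
    scale = sensitivity / epsilon
    # Draw a Laplace(0, scale) sample via the inverse CDF of a
    # uniform u in [-0.5, 0.5): X = -scale * sgn(u) * ln(1 - 2|u|).
    u = random.random() - 0.5
    return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
```

Note the trade-off the text describes: a large ε adds almost no noise (accurate but weak privacy), while a small ε adds noise with standard deviation √2·sensitivity/ε (strong privacy, noisy answers).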

DP in ML Training

DP-SGD (Differentially Private Stochastic Gradient Descent) adds calibrated noise to gradients during training, ensuring the trained model doesn't memorize individual examples. The trade-off: noise reduces model accuracy. For large models and datasets, the accuracy impact can be small. For small datasets, DP can significantly hurt performance. The practical challenge is choosing ε — too small and the model is useless, too large and privacy guarantees are meaningless.
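The two ingredients of DP-SGD can be seen in a toy sketch: clip each per-example gradient (bounding any one person's influence on the update), then add Gaussian noise calibrated to that bound. This is a minimal illustration for linear regression, not a production implementation; the function name and hyperparameter values are assumptions:

```python
import numpy as np

def dp_sgd_step(weights, X_batch, y_batch, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step on squared-error loss for a linear model.

    1. Compute each example's gradient separately.
    2. Clip each per-example gradient to L2 norm <= clip_norm.
    3. Add Gaussian noise with std noise_multiplier * clip_norm
       to the summed gradient, then average and take a step.
    """
    grads = []
    for x, y in zip(X_batch, y_batch):
        err = x @ weights - y                 # per-example residual
        g = err * x                           # gradient of 0.5 * err**2
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip_norm / max(norm, 1e-12))  # clip influence
        grads.append(g)
    g_sum = np.sum(grads, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=weights.shape)
    g_avg = (g_sum + noise) / len(X_batch)
    return weights - lr * g_avg
```

The accuracy cost the text mentions is visible here: the noise standard deviation is fixed by clip_norm and noise_multiplier, so with a small batch (or small dataset) it dominates the signal, while with large batches the averaged noise shrinks and the accuracy impact can be small.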

The Memorization Problem

LLMs can memorize and reproduce training data verbatim — phone numbers, email addresses, proprietary code. This is a privacy violation even without intentional data extraction. Differential privacy during pre-training would prevent this memorization, but applying DP to models trained on trillions of tokens is computationally challenging and can degrade quality. Current practice relies on a combination of training-data deduplication, output filtering, and careful data sourcing rather than formal DP guarantees. As regulation tightens, the pressure to adopt formal privacy guarantees will increase.
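The simplest of those mitigations, exact deduplication, can be sketched as a content-hash filter (illustrative only; production pipelines also use near-duplicate detection such as MinHash, which is out of scope here):

```python
import hashlib

def deduplicate(documents):
    """Drop exact duplicates from a list of training documents.

    Duplicated text is memorized far more readily, so removing exact
    copies is a cheap first line of defense against verbatim recall.
    """
    seen = set()
    unique = []
    for doc in documents:
        h = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            unique.append(doc)
    return unique
```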
