
Differential Privacy (DP)

A mathematical framework that guarantees individual privacy in aggregate data analysis and model training. With differential privacy, adding or removing any single individual's data changes the output by at most a small, bounded amount. This means you can learn useful patterns from a dataset without revealing information about any specific person in it.

Why it matters

As AI trains on increasingly personal data (health records, financial transactions, messages), differential privacy provides the strongest known guarantee that individual data cannot be extracted from the model. It is used by Apple (keyboard predictions), Google (Chrome usage analytics), and the US Census Bureau. For AI, it addresses the concern that LLMs can memorize and reproduce private training data.

Deep Dive

The formal guarantee: a mechanism M is ε-differentially private if for any two datasets D and D' that differ in one record, and any set of outputs S: P[M(D) ∈ S] ≤ e^ε · P[M(D') ∈ S]. Intuitively, the output distribution looks essentially the same whether or not any specific individual's data is included. The privacy parameter ε controls the privacy-utility trade-off: smaller ε means stronger privacy but noisier (less useful) outputs.
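
For intuition, here is a minimal Python sketch of the Laplace mechanism, the classic way to achieve ε-DP for a counting query; the function name and data are illustrative, not from any particular library. A count has sensitivity 1 (one record changes it by at most 1), so Laplace noise with scale 1/ε suffices:

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one
    # record changes the true count by at most 1, so Laplace noise
    # with scale 1/epsilon yields epsilon-DP.
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative query: how many people in the dataset are over 40?
ages = [23, 45, 31, 67, 52, 38, 41, 29]
print(laplace_count(ages, lambda a: a > 40, epsilon=0.5))
```

With ε = 0.5 the released count is typically within a few units of the truth, yet looks nearly the same whether or not any one person's age is in the list.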

DP in ML Training

DP-SGD (Differentially Private Stochastic Gradient Descent) clips each example's gradient and adds calibrated noise during training, ensuring the trained model doesn't memorize individual examples. The trade-off: noise reduces model accuracy. For large models and datasets the accuracy impact can be small; for small datasets, DP can significantly hurt performance. The practical challenge is choosing ε: too small and the model is useless, too large and the privacy guarantee is meaningless.
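
A minimal NumPy sketch of a single DP-SGD step for logistic regression may make the mechanics concrete. The names (dp_sgd_step, clip_norm, noise_mult) are illustrative, and mapping noise_mult to a concrete ε requires a privacy accountant, which is omitted here:

```python
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    grads = []
    for x, y in zip(X_batch, y_batch):
        pred = 1.0 / (1.0 + np.exp(-x @ w))   # logistic regression prediction
        g = (pred - y) * x                    # this example's gradient
        # Clip the per-example gradient to L2 norm <= clip_norm,
        # bounding any single record's influence on the update.
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        grads.append(g)
    # Add Gaussian noise calibrated to the clipping bound, then average.
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=w.shape)
    noisy_mean = (np.sum(grads, axis=0) + noise) / len(X_batch)
    return w - lr * noisy_mean

# Illustrative usage on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = rng.integers(0, 2, size=32).astype(float)
w = dp_sgd_step(np.zeros(5), X, y)
```

The per-example loop is what makes DP-SGD expensive in practice: gradients must be clipped individually before averaging, not computed in one batched pass.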

The Memorization Problem

LLMs can memorize and reproduce training data verbatim: phone numbers, email addresses, proprietary code. This is a privacy violation even without intentional data extraction. Differential privacy during pre-training would prevent this memorization, but applying DP to models trained on trillions of tokens is computationally challenging and can degrade quality. Current practice relies on training-data deduplication, output filtering, and careful data sourcing rather than formal DP guarantees. As regulation tightens, the pressure to adopt formal privacy guarantees will increase.
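
As an illustration of the first of those mitigations, here is a hypothetical sketch of exact deduplication by content hash; production pipelines typically add near-duplicate detection (e.g. MinHash) on top:

```python
import hashlib

def deduplicate(documents):
    # Keep the first occurrence of each document, keyed by the
    # SHA-256 of its contents; repeated texts are the ones models
    # memorize most readily.
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

print(deduplicate(["call me at 555-0100", "hello", "call me at 555-0100"]))
```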
