
Differential Privacy

DP
A mathematical framework that guarantees individual privacy in aggregate data analysis and model training. Under differential privacy, adding or removing any single individual's data changes the output by at most a small, bounded amount. This means you can learn useful patterns from a dataset without revealing information about any specific person in it.

Why It Matters

As AI trains on increasingly personal data (health records, financial transactions, messages), differential privacy provides the strongest known guarantee that individual data cannot be extracted from the model. It is used by Apple (keyboard predictions), Google (Chrome usage analytics), and the U.S. Census Bureau. For AI, it addresses the concern that LLMs may memorize and reproduce private training data.

Deep Dive

The formal guarantee: a mechanism M is ε-differentially private if for any two datasets D and D' that differ in one record, and any set of outputs S: P[M(D) ∈ S] ≤ e^ε · P[M(D') ∈ S]. Intuitively, the output distribution looks essentially the same whether or not any specific individual's data is included. The privacy parameter ε controls the privacy-utility trade-off: smaller ε means stronger privacy but noisier (less useful) outputs.
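To make the guarantee concrete, here is a minimal Python sketch (ours, not from the original entry) of the classic Laplace mechanism applied to a counting query; the function name and example data are illustrative. A count has sensitivity 1, so Laplace noise with scale 1/ε yields ε-DP:

import numpy as np

def laplace_count(data: np.ndarray, epsilon: float) -> float:
    """Release a differentially private count of 1s in `data`.

    A counting query has sensitivity 1: adding or removing one
    record changes the true count by at most 1. Adding Laplace
    noise with scale sensitivity/epsilon satisfies epsilon-DP.
    """
    true_count = float(np.sum(data))
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

records = np.array([1, 0, 1, 1, 0, 1])   # e.g. "has condition X"
print(laplace_count(records, epsilon=0.1))  # strong privacy, very noisy
print(laplace_count(records, epsilon=10))   # weak privacy, near the true count 4

Running it with ε = 0.1 versus ε = 10 makes the trade-off visible: the stronger-privacy answer wanders far from the true count.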

DP in ML Training

DP-SGD (Differentially Private Stochastic Gradient Descent) clips each per-example gradient and adds calibrated noise during training, ensuring the trained model doesn't memorize individual examples. The trade-off: noise reduces model accuracy. For large models and datasets, the accuracy impact can be small; for small datasets, DP can significantly hurt performance. The practical challenge is choosing ε: too small and the model is useless, too large and the privacy guarantee is meaningless.
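As a rough illustration of that recipe (a sketch under our own assumptions, not a reference implementation; parameter names like clip_norm and noise_mult are ours), the per-example clipping plus Gaussian noise can be written in NumPy for logistic regression:

import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    """One DP-SGD step for logistic regression (NumPy sketch).

    1. Compute each example's gradient separately.
    2. Clip each per-example gradient to L2 norm <= clip_norm,
       bounding any single record's influence on the update.
    3. Sum, add Gaussian noise with std noise_mult * clip_norm,
       then average and take a gradient step.
    """
    grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))    # predicted probability
        g = (p - y) * x                     # per-example gradient
        norm = np.linalg.norm(g)
        g = g / max(1.0, norm / clip_norm)  # clip to L2 norm clip_norm
        grads.append(g)
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=w.shape)
    g_priv = (np.sum(grads, axis=0) + noise) / len(X_batch)
    return w - lr * g_priv

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)

In practice one would use a library such as Opacus or TensorFlow Privacy, which also run a privacy accountant to translate noise_mult, batch size, and step count into a concrete (ε, δ) guarantee; that accounting is not shown here.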

The Memorization Problem

LLMs can memorize and reproduce training data verbatim: phone numbers, email addresses, proprietary code. This is a privacy violation even without intentional data extraction. Differential privacy during pre-training would prevent this memorization, but applying DP to models trained on trillions of tokens is computationally challenging and can degrade quality. Current practice therefore combines training-data deduplication, output filtering, and careful data sourcing rather than formal DP guarantees. As regulation tightens, the pressure to adopt formal privacy guarantees will increase.
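Of those mitigations, deduplication is the most amenable to a snippet. A hypothetical sketch of exact-duplicate removal via hashing follows; production pipelines also do near-duplicate detection (e.g. MinHash over n-grams), which is not shown:

import hashlib

def deduplicate(documents):
    """Drop exact duplicates from a training corpus (sketch).

    Hashes lightly normalized text; repeated documents raise the
    odds of verbatim memorization, so they are kept only once.
    """
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

corpus = ["Call me at 555-0199.", "call me at 555-0199. ", "Hello world."]
print(deduplicate(corpus))  # the repeated line survives only once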
