
Differential Privacy (DP)
A mathematical framework for guaranteeing individual privacy in aggregate data analysis and model training. With differential privacy, adding or removing any single individual's data changes the output by at most a small, bounded amount. This means you can learn useful patterns from a dataset without revealing information about any specific person in it.

Why It Matters

As AI is increasingly trained on personal data (health records, financial transactions, messages), differential privacy provides the strongest known guarantee that individual data cannot be extracted from the model. It is used by Apple (keyboard predictions), Google (Chrome usage analytics), and the US Census Bureau. For AI, it addresses the concern that LLMs can memorize and reproduce private training data.

Deep Dive

The formal guarantee: a mechanism M is ε-differentially private if for any two datasets D and D' that differ in one record, and any output S: P[M(D) ∈ S] ≤ e^ε · P[M(D') ∈ S]. Intuitively: the output looks essentially the same whether or not any specific individual's data is included. The privacy parameter ε controls the privacy-utility trade-off — smaller ε means stronger privacy but noisier (less useful) outputs.
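
A minimal sketch of the Laplace mechanism, the canonical way to satisfy this guarantee for a numeric query. A counting query has sensitivity 1 (adding or removing one person changes the count by at most 1), so adding Laplace noise with scale 1/ε yields an ε-DP release. The dataset, predicate, and ε values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(data, predicate, epsilon):
    """Release a count under epsilon-DP via the Laplace mechanism.

    A counting query has sensitivity 1, so noise drawn from
    Lap(1/epsilon) suffices for an epsilon-DP release.
    """
    true_count = sum(1 for x in data if predicate(x))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical dataset: ages of individuals.
ages = [23, 35, 41, 29, 62, 18, 57, 44]
for eps in (0.1, 1.0):
    released = laplace_count(ages, lambda a: a >= 40, epsilon=eps)
    print(f"epsilon={eps}: noisy count of people 40+ = {released:.2f}")
```

Note how the smaller ε (stronger privacy) produces a much noisier count: this is the privacy-utility trade-off in miniature.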

DP in ML Training

DP-SGD (Differentially Private Stochastic Gradient Descent) adds calibrated noise to gradients during training, ensuring the trained model doesn't memorize individual examples. The trade-off: noise reduces model accuracy. For large models and datasets, the accuracy impact can be small. For small datasets, DP can significantly hurt performance. The practical challenge is choosing ε — too small and the model is useless, too large and privacy guarantees are meaningless.
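
A toy NumPy sketch of the DP-SGD update for logistic regression, assuming the standard recipe: clip each per-example gradient to a fixed L2 norm, add Gaussian noise scaled by noise_multiplier × clip_norm, then average. The hyperparameters are hypothetical, and the formal ε accounting (e.g. the moments accountant) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, X_batch, y_batch, lr, clip_norm, noise_multiplier):
    """One DP-SGD step for logistic regression (illustrative sketch)."""
    # Per-example gradients of the logistic loss: (p - y) * x.
    preds = 1.0 / (1.0 + np.exp(-X_batch @ w))
    per_example_grads = (preds - y_batch)[:, None] * X_batch  # shape (B, d)

    # Clip each example's gradient to L2 norm <= clip_norm, bounding
    # any single example's influence on the update.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    per_example_grads *= np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum, add calibrated Gaussian noise, then average over the batch.
    noisy_sum = per_example_grads.sum(axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X_batch)

# Toy run on synthetic data (hypothetical hyperparameters).
X = rng.normal(size=(256, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=256) > 0).astype(float)
w = np.zeros(5)
for _ in range(200):
    idx = rng.choice(256, size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx],
                    lr=0.5, clip_norm=1.0, noise_multiplier=1.1)
print("trained weights:", np.round(w, 2))
```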

The Memorization Problem

LLMs can memorize and reproduce training data verbatim, including phone numbers, email addresses, and proprietary code. This is a privacy violation even without intentional data extraction. Differential privacy during pre-training would prevent this memorization, but applying DP to models trained on trillions of tokens is computationally challenging and can degrade quality. Current practice relies on training data deduplication, output filtering, and careful data sourcing rather than formal DP guarantees (a simple deduplication sketch follows below). As regulation tightens, the pressure to adopt formal privacy guarantees will increase.
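
As one illustration of the non-DP mitigations above, here is a minimal exact-deduplication sketch that hashes normalized documents. Production pipelines typically also use near-duplicate detection (e.g. MinHash), which this does not show.

```python
import hashlib

def dedup_exact(documents):
    """Drop duplicate documents by hashing normalized text.

    Repeated sequences are far more likely to be emitted verbatim by
    a trained model, so removing duplicates before training reduces
    that memorization risk.
    """
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Call me at 555-0100.", "call me at 555-0100. ", "Totally new text."]
print(dedup_exact(docs))  # the near-identical second document is dropped
```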
