
Federated Learning

FL, Collaborative Learning
A training approach where a model is trained across multiple devices or organizations without sharing the raw data. Instead of sending data to a central server, each participant trains a local copy of the model on its own data and sends only the model updates (e.g., gradients or updated weights) to a central coordinator. The coordinator aggregates the updates from all participants to improve the global model.

Why it matters

Federated learning enables AI training on data that can't be centralized due to privacy, regulation, or competitive concerns. Hospitals can collaboratively train a diagnostic model without sharing patient records. Companies can improve a shared model without exposing proprietary data. It's one of the most practical approaches to privacy-preserving AI training at scale.

Deep Dive

The standard federated learning algorithm, Federated Averaging (FedAvg), works in rounds: (1) the server sends the current model to selected participants, (2) each participant trains the model on its local data for several steps, (3) participants send their updated model weights (not data) back to the server, (4) the server averages the updates into a new global model, (5) repeat. The key property: raw data never leaves the participant's device.
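The five steps above can be sketched in a few lines. This is a minimal simulation, assuming a simple linear model with mean-squared-error loss and four in-memory "participants"; real deployments would run local training on separate devices and weight the average by each participant's sample count.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, steps=5):
    """Step 2: a participant runs a few gradient steps on its local data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

# Simulated participants, each holding private (X, y) data the server never sees.
true_w = np.array([2.0, -1.0])
participants = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    participants.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # step 5: repeat for several rounds
    # Step 1: server sends the current model; steps 2-3: local training + upload.
    local_ws = [local_train(global_w, X, y) for X, y in participants]
    # Step 4: the server averages the returned weights (unweighted here;
    # real FedAvg weights each participant by its number of samples).
    global_w = np.mean(local_ws, axis=0)

print(np.round(global_w, 2))
```

Note that only the weight vectors cross the participant/server boundary; the `(X, y)` arrays stay local, which is the property the algorithm is built around.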

Challenges

Non-IID data: participants often have very different data distributions (a hospital in Tokyo has different patient demographics than one in São Paulo). This makes training unstable, since updates from different participants may conflict.

Communication cost: sending model updates (potentially billions of parameters) over the network is expensive, especially for mobile devices.

Free-riders: participants who receive the improved model but contribute low-quality updates.

These challenges make federated learning harder than centralized training, though each has active solutions.
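For the communication-cost challenge, one common mitigation is to compress updates before sending them. A sketch of top-k sparsification (an illustrative technique, not the only option): transmit only the k largest-magnitude entries of the update as index/value pairs, and let the server reconstruct a sparse vector.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude entries; send indices + values."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

# Hypothetical model update with a few large entries and many near-zero ones.
update = np.array([0.01, -3.2, 0.005, 1.7, -0.02])
idx, vals = top_k_sparsify(update, k=2)

# Server side: rebuild a sparse approximation of the full update.
dense = np.zeros_like(update)
dense[idx] = vals
print(idx, vals)
```

For large models most update mass is concentrated in a small fraction of coordinates, so sending k indices and values instead of the full parameter vector can cut bandwidth substantially at a modest accuracy cost.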

Real-World Use

Apple uses federated learning for keyboard prediction (learning from what you type without sending your texts to Apple). Google uses it for search suggestion improvement. Healthcare consortiums use it for multi-hospital model training. The technique is most valuable when: the data is truly sensitive (medical, financial), regulation prevents data sharing (GDPR, HIPAA), or the data is too large to centralize (billions of mobile device interactions).
