
Transfer Learning

Using knowledge learned from one task or dataset to improve performance on a different but related task. Instead of training from scratch every time, you start with a model that already understands general patterns (language structure, visual features) and adapt it to your specific need. Pre-training followed by fine-tuning is the dominant paradigm in modern AI.

Why It Matters

Transfer learning is why AI became practical. Training a language model from scratch costs millions of dollars. Fine-tuning a pre-trained model on your specific task costs tens of dollars and a few hours. This economics is what enabled the explosion of AI applications: you don't need Google's budget to build something useful.

Deep Dive

The key insight: low-level features transfer across tasks. A vision model trained on ImageNet learns to detect edges, textures, and shapes in its early layers — features useful for almost any visual task. A language model trained on web text learns grammar, facts, and reasoning patterns useful for almost any language task. Transfer learning exploits this by reusing the general knowledge and only training the task-specific parts.
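
To make this concrete, here is a minimal PyTorch sketch of feature reuse (assuming torchvision is installed): load a ResNet-18 pre-trained on ImageNet, freeze the layers that hold those general features, and train only a new task-specific head. The 10-class output is an illustrative placeholder, not something from the original text.

import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet; its early layers already
# detect the edges, textures, and shapes described above.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze every pre-trained parameter so the general features are
# reused as-is instead of being retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with a fresh layer for the new task
# (10 classes is an arbitrary placeholder).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters receive gradient updates.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

Because gradients never flow into the frozen backbone, training touches only the small new head, which is a large part of why adapting a pre-trained model is so much cheaper than training one from scratch.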

The Pre-train + Fine-tune Paradigm

Almost every AI system today follows this pattern: (1) pre-train a large model on a massive, general dataset (expensive, done once), (2) fine-tune on a smaller, task-specific dataset (cheap, done many times). BERT pioneered this for NLP in 2018. GPT scaled it up. The entire LLM industry is built on this paradigm — foundation models are the pre-trained base, and fine-tuning (including RLHF/DPO) is how they become useful assistants.
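
A minimal sketch of step (2) using the Hugging Face transformers and datasets libraries (assumed available). The bert-base-uncased checkpoint, the IMDB dataset, and the 2,000-example subset are illustrative choices, not prescriptions:

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Step 1 was already done by someone else: start from a pre-trained checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Step 2: fine-tune on a small, task-specific dataset.
dataset = load_dataset("imdb")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length"),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-bert", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()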

When Transfer Fails

Transfer learning works best when the source and target domains are related. A model pre-trained on English text transfers well to French (similar structure) but poorly to protein sequences (completely different domain). When domains are too different, transfer can actually hurt performance ("negative transfer"). Domain-specific pre-training (like BioGPT for biomedical text or CodeLlama for code) addresses this by pre-training on domain-relevant data.
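
The practical takeaway is to pick a starting checkpoint whose pre-training domain matches your target domain, sketched below under the assumption that the gpt2 and microsoft/biogpt checkpoints are available on the Hugging Face Hub:

from transformers import AutoModelForCausalLM

# A general-purpose checkpoint is a reasonable start for everyday text...
general = AutoModelForCausalLM.from_pretrained("gpt2")

# ...but for biomedical text, a domain-specific checkpoint narrows the
# domain gap that causes negative transfer.
biomedical = AutoModelForCausalLM.from_pretrained("microsoft/biogpt")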
