Fundamentals

Activation Function

ReLU, GELU, SiLU, Swish
A mathematical function applied to the output of a neuron that introduces non-linearity into the network. Without activation functions, a neural network, no matter how deep, could only learn linear relationships. ReLU, GELU, and SiLU/Swish are the most common in modern architectures.

Why it matters

Activation functions are the reason deep learning works. A stack of linear transformations is just one big linear transformation. Activation functions between layers let the network learn complex, non-linear patterns: the curves, edges, and subtle relationships that make neural networks powerful.
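
A quick way to see the collapse: two stacked linear layers are exactly equivalent to a single linear layer, while inserting a non-linearity between them breaks that equivalence. A minimal NumPy sketch, with made-up layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # first "layer"
W2 = rng.normal(size=(3, 8))   # second "layer"
x = rng.normal(size=(4,))

# Two stacked linear layers are equivalent to one linear layer (W2 @ W1).
two_linear = W2 @ (W1 @ x)
one_linear = (W2 @ W1) @ x
print(np.allclose(two_linear, one_linear))  # True: no extra expressive power

# With a ReLU in between, the composition is no longer linear,
# so it cannot be rewritten as a single matrix multiply.
nonlinear = W2 @ np.maximum(0, W1 @ x)
```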

Deep Dive

ReLU (Rectified Linear Unit) is the simplest: f(x) = max(0, x). It outputs zero for negative inputs and passes positive inputs unchanged. ReLU largely sidestepped the vanishing gradient problem that plagued earlier activation functions (sigmoid, tanh) by providing a constant gradient of 1 for positive inputs. Its simplicity and effectiveness made it the default for over a decade.
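
As a concrete sketch (plain NumPy, not tied to any framework), ReLU and its gradient look like this:

```python
import numpy as np

def relu(x):
    # Zero for negative inputs, identity for positive inputs.
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient is 0 for x < 0 and 1 for x > 0 (undefined at exactly 0;
    # implementations conventionally pick 0 or 1 there).
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```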

Beyond ReLU

GELU (Gaussian Error Linear Unit) is now the standard in Transformers (used by BERT, GPT, and most LLMs). Unlike ReLU's hard cutoff at zero, GELU smoothly tapers near zero, which provides better gradient flow. SiLU/Swish (x · sigmoid(x)) is similar and used in some architectures like LLaMA. The practical differences between GELU and SiLU are small — both outperform ReLU in Transformer-scale models.
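
A sketch of both functions in NumPy; the GELU shown here is the common tanh approximation rather than the exact x · Φ(x) form (Φ being the standard normal CDF):

```python
import numpy as np

def gelu(x):
    # Tanh approximation of GELU, widely used in Transformer implementations.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def silu(x):
    # SiLU / Swish: x * sigmoid(x).
    return x / (1.0 + np.exp(-x))

x = np.linspace(-3, 3, 7)
print(gelu(x))  # smooth taper near zero instead of ReLU's hard cutoff
print(silu(x))
```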

GLU Variants

Modern LLMs often use Gated Linear Units (GLU) and their variants (SwiGLU, GeGLU) in feed-forward layers. These multiply two parallel linear projections together, effectively letting the network gate what information passes through. SwiGLU (used in LLaMA, Mistral, and many others) combines SiLU activation with gating and consistently improves over standard feed-forward layers at the cost of slightly more parameters.
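
A minimal sketch of a SwiGLU feed-forward block along these lines; the weight names and dimensions are illustrative, not taken from any particular model:

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, W_gate, W_up, W_down):
    # Two parallel projections of the input: one passes through SiLU and
    # acts as a gate on the other, then the result is projected back down.
    gate = silu(x @ W_gate)      # (d_model,) -> (d_ff,)
    up = x @ W_up                # (d_model,) -> (d_ff,)
    return (gate * up) @ W_down  # (d_ff,)   -> (d_model,)

# Hypothetical sizes; real models use much larger dimensions.
rng = np.random.default_rng(0)
d_model, d_ff = 16, 64
x = rng.normal(size=(d_model,))
out = swiglu_ffn(x,
                 rng.normal(size=(d_model, d_ff)),
                 rng.normal(size=(d_model, d_ff)),
                 rng.normal(size=(d_ff, d_model)))
print(out.shape)  # (16,)
```

The extra gate projection is where the "slightly more parameters" come from relative to a standard two-matrix feed-forward layer.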
