Fundamentals

SwiGLU

Gated Linear Unit, GLU Variants
A gated activation function used in the feedforward layers of modern Transformers. SwiGLU combines the SiLU/Swish activation with a gating mechanism: SwiGLU(x) = SiLU(W1 · x) ⊗ (W3 · x), where ⊗ is element-wise multiplication. This lets the network learn which information to pass through, consistently outperforming standard ReLU or GELU feedforward layers.

Why It Matters

SwiGLU is the feedforward activation used by LLaMA, Mistral, Qwen, Gemma, and most modern LLMs. Understanding it helps you read model architectures and explains why modern FFN layers have three weight matrices instead of two. It is a small architectural choice with an outsized impact on model quality.

Deep Dive

Standard FFN: FFN(x) = W2 · GELU(W1 · x). Two weight matrices, one activation. SwiGLU FFN: FFN(x) = W2 · (SiLU(W1 · x) ⊗ (W3 · x)). Three weight matrices and a gating mechanism. The gate (W3 · x) controls what passes through, letting the network selectively suppress or amplify individual features. To keep the parameter count constant, the intermediate dimension is typically reduced from 4 × model_dim to (8/3) × model_dim.
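A minimal PyTorch sketch of the three-matrix block described above (class and variable names are illustrative, not taken from any particular codebase):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    """Feedforward block: W2 · (SiLU(W1 · x) ⊗ (W3 · x))."""

    def __init__(self, model_dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(model_dim, hidden_dim, bias=False)  # branch fed to SiLU
        self.w3 = nn.Linear(model_dim, hidden_dim, bias=False)  # gate branch
        self.w2 = nn.Linear(hidden_dim, model_dim, bias=False)  # down projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(F.silu(self.w1(x)) * self.w3(x))  # * is the element-wise ⊗

# Keeping parameters comparable to a standard 4x FFN: hidden_dim ≈ (8/3) × model_dim.
# Real models usually round this up to a hardware-friendly multiple.
ffn = SwiGLUFFN(model_dim=512, hidden_dim=int(8 * 512 / 3))
out = ffn(torch.randn(2, 16, 512))  # (batch, seq, model_dim) -> same shape
```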

Why Gating Helps

Gating gives the network a multiplicative interaction that standard activations lack. Standard activations apply a fixed non-linearity. Gating applies a learned, input-dependent non-linearity. This additional expressiveness helps the network learn more complex functions per layer, which means you need fewer layers (or smaller layers) for equivalent performance. Shazeer (2020) showed that GLU variants consistently outperform standard FFN across model sizes.
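A toy illustration of that input dependence (random, untrained weights): the SiLU branch applies the same fixed curve to every input, but the gate rescales each hidden feature by an amount that itself depends on the input.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, h = 6, 4
w1, w3 = torch.randn(d, h), torch.randn(d, h)  # toy weights, not trained

x = torch.randn(2, d)   # two different inputs

act = F.silu(x @ w1)    # fixed pointwise non-linearity
gate = x @ w3           # learned, input-dependent multiplier per feature

# The gate differs between the two inputs: the same hidden feature can be
# suppressed (gate ≈ 0), sign-flipped, or amplified depending on x.
print(gate)
print(act * gate)       # gated output (before the down projection W2)
```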

The GLU Family

SwiGLU is one of several GLU variants: GeGLU (uses GELU instead of SiLU), ReGLU (uses ReLU), and the original GLU (uses sigmoid). SwiGLU and GeGLU perform similarly and both outperform ReGLU. The choice between them is mostly empirical — SwiGLU has become the default through convention (LLaMA adopted it, others followed) rather than clear theoretical superiority over GeGLU.
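The whole family fits one template; a compact sketch with toy weights, where only the non-linearity on the first branch changes:

```python
import torch
import torch.nn.functional as F

def glu_variant(x, w1, w3, activation):
    # Shared template: activation(x @ w1) gated by the linear branch (x @ w3).
    return activation(x @ w1) * (x @ w3)

x = torch.randn(2, 8)
w1, w3 = torch.randn(8, 4), torch.randn(8, 4)

glu    = glu_variant(x, w1, w3, torch.sigmoid)  # original GLU
reglu  = glu_variant(x, w1, w3, F.relu)         # ReGLU
geglu  = glu_variant(x, w1, w3, F.gelu)         # GeGLU
swiglu = glu_variant(x, w1, w3, F.silu)         # SwiGLU
```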
