Fundamentals

Layer

Hidden Layer, Neural Network Layer
A group of neurons that processes data at a specific level of abstraction in a neural network. The input layer receives raw data. The hidden layers (the ones in the middle) learn increasingly abstract representations. The output layer produces the final result. "Deep" learning means many hidden layers; modern LLMs have 32 to 128+ layers.
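
As a minimal sketch of this input -> hidden -> output structure, assuming PyTorch (the layer sizes here are illustrative, not taken from any particular model):

```python
import torch
import torch.nn as nn

# Input layer -> two hidden layers -> output layer.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),  # input (e.g. a flattened 28x28 image) -> hidden
    nn.Linear(256, 128), nn.ReLU(),  # deeper hidden layer: more abstract features
    nn.Linear(128, 10),              # output layer: final result (10 class scores)
)

x = torch.randn(1, 784)   # one raw input example
logits = model(x)         # shape (1, 10)
```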

Why It Matters

Layers create the hierarchy that makes deep learning powerful. Early layers learn simple patterns (edges in images, word fragments in text). Middle layers combine these into concepts (faces, phrases). Deep layers combine concepts into high-level understanding (scene recognition, reasoning). A network's depth determines the complexity of the patterns it can learn.

Deep Dive

In a Transformer, each layer (called a "block") consists of two sub-layers: a multi-head attention layer (which mixes information across tokens) and a feedforward network (which processes each token independently). Each sub-layer has a residual connection (the input is added back to the output) and normalization. A 32-layer Transformer applies this attention+FFN pattern 32 times, each time refining the representation.
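
A minimal sketch of one such block, assuming PyTorch and a pre-norm arrangement (normalization applied before each sub-layer); the class name and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn_norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        # Sub-layer 1: attention mixes information across tokens;
        # the residual connection adds the input back to the output.
        h = self.attn_norm(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        # Sub-layer 2: the feedforward network processes each token
        # independently, again with a residual connection.
        return x + self.ffn(self.ffn_norm(x))

# A 32-layer Transformer applies this attention+FFN pattern 32 times.
stack = nn.Sequential(*[TransformerBlock() for _ in range(32)])
out = stack(torch.randn(1, 16, 512))  # (batch, tokens, model dimension)
```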

What Happens Across Layers

Research has revealed a rough pattern in LLMs: early layers handle syntax and surface patterns, middle layers handle semantic meaning and entity recognition, and late layers handle task-specific reasoning and output formatting. This isn't a hard boundary — information flows through all layers via residual connections — but it explains why some fine-tuning techniques only modify certain layers and why pruning middle layers often hurts more than pruning early or late ones.
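
As a hypothetical illustration of why that rough division matters in practice, here is one common layer-selective fine-tuning pattern, reusing the 32-block stack from the sketch above: freeze everything, then unfreeze only the late blocks that tend to carry task-specific behavior.

```python
# Assumes `stack` is the 32-block nn.Sequential from the previous sketch.
for param in stack.parameters():
    param.requires_grad = False       # freeze the whole network

for block in stack[-4:]:              # unfreeze only the last 4 (late) blocks
    for param in block.parameters():
        param.requires_grad = True

trainable = sum(p.numel() for p in stack.parameters() if p.requires_grad)
total = sum(p.numel() for p in stack.parameters())
print(f"training {trainable:,} of {total:,} parameters")
```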

Width vs. Depth

A network's "width" is the number of neurons per layer (the model dimension). Its "depth" is the number of layers. Both matter, but they contribute differently: wider layers can represent more features simultaneously, while deeper networks can learn more complex, compositional patterns. Modern LLMs tend to be both wide (dimensions of 4096–8192) and deep (32–128 layers). Scaling laws suggest that width and depth should be scaled together for optimal performance.

Related Concepts

← All terms
← Latency Learning Rate Schedule →