
Layer

Also known as: Hidden Layer, Neural Network Layer
A group of neurons that processes data at a specific level of abstraction in a neural network. The input layer receives raw data. Hidden layers (the middle ones) learn increasingly abstract representations. The output layer produces the final result. "Deep" learning means many hidden layers — modern LLMs have 32 to 128+ layers.
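As a minimal sketch of the input/hidden/output structure (all sizes and weights here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical sizes: 4 raw input features, 8 hidden neurons, 2 outputs.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden layer -> output layer

x = rng.normal(size=(1, 4))   # raw data enters at the input layer
h = relu(x @ W1 + b1)         # hidden layer: learned intermediate representation
y = h @ W2 + b2               # output layer: final result
print(y.shape)                # (1, 2)
```

A "deep" version of this network would simply insert more hidden layers between `W1` and `W2`, each transforming the previous layer's output.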

Why it matters

Layers create the hierarchy that makes deep learning powerful. Early layers learn simple patterns (edges in images, word fragments in text). Middle layers combine these into concepts (faces, phrases). Deep layers combine concepts into high-level understanding (scene recognition, reasoning). The depth of a network determines the complexity of patterns it can learn.

Deep Dive

In a Transformer, each layer (called a "block") consists of two sub-layers: a multi-head attention layer (which mixes information across tokens) and a feedforward network (which processes each token independently). Each sub-layer has a residual connection (the input is added back to the output) and normalization. A 32-layer Transformer applies this attention+FFN pattern 32 times, each time refining the representation.
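A single-head, numpy-only sketch of one such block can make the structure concrete (real implementations use multiple heads, learned normalization parameters, and pre- or post-norm variants; every name and size here is illustrative):

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each token's features to zero mean, unit variance.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def transformer_block(x, Wq, Wk, Wv, Wo, W1, W2):
    # Sub-layer 1: self-attention mixes information across tokens.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v @ Wo
    x = layer_norm(x + attn)                 # residual connection + norm
    # Sub-layer 2: feedforward network, applied to each token independently.
    ffn = np.maximum(0.0, x @ W1) @ W2       # simple ReLU MLP
    return layer_norm(x + ffn)               # residual connection + norm

rng = np.random.default_rng(0)
d, seq_len, n_layers = 16, 5, 4              # toy sizes
x = rng.normal(size=(seq_len, d))
shapes = [(d, d)] * 4 + [(d, 4 * d), (4 * d, d)]
weights = [tuple(rng.normal(size=s) * 0.1 for s in shapes)
           for _ in range(n_layers)]

# A 4-layer model applies the attention+FFN pattern 4 times.
for w in weights:
    x = transformer_block(x, *w)
print(x.shape)   # (5, 16)
```

Note that the representation keeps the same shape through every block, which is what lets the same pattern be stacked 32, 64, or 128 times.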

What Happens Across Layers

Research has revealed a rough pattern in LLMs: early layers handle syntax and surface patterns, middle layers handle semantic meaning and entity recognition, and late layers handle task-specific reasoning and output formatting. This isn't a hard boundary — information flows through all layers via residual connections — but it explains why some fine-tuning techniques only modify certain layers and why pruning middle layers often hurts more than pruning early or late ones.
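This layer-wise division of labor is why a fine-tuning setup might update only the last few layers. A schematic, framework-free sketch (the layer count, cutoff, and stand-in values are arbitrary):

```python
n_layers = 32
tune_from = 28    # hypothetical: fine-tune only layers 28-31

layer_params = {i: 1.0 for i in range(n_layers)}   # stand-in for real weights
gradients = {i: 0.5 for i in range(n_layers)}      # stand-in for real gradients

for i in range(n_layers):
    if i >= tune_from:
        # Late layers (task-specific reasoning/formatting) get updated.
        layer_params[i] -= 0.1 * gradients[i]
    # Earlier layers stay frozen: the syntax and semantics they
    # encode are reused as-is.

updated = [i for i in range(n_layers) if layer_params[i] != 1.0]
print(updated)   # [28, 29, 30, 31]
```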

Width vs. Depth

A network's "width" is the number of neurons per layer (the model dimension). Its "depth" is the number of layers. Both matter, but they contribute differently: wider layers can represent more features simultaneously, while deeper networks can learn more complex, compositional patterns. Modern LLMs tend to be both wide (dimensions of 4096–8192) and deep (32–128 layers). Scaling laws suggest that width and depth should be scaled together for optimal performance.
