Basics

Embedding Layer

Token Embedding, Embedding Table, Lookup Table
A lookup table that maps every token in the vocabulary to a dense vector (that token's embedding). When the model receives token ID 42, the embedding layer returns row 42 of a learned matrix. This vector is the model's initial representation of the token: the starting point for all subsequent processing by the attention and feed-forward layers.

Why It Matters

The embedding layer is where text becomes math. Every LLM starts by converting discrete tokens (words, subwords) into continuous vectors that a neural network can process. The embedding table is also one of the largest components of a small model: a 128K-token vocabulary with 4096-dimensional embeddings is roughly 524 million parameters. Understanding this helps you reason about model size and vocabulary design.
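As a quick sanity check, that figure is just the product of the table's two dimensions. A back-of-the-envelope sketch in Python, using the illustrative numbers from the text:

```python
# Parameter count of the embedding table alone: one row per vocabulary
# entry, one column per embedding dimension.
vocab_size = 128_000   # illustrative 128K vocabulary
model_dim = 4_096      # illustrative embedding dimension
params = vocab_size * model_dim
print(f"{params:,} parameters")  # 524,288,000, over half a billion
```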

Deep Dive

The embedding layer is just a matrix E of shape (vocab_size, model_dim). For token ID i, the embedding is E[i] — a simple row lookup, no computation. But these embeddings are learned during training: tokens that appear in similar contexts get similar embeddings. The classic example: the embeddings for "king" − "man" + "woman" ≈ "queen," showing that the embedding space captures semantic relationships.
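A minimal sketch of the lookup, assuming PyTorch's nn.Embedding (the sizes here are illustrative, not from any particular model):

```python
import torch
import torch.nn as nn

# The embedding layer is a (vocab_size, model_dim) matrix; a forward
# pass is a row lookup, with no matrix multiplication involved.
vocab_size, model_dim = 50_000, 512
embedding = nn.Embedding(vocab_size, model_dim)

token_ids = torch.tensor([42, 1337])   # IDs produced by the tokenizer
vectors = embedding(token_ids)         # rows 42 and 1337 of the matrix
print(vectors.shape)                   # torch.Size([2, 512])

# Equivalent to indexing the weight matrix directly:
assert torch.equal(vectors, embedding.weight[token_ids])
```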

Tied Embeddings

Many models share (tie) the embedding matrix with the output layer (the "unembedding" or "language model head"). The output layer converts hidden states back into vocabulary probabilities by computing a dot product with each token's embedding. Tying these layers means the same embedding both represents a token on input and predicts it on output, saving parameters and often improving quality. Many modern LLMs tie their embeddings, especially smaller models where the savings are proportionally largest, though some larger models keep separate input and output matrices.
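A minimal sketch of weight tying in PyTorch, using a hypothetical toy model (the class and layer names are illustrative, not from any real codebase):

```python
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size: int, model_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, model_dim)
        self.lm_head = nn.Linear(model_dim, vocab_size, bias=False)
        # Tie the weights: nn.Linear stores its weight as
        # (out_features, in_features) = (vocab_size, model_dim),
        # exactly the shape of the embedding table, so the output
        # projection can reuse the same parameter.
        self.lm_head.weight = self.embed.weight

    def forward(self, token_ids):
        h = self.embed(token_ids)   # (batch, seq, model_dim)
        # ... transformer blocks would run here ...
        return self.lm_head(h)      # (batch, seq, vocab_size) logits
```

The dot-product view from the paragraph above falls out directly: lm_head(h) computes the dot product of each hidden state with every token's embedding.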

Positional + Token Embeddings

The full input representation is typically: token_embedding + positional_encoding. The token embedding captures what the token means; the positional encoding captures where it appears in the sequence. In models with learned position embeddings (e.g., BERT), this is a second embedding table indexed by position. In models with RoPE (e.g., LLaMA), positional information is injected differently (by rotating the Q and K vectors inside attention), and the embedding layer handles only token identity.
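A minimal sketch of the BERT-style case with a learned position table (sizes are illustrative; a RoPE model would skip pos_embed entirely):

```python
import torch
import torch.nn as nn

vocab_size, max_len, model_dim = 30_000, 512, 768
token_embed = nn.Embedding(vocab_size, model_dim)  # indexed by token ID
pos_embed = nn.Embedding(max_len, model_dim)       # indexed by position

token_ids = torch.tensor([[101, 2054, 2003, 102]])        # (batch=1, seq=4)
positions = torch.arange(token_ids.size(1)).unsqueeze(0)  # [[0, 1, 2, 3]]

# The input to the first transformer layer: what + where.
x = token_embed(token_ids) + pos_embed(positions)  # (1, 4, 768)
```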

Related Concepts

Embedding Emergence