Core Concepts

Embedding Layer

Token Embedding, Embedding Table, Lookup Table
A lookup table that maps every token in the vocabulary to a dense vector (that token's embedding). When the model receives token ID 42, the embedding layer returns row 42 of a learned matrix. That vector is the model's initial representation of the token: the starting point for all subsequent processing through the attention and feedforward layers.
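
A minimal sketch of that lookup in NumPy; the vocabulary size, embedding dimension, and token ID here are illustrative, and the table is random rather than learned:

```python
import numpy as np

# Illustrative sizes; real models use far larger vocabularies and dimensions.
vocab_size, model_dim = 50_000, 768

# The embedding table: one row per vocabulary token.
# (Random here; in a real model these values come from training.)
E = np.random.randn(vocab_size, model_dim).astype(np.float32)

token_id = 42
embedding = E[token_id]    # a plain row lookup, no arithmetic
print(embedding.shape)     # (768,)
```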

Why It Matters

Embedding layer वो जगह है जहाँ text math बन जाता है। हर LLM discrete tokens (words, subwords) को continuous vectors में convert करके शुरू होता है जिन्हें neural network process कर सके। Embedding table small models के सबसे बड़े components में से एक भी है — 4096-dimensional embeddings के साथ एक 128K vocabulary 512 million parameters है। ये समझना आपको model sizes और vocabulary design के बारे में reason करने में help करता है।
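
The arithmetic behind that figure is just the product of the two table dimensions, using the sizes quoted above:

```python
vocab_size = 128_000   # 128K-token vocabulary
model_dim = 4_096      # embedding dimension

embedding_params = vocab_size * model_dim
print(f"{embedding_params:,}")   # 524,288,000 -> a bit over half a billion parameters
```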

Deep Dive

The embedding layer is just a matrix E of shape (vocab_size, model_dim). For token ID i, the embedding is E[i] — a simple row lookup, no computation. But these embeddings are learned during training: tokens that appear in similar contexts get similar embeddings. The classic example: the embeddings for "king" − "man" + "woman" ≈ "queen," showing that the embedding space captures semantic relationships.
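
In framework terms the table is literally a trainable parameter matrix. A minimal PyTorch sketch with illustrative sizes and token IDs; with a randomly initialized table the analogy check mentioned at the end is only meaningful after training:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, model_dim = 50_000, 768
embedding = nn.Embedding(vocab_size, model_dim)   # weight has shape (vocab_size, model_dim)

token_ids = torch.tensor([42, 1337, 7])
vectors = embedding(token_ids)                    # three row lookups -> shape (3, 768)
assert torch.equal(vectors[0], embedding.weight[42])

# The rows are ordinary trainable parameters: gradients flow into them during
# training, which is how tokens used in similar contexts end up with similar rows.
# With a trained table, the "king - man + woman ~ queen" relationship is usually
# checked with cosine similarity between the analogy vector and the target row:
def cosine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return F.cosine_similarity(a, b, dim=0)
```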

Tied Embeddings

Many models share (tie) the embedding matrix with the output layer (the "unembedding" or "language model head"). The output layer converts hidden states back into vocabulary probabilities by computing a dot product with each token's embedding. Tying these layers means the same embedding both represents a token on input and predicts it on output, saving parameters and often improving quality. Many modern LLMs tie their embeddings, especially smaller models where the embedding table is a large share of the total parameter count.
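
A sketch of the tying pattern in PyTorch, using a toy module; the class and its attribute names are illustrative, not taken from any particular library:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=50_000, model_dim=768):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, model_dim)
        self.lm_head = nn.Linear(model_dim, vocab_size, bias=False)
        # Tie the weights: the same (vocab_size, model_dim) matrix represents a
        # token on the way in and scores it (via dot product) on the way out.
        self.lm_head.weight = self.embed.weight

    def forward(self, token_ids):
        h = self.embed(token_ids)   # stand-in for the full attention/FFN stack
        return self.lm_head(h)      # logits: dot product of h with every embedding row

model = TinyLM()
logits = model(torch.tensor([[42, 7]]))
print(logits.shape)                 # torch.Size([1, 2, 50000])
```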

Positional + Token Embeddings

The full input representation is typically: token_embedding + positional_encoding. The token embedding captures what the token means. The positional encoding captures where it appears in the sequence. In models with learned position embeddings (BERT), this is a second embedding table indexed by position. In models with RoPE (LLaMA), positional information is injected differently (by rotating Q and K vectors), and the embedding layer only handles token identity.
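
A sketch of the learned-position case described above, with toy sizes; in a RoPE model the position table would not exist and only the token lookup would happen here:

```python
import torch
import torch.nn as nn

vocab_size, max_len, model_dim = 50_000, 512, 768

token_embed = nn.Embedding(vocab_size, model_dim)   # what the token means
pos_embed = nn.Embedding(max_len, model_dim)        # where it sits in the sequence

token_ids = torch.tensor([[101, 7592, 2088, 102]])         # (batch=1, seq_len=4)
positions = torch.arange(token_ids.size(1)).unsqueeze(0)   # [[0, 1, 2, 3]]

x = token_embed(token_ids) + pos_embed(positions)   # full input representation
print(x.shape)                                      # torch.Size([1, 4, 768])
```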

Related Concepts

Embedding, Emergence