Fundamentals

Word Embedding

Word2Vec, GloVe, Word Vectors
Dense vector representations of words in which words with similar meanings have similar vectors. Word2Vec (2013) and GloVe (2014) pioneered the approach: trained on word co-occurrence patterns, they produce vectors where "king − man + woman ≈ queen". Word embeddings were the precursors of modern contextual embeddings (BERT, sentence-transformers) and remain foundational for understanding how neural networks represent language.

Why It Matters

Word embeddings were the breakthrough that made neural NLP practical. Before them, words were represented as one-hot vectors, which carry no notion of similarity. Word embeddings proved that distributed representations could capture meaning, analogy, and semantic relationships. This idea, representing discrete symbols as learned continuous vectors, is the foundation of every modern language model.
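To make the contrast concrete: every pair of distinct one-hot vectors is orthogonal, so "cat" is exactly as dissimilar from "dog" as from "car". A tiny NumPy sketch (the library choice is an assumption, not something the article prescribes):

```python
import numpy as np

# One-hot encoding over a 3-word vocabulary: each word is its own axis.
cat = np.array([1, 0, 0])
dog = np.array([0, 1, 0])
car = np.array([0, 0, 1])

# The dot product (and hence cosine similarity) between any two distinct
# words is 0: the representation encodes identity, not meaning.
print(cat @ dog, cat @ car)  # -> 0 0
```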

Deep Dive

Word2Vec (Mikolov et al., 2013, Google) trains by either predicting a word from its context (CBOW) or predicting the context from a word (Skip-gram). GloVe (Pennington et al., 2014, Stanford) instead fits vectors to the statistics of a global word co-occurrence matrix. Both produce similar results: 100–300-dimensional vectors where cosine similarity correlates with semantic similarity. These vectors capture remarkable relationships: countries map to capitals, verbs map to their tenses, and analogies are solvable through vector arithmetic.
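A minimal sketch of this vector arithmetic, assuming the gensim library and its pretrained-model downloader (tooling the article itself doesn't prescribe); `glove-wiki-gigaword-100` is one of the GloVe vector sets gensim hosts, and the first call downloads it:

```python
import gensim.downloader as api

# 100-dimensional GloVe vectors trained on Wikipedia + Gigaword.
wv = api.load("glove-wiki-gigaword-100")

# Cosine similarity correlates with semantic similarity.
print(wv.similarity("river", "stream"))    # relatively high
print(wv.similarity("river", "keyboard"))  # relatively low

# The classic analogy, solved by nearest neighbor to (king - man + woman).
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# expected top result: ('queen', ...)
```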

Static vs. Contextual

Word2Vec and GloVe produce one vector per word, regardless of context. "Bank" gets the same embedding whether it means "river bank" or "financial bank." Contextual embeddings (ELMo, then BERT) solved this by producing different representations depending on context. Modern sentence embeddings (from models like BGE, E5) go further, embedding entire sentences into vectors. Each generation improved on the last, but the core idea — meaning as a vector — started with Word2Vec.
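The sketch below makes the distinction concrete, assuming Hugging Face transformers and PyTorch (my choice of tooling, not the article's): it extracts the contextual vector for "bank" in two sentences from bert-base-uncased. A static table like GloVe would return the identical vector both times; BERT does not:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return the contextual hidden state for the token 'bank'."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    position = inputs["input_ids"][0].tolist().index(
        tok.convert_tokens_to_ids("bank")
    )
    return hidden[position]

v_river = bank_vector("She sat on the river bank.")
v_money = bank_vector("He deposited cash at the bank.")

# Well below 1.0: the two senses of 'bank' get different vectors.
print(torch.cosine_similarity(v_river, v_money, dim=0).item())
```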

The Legacy

Word2Vec's biggest contribution wasn't the algorithm but the demonstration that neural networks can learn useful representations of language from raw text. This proof of concept inspired the progression from word vectors to sentence vectors to contextual embeddings to full language models. The embedding layer of every LLM is a direct descendant of word embeddings: a lookup table mapping discrete tokens to learned continuous vectors, just at a much larger scale.
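A sketch of that lookup table in PyTorch (an assumed framework; the sizes are illustrative rather than those of any specific LLM):

```python
import torch
import torch.nn as nn

# An LLM's embedding layer reduced to its essence: a trainable table
# mapping discrete token ids to continuous vectors, learned end to end.
vocab_size, dim = 50_000, 768
embedding = nn.Embedding(vocab_size, dim)

token_ids = torch.tensor([[15, 2048, 7]])  # one sequence of 3 token ids
vectors = embedding(token_ids)             # row lookups into the table
print(vectors.shape)                       # torch.Size([1, 3, 768])
```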
