Fundamentals

Word Embedding

Word2Vec, GloVe, Word Vectors
Dense vector representations of words in which words with similar meanings have similar vectors. Word2Vec (2013) and GloVe (2014) pioneered the approach: they train on word co-occurrence patterns to produce vectors where "king − man + woman ≈ queen". Word embeddings were the precursors of modern contextual embeddings (BERT, sentence-transformers) and remain foundational for understanding how neural networks represent language.
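A minimal sketch of that vector arithmetic, using hypothetical two-dimensional toy vectors (real embeddings are learned and have hundreds of dimensions):

```python
import numpy as np

# Hypothetical 2-dimensional embeddings: axis 0 ~ "royalty", axis 1 ~ "gender".
king  = np.array([1.0,  1.0])
queen = np.array([1.0, -1.0])
man   = np.array([0.0,  1.0])
woman = np.array([0.0, -1.0])

# The analogy as vector arithmetic: king - man + woman
result = king - man + woman

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(result, queen))  # 1.0: the result points exactly at "queen"
```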

Why it matters

Word embeddings were the breakthrough that made neural NLP practical. Before them, words were represented as one-hot vectors, which carry no notion of similarity. Word embeddings proved that distributed representations could capture meaning, analogy, and semantic relationships. That idea, representing discrete symbols as learned continuous vectors, is the foundation of all modern language models.
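A small illustration of the contrast, with made-up dense values chosen only to show the idea:

```python
import numpy as np

# One-hot: every word is orthogonal to every other, so similarity carries no meaning.
cat_onehot = np.array([1, 0, 0])
dog_onehot = np.array([0, 1, 0])
print(cat_onehot @ dog_onehot)  # 0: "cat" and "dog" look completely unrelated

# Dense embeddings (illustrative values): related words point in similar directions.
cat_dense = np.array([0.8, 0.1, 0.6])
dog_dense = np.array([0.7, 0.2, 0.6])
cos = cat_dense @ dog_dense / (np.linalg.norm(cat_dense) * np.linalg.norm(dog_dense))
print(round(cos, 2))  # close to 1: the vectors encode that cats and dogs are similar
```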

Deep Dive

Word2Vec (Mikolov et al., 2013, Google) trains by either predicting a word from its context (CBOW) or predicting context from a word (Skip-gram). GloVe (Pennington et al., 2014, Stanford) factorizes the word co-occurrence matrix. Both produce similar results: 100–300 dimensional vectors where cosine similarity correlates with semantic similarity. These vectors capture remarkable relationships: countries map to capitals, verbs map to tenses, and analogies are solvable through vector arithmetic.
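A minimal training sketch using the gensim library (assumed available); the corpus and hyperparameters here are illustrative, and a toy corpus this small will not produce meaningful vectors:

```python
from gensim.models import Word2Vec

# Tiny illustrative corpus; a real run needs millions of tokenized sentences.
corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "man", "walks", "in", "the", "city"],
    ["the", "woman", "walks", "in", "the", "city"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # embedding dimensionality (100-300 is typical)
    window=5,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,             # 1 = Skip-gram, 0 = CBOW
)

# Cosine similarity and analogy queries over the trained vectors.
print(model.wv.similarity("king", "queen"))
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```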

Static vs. Contextual

Word2Vec and GloVe produce one vector per word, regardless of context. "Bank" gets the same embedding whether it means "river bank" or "financial bank." Contextual embeddings (ELMo, then BERT) solved this by producing different representations depending on context. Modern sentence embeddings (from models like BGE, E5) go further, embedding entire sentences into vectors. Each generation improved on the last, but the core idea — meaning as a vector — started with Word2Vec.

The Legacy

Word2Vec's biggest contribution wasn't the algorithm but the demonstration that neural networks can learn useful representations of language from raw text. This proof of concept inspired the progression from word vectors to sentence vectors to contextual embeddings to full language models. The embedding layer of every LLM is a direct descendant of word embeddings: a lookup table mapping discrete tokens to learned continuous vectors, just at a much larger scale.
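That lookup table is literally an embedding layer. A minimal sketch in PyTorch, with toy sizes and a hypothetical token-ID sequence:

```python
import torch
import torch.nn as nn

vocab_size, dim = 50_000, 768           # toy numbers; modern LLMs use far larger vocabularies
embedding = nn.Embedding(vocab_size, dim)  # a learnable table of vocab_size vectors

token_ids = torch.tensor([[101, 7592, 2088, 102]])  # a hypothetical tokenized sentence
vectors = embedding(token_ids)           # lookup: shape (1, 4, 768) of learned continuous vectors
print(vectors.shape)
```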
