Basics

Word Embedding

Word2Vec, GloVe, Word Vectors
Dense vector representations of words, where words with similar meanings get similar vectors. Word2Vec (2013) and GloVe (2014) pioneered the approach: trained on word co-occurrence patterns, they produce vectors in which "king − man + woman ≈ queen". Word embeddings are the precursor to modern contextual embeddings (BERT, sentence-transformers) and remain foundational for understanding how neural networks represent language.
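To make the vector arithmetic concrete, here is a minimal numpy sketch of the analogy query; the 4-dimensional vectors below are invented for illustration, not real learned embeddings:

```python
import numpy as np

# Toy 4-dimensional vectors, invented purely for illustration; real
# embeddings are 100-300 dimensional and learned from large corpora.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "queen": np.array([0.9, 0.0, 0.9, 0.2]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "king - man + woman" should land closest to "queen" in vector space.
target = vecs["king"] - vecs["man"] + vecs["woman"]
print(max(vecs, key=lambda w: cosine(vecs[w], target)))  # queen
```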

Why It Matters

Word embeddings were the breakthrough that made neural NLP practical. Before them, words were represented as one-hot vectors, which carry no notion of similarity. Word embeddings demonstrated that distributed representations can capture meaning, analogies, and semantic relationships. This insight, representing discrete symbols as learned continuous vectors, underpins every modern language model.
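The contrast is easy to demonstrate: distinct one-hot vectors are always orthogonal, while dense vectors can place related words near each other. A minimal sketch, with made-up dense values:

```python
import numpy as np

# One-hot vectors: every pair of distinct words is orthogonal, so their
# dot product (and cosine similarity) is always 0, no matter the words.
cat_onehot = np.array([1, 0, 0])
dog_onehot = np.array([0, 1, 0])
print(cat_onehot @ dog_onehot)  # 0: "cat" and "dog" look unrelated

# Dense embeddings (values invented for illustration) can encode that
# "cat" is more like "dog" than like "car".
cat = np.array([0.8, 0.6, 0.1])
dog = np.array([0.7, 0.7, 0.2])
car = np.array([0.1, 0.2, 0.9])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(cat, dog) > cosine(cat, car))  # True
```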

Deep Dive

Word2Vec (Mikolov et al., 2013, Google) trains by either predicting a word from its context (CBOW) or predicting the context from a word (Skip-gram). GloVe (Pennington et al., 2014, Stanford) instead fits vectors to global co-occurrence statistics, effectively factorizing the log of the word co-occurrence matrix. Both produce similar results: 100- to 300-dimensional vectors in which cosine similarity correlates with semantic similarity. These vectors capture remarkable relationships: countries map to their capitals along consistent directions, verb tenses follow regular offsets, and analogies can be solved with vector arithmetic.
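A minimal training sketch using the gensim library (one common open-source implementation, not something the original papers prescribe); the toy corpus is far too small to learn anything real and only shows the API shape:

```python
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences. Real training uses billions
# of tokens; this only demonstrates the API.
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["man", "walks", "in", "the", "city"],
    ["woman", "walks", "in", "the", "city"],
]

model = Word2Vec(
    sentences,
    vector_size=100,  # embedding dimensionality (typically 100-300)
    window=5,         # context words considered on each side of the target
    min_count=1,      # keep every word, even ones seen only once
    sg=1,             # 1 = Skip-gram, 0 = CBOW
)

print(model.wv["king"].shape)  # (100,)
# The analogy query from the text; meaningless on this tiny corpus, but
# on real data "queen" would rank near the top.
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"]))
```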

Static vs. Contextual

Word2Vec and GloVe produce one vector per word, regardless of context. "Bank" gets the same embedding whether it means "river bank" or "financial bank." Contextual embeddings (ELMo, then BERT) solved this by producing different representations depending on context. Modern sentence embeddings (from models like BGE, E5) go further, embedding entire sentences into vectors. Each generation improved on the last, but the core idea — meaning as a vector — started with Word2Vec.
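As a sketch of the difference, the snippet below uses the Hugging Face transformers library and the public bert-base-uncased checkpoint (both choices are assumptions for illustration) to show that the same word "bank" receives a different vector in each context:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def bank_vector(sentence: str) -> torch.Tensor:
    """Return the contextual vector of the token 'bank' in the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index("bank")]

v1 = bank_vector("she sat on the river bank")
v2 = bank_vector("he deposited cash at the bank")
# Same word, two different vectors; a static embedding would be identical.
print(torch.cosine_similarity(v1, v2, dim=0))
```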

The Legacy

Word2Vec's biggest contribution wasn't the algorithm but the demonstration that neural networks can learn useful representations of language from raw text. This proof of concept inspired the progression from word vectors to sentence vectors to contextual embeddings to full language models. The embedding layer of every LLM is a direct descendant of word embeddings: a lookup table mapping discrete tokens to learned continuous vectors, just at a much larger scale.
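A minimal PyTorch sketch of that lookup table, with illustrative sizes and token ids:

```python
import torch

# An embedding layer is a learned lookup table: row i of the weight
# matrix is the vector for token id i. Sizes here are illustrative; a
# modern LLM uses a far larger vocabulary and dimension.
vocab_size, dim = 50_000, 512
embedding = torch.nn.Embedding(vocab_size, dim)

token_ids = torch.tensor([[101, 2054, 2003]])  # a small batch of token ids
vectors = embedding(token_ids)
print(vectors.shape)  # torch.Size([1, 3, 512])

# Looking up a token id is literally indexing the weight matrix.
assert torch.equal(vectors[0, 0], embedding.weight[101])
```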
