Fundamentals

Multi-Head Attention (MHA)
Running multiple attention operations in parallel, each with its own learned projections of the queries, keys, and values. Instead of a single attention function seeing the entire model dimension, multi-head attention splits the dimension into multiple "heads" (e.g., 32 heads of 128 dimensions each for a 4096-dimension model). Each head can focus on a different type of relationship simultaneously.
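As a minimal sketch of the dimension bookkeeping (PyTorch, with illustrative batch and sequence sizes; the 4096/32 split matches the example above):

```python
import torch

batch, seq_len, model_dim, num_heads = 2, 10, 4096, 32
head_dim = model_dim // num_heads  # 4096 / 32 = 128 dimensions per head

x = torch.randn(batch, seq_len, model_dim)
# Reshape so each head sees its own 128-dim slice:
# (batch, seq_len, model_dim) -> (batch, num_heads, seq_len, head_dim)
x_heads = x.view(batch, seq_len, num_heads, head_dim).transpose(1, 2)
print(x_heads.shape)  # torch.Size([2, 32, 10, 128])
```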

Why It Matters

Multi-head attention is a key reason Transformers are so expressive. One head can focus on syntactic relationships (subject-verb), another on positional patterns (nearby words), a third on semantic similarity. This parallel specialization lets the model capture many kinds of dependencies at once, which a single attention head cannot do as effectively.

Deep Dive

The mechanism: for each head i, the model learns separate projection matrices W_Q^i, W_K^i, W_V^i that project the input into a lower-dimensional space (head_dim = model_dim / num_heads). Each head independently computes attention: softmax(Q_i · K_i^T / √d) · V_i. The outputs of all heads are concatenated and projected back to the full model dimension through a final linear layer W_O.
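A minimal PyTorch sketch of this mechanism, assuming no masking or dropout; the per-head W_Q^i, W_K^i, W_V^i are fused into single linear layers, as is common in practice:

```python
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, model_dim: int, num_heads: int):
        super().__init__()
        assert model_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = model_dim // num_heads
        # W_Q, W_K, W_V for all heads, fused into one matrix each
        self.w_q = nn.Linear(model_dim, model_dim, bias=False)
        self.w_k = nn.Linear(model_dim, model_dim, bias=False)
        self.w_v = nn.Linear(model_dim, model_dim, bias=False)
        self.w_o = nn.Linear(model_dim, model_dim, bias=False)  # final projection W_O

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, seq_len, _ = x.shape
        # Project, then split into heads: (batch, num_heads, seq_len, head_dim)
        def split(t):
            return t.view(batch, seq_len, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x))
        # Each head independently: softmax(Q_i · K_i^T / √d) · V_i
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)
        attn = torch.softmax(scores, dim=-1)
        out = attn @ v
        # Concatenate heads and project back to model_dim via W_O
        out = out.transpose(1, 2).reshape(batch, seq_len, -1)
        return self.w_o(out)
```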

Head Specialization

Research shows that different heads learn different functions. Some heads attend to the previous token (positional). Some attend to syntactically related tokens (subject to its verb). Some implement "induction" (pattern completion). Some attend broadly (gathering global context). Not all heads are equally important — pruning 20–40% of heads often has minimal impact on performance, suggesting significant redundancy.
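Head redundancy is often probed by ablating (zeroing) individual head outputs before the final projection and measuring the drop in performance. A hypothetical sketch of that masking step (the function name and importance-score selection are illustrative, not from a specific library):

```python
import torch

def ablate_heads(head_outputs: torch.Tensor, pruned: list[int]) -> torch.Tensor:
    """Zero out selected heads; head_outputs is (batch, num_heads, seq_len, head_dim).

    Because the output projection W_O sees the concatenated heads,
    zeroing a head removes its contribution entirely.
    """
    out = head_outputs.clone()
    out[:, pruned] = 0.0  # hypothetical: heads chosen by some importance score
    return out
```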

GQA and MQA

Multi-Query Attention (MQA) uses a single key-value head shared across all query heads, shrinking the KV cache by a factor equal to the number of heads. Grouped-Query Attention (GQA) is a middle ground: groups of query heads share a key-value head (e.g., 32 query heads with 8 KV heads). GQA preserves most of MHA's quality while dramatically reducing KV cache memory. Llama 2 70B, Mistral, and most modern LLMs use GQA.
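A sketch of the GQA grouping under the 32-query-head / 8-KV-head example above (illustrative shapes; repeat_interleave broadcasts each KV head to its group of query heads):

```python
import torch

batch, seq_len, head_dim = 1, 16, 128
num_q_heads, num_kv_heads = 32, 8
group_size = num_q_heads // num_kv_heads  # 4 query heads per KV head

q = torch.randn(batch, num_q_heads, seq_len, head_dim)
k = torch.randn(batch, num_kv_heads, seq_len, head_dim)
v = torch.randn(batch, num_kv_heads, seq_len, head_dim)

# Repeat each KV head so it lines up with its group of query heads
k = k.repeat_interleave(group_size, dim=1)  # -> (1, 32, 16, 128)
v = v.repeat_interleave(group_size, dim=1)

scores = q @ k.transpose(-2, -1) / head_dim**0.5
out = torch.softmax(scores, dim=-1) @ v

# The KV cache stores only the 8 KV heads: a 4x memory reduction vs. 32 heads
# (MQA, with a single KV head, would give a 32x reduction).
```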
