
Context Length Extension

YaRN, NTK Scaling, RoPE Scaling
Techniques that enable language models to handle sequences longer than those seen during training. A model trained on 4K tokens can be extended to 32K or 128K through modifications to its positional encoding (typically RoPE), combined with short fine-tuning on longer sequences. This avoids the enormous cost of training from scratch on long sequences.

Why It Matters

Context length extension is why models went from 4K to 128K to 1M+ context windows in just two years. Training a model from scratch on million-token sequences would be cost-prohibitive. Extension techniques make long-context models practical by adapting models trained on shorter sequences, requiring only a fraction of the original training compute.

Deep Dive

The core challenge: RoPE (Rotary Position Embeddings) encodes position as rotation angles. Positions beyond the training length produce angles the model has never seen, so attention patterns break down. Extension techniques modify how positions map to rotation angles so that longer sequences produce angles within the model's trained range.

NTK-Aware Scaling

NTK-aware interpolation (named after the Neural Tangent Kernel) adjusts RoPE frequencies non-uniformly: high-frequency components, which capture local patterns like word order and syntax, are preserved, while low-frequency components, which encode long-range position, are interpolated. The model keeps its fine-grained local resolution while its global position range stretches to cover the longer context. It's a one-line code change that dramatically improves length extrapolation.

YaRN

YaRN (Yet another RoPE extensioN) combines NTK-aware interpolation with an attention temperature correction and a small amount of fine-tuning on extended-length data (typically a few hundred steps). This produces models that handle 4–8x their original context length with minimal quality degradation. Most open-source long-context models (like long-context Llama or Mistral variants) use YaRN or similar techniques. The fine-tuning step is crucial — scaling alone works somewhat, but fine-tuning at the target length significantly improves quality.
