
Mamba

Mamba Architecture
A selective state space model (SSM) architecture designed as an alternative to the Transformer. Created by Albert Gu and Tri Dao, Mamba achieves competitive language modeling performance with linear scaling in sequence length, versus the quadratic cost of Transformer attention. It processes sequences by maintaining a compressed hidden state that is selectively updated: important information is retained, irrelevant information decays.

Why It Matters

Mamba represents the most credible challenge to Transformer dominance. If it (or its descendants) delivers on the promise of Transformer-quality results with linear-time sequence processing, the implications are significant: longer context windows, faster inference, lower costs. The "selective" part is the key: unlike earlier SSMs, Mamba makes its state transitions input-dependent, which is what gives it the expressiveness to match attention.

Deep Dive

Classical state space models maintain a fixed-size hidden state that gets updated at each timestep via learned matrices A (state transition), B (input projection), and C (output projection). Mamba's innovation is making B, C, and the discretization step size Δ input-dependent: the model learns to selectively focus on or ignore different parts of the input based on content, not just position. This selectivity is what earlier SSMs lacked and what prevented them from matching Transformer performance on language tasks.
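
To make the recurrence concrete, here is a minimal, deliberately unoptimized sketch of the selective scan for a single channel in plain NumPy. It follows the paper's zero-order-hold discretization (A_bar = exp(Δ·A)) and the simplified B_bar ≈ Δ·B; treating B, C, and delta as precomputed per-timestep inputs (in a real model they come from learned linear projections of the token) and all shapes are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def selective_scan(u, A, B, C, delta):
    """Recurrent (unoptimized) form of the selective scan, one channel.

    u:     (T,)   scalar input channel
    A:     (N,)   fixed diagonal state-transition parameters (negative)
    B:     (T, N) input-dependent input projection, one row per timestep
    C:     (T, N) input-dependent output projection, one row per timestep
    delta: (T,)   input-dependent step size (positive)
    """
    T, N = B.shape
    h = np.zeros(N)                    # the compressed hidden state
    y = np.empty(T)
    for t in range(T):
        A_bar = np.exp(delta[t] * A)   # zero-order-hold discretization of A
        B_bar = delta[t] * B[t]        # simplified discretization of B
        h = A_bar * h + B_bar * u[t]   # small delta: mostly keep the state;
                                       # large delta: overwrite it with the input
        y[t] = C[t] @ h                # input-dependent readout
    return y
```

Because delta, B, and C change per token, the model can gate what enters and leaves the state; hold them fixed and this collapses back to a classical, non-selective SSM.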

The Hardware Story

Mamba's other contribution is a hardware-aware implementation. The selective scan operation is rewritten to minimize memory transfers between GPU HBM and SRAM, using kernel fusion and recomputation to avoid materializing the full state expansion in memory. This engineering makes the theoretical linear complexity translate to actual wall-clock speedups, not just asymptotic improvements that get eaten by constant factors.
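
In practice that fused kernel ships in the authors' mamba-ssm package, and a Mamba block drops in as an ordinary sequence-to-sequence module. A usage sketch following the repository's README, assuming a CUDA GPU (the fused scan has no CPU path):

```python
import torch
from mamba_ssm import Mamba  # pip install mamba-ssm

batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim, device="cuda")

block = Mamba(
    d_model=dim,  # channel dimension of the residual stream
    d_state=16,   # SSM state size N per channel
    d_conv=4,     # width of the local convolution before the scan
    expand=2,     # block expansion factor
).to("cuda")

y = block(x)  # runs the fused selective scan in a single kernel
assert y.shape == x.shape
```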

Mamba-2 and Hybrids

Mamba-2 simplified the architecture by showing that the selective state space model can be viewed as a structured form of attention, unifying the SSM and Transformer perspectives mathematically. This led to hybrid architectures (like Jamba from AI21, Zamba from Zyphra) that interleave Mamba layers with attention layers, getting the efficiency of SSMs for most of the sequence processing while using attention for the tasks where global token interaction is essential. The debate isn't "SSM vs. Transformer" anymore — it's about finding the optimal mix.
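
Below is a toy PyTorch sketch of that interleaving pattern. The every-fourth-layer attention ratio, the pre-norm residual wiring, and the omission of causal masking are illustrative simplifications, not Jamba's or Zamba's actual layouts (Jamba, for instance, also mixes in mixture-of-experts layers).

```python
import torch.nn as nn
from mamba_ssm import Mamba  # requires a CUDA GPU

class HybridStack(nn.Module):
    """Toy interleaving of Mamba and attention layers (pattern only)."""

    def __init__(self, d_model=512, n_layers=8, attn_every=4, n_heads=8):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            if (i + 1) % attn_every == 0:
                # occasional attention layer for global token interaction
                self.layers.append(
                    nn.MultiheadAttention(d_model, n_heads, batch_first=True))
            else:
                # linear-time Mamba layers do most of the sequence processing
                self.layers.append(Mamba(d_model=d_model))
        self.norms = nn.ModuleList(
            [nn.LayerNorm(d_model) for _ in range(n_layers)])

    def forward(self, x):
        for layer, norm in zip(self.layers, self.norms):
            h = norm(x)
            if isinstance(layer, nn.MultiheadAttention):
                h, _ = layer(h, h, h, need_weights=False)
            else:
                h = layer(h)
            x = x + h  # pre-norm residual connection
        return x
```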
