
Machine Translation

MT, Neural Machine Translation, NMT
Automatic translation of text from one language to another. Modern neural machine translation (NMT) uses encoder-decoder Transformers trained on parallel corpora (texts paired with their translations). Google Translate, DeepL, and LLM-based translation all use variants of this approach. Quality has improved dramatically: for common language pairs, MT approaches professional human translation on routine content.

Why It Matters

Machine translation breaks down language barriers at scale. It enables global commerce, cross-language search, real-time communication, and access to information across languages. For AI specifically, MT is how models trained mostly on English can serve users in 100+ languages, which is also why multilingual tokenizer efficiency matters for cost.

Deep Dive

Modern NMT uses the encoder-decoder Transformer architecture: the encoder processes the source sentence, and the decoder generates the target sentence token by token, attending to the encoded source through cross-attention. Training requires parallel corpora — millions of sentence pairs in both languages. Data quality and domain match are critical: a model trained on EU Parliament proceedings translates legal text well but informal chat poorly.
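The cross-attention step described above can be sketched numerically: each decoder position builds queries against the encoder's output, producing a source-aware context vector per target token. This is a minimal single-head sketch in NumPy with made-up dimensions, not a full Transformer layer (real models add learned projection matrices, multiple heads, and layer norm).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(decoder_states, encoder_states):
    """Each decoder position attends over all encoded source positions.
    Shapes: decoder_states (T_tgt, d), encoder_states (T_src, d)."""
    d = decoder_states.shape[-1]
    # Queries come from the decoder; keys/values come from the encoded source.
    scores = decoder_states @ encoder_states.T / np.sqrt(d)  # (T_tgt, T_src)
    weights = softmax(scores, axis=-1)                       # rows sum to 1
    return weights @ encoder_states                          # (T_tgt, d)

rng = np.random.default_rng(0)
src = rng.normal(size=(5, 8))   # 5 encoded source-sentence tokens
tgt = rng.normal(size=(3, 8))   # 3 decoder positions generated so far
ctx = cross_attention(tgt, src)
print(ctx.shape)  # one source-aware context vector per target token
```

During generation the decoder repeats this at every step, which is why each output token can draw on the entire source sentence rather than a fixed summary.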

LLMs as Translators

Large language models have become competitive translators, sometimes exceeding dedicated MT systems for high-resource language pairs. Their advantage: they understand context, idioms, and cultural nuances better because they've seen language used in diverse contexts. Their disadvantage: they're much slower and more expensive per sentence than dedicated MT models. For real-time translation of millions of sentences, dedicated models (like those behind Google Translate) are necessary. For quality-critical translation of smaller volumes, LLMs often produce more natural results.
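One way LLMs exploit their contextual advantage is that the prompt can carry domain and audience information that a dedicated MT model never sees. The helper below is a hypothetical prompt builder, illustrative only; the exact wording and the model call that would consume it are assumptions, not a fixed API.

```python
def build_translation_prompt(text, source_lang, target_lang, context=None):
    """Assemble a translation prompt for a chat LLM.
    The instruction wording here is illustrative, not a standard."""
    lines = [
        f"Translate the following {source_lang} text into {target_lang}.",
        "Preserve tone, idioms, and formatting; return only the translation.",
    ]
    if context:
        # Extra context (domain, audience) is where LLMs beat dedicated MT.
        lines.append(f"Context: {context}")
    lines.append(f"Text: {text}")
    return "\n".join(lines)

prompt = build_translation_prompt(
    "Break a leg tonight!", "English", "French",
    context="Informal message between friends before a theater performance",
)
print(prompt)
```

With the theater context supplied, an LLM can render the idiom as a French equivalent ("Merde !") rather than translating it word for word, which is exactly the failure mode of context-free sentence-level MT.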

The Long Tail of Languages

MT quality varies enormously across language pairs. English-French, English-Spanish, and English-Chinese are well-served (abundant training data). But for the world's 7,000+ languages, most pairs have little or no parallel training data. Low-resource translation remains an active research area, with approaches including: zero-shot translation through multilingual models, back-translation (using the MT system itself to generate synthetic training data), and transfer learning from related languages.
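Back-translation, mentioned above, is simple enough to sketch: run target-language monolingual text through a reverse (target-to-source) model, then train the forward model on the synthetic source paired with the real target. The toy "translator" below is a stand-in stub; a real pipeline would plug in an actual reverse MT model.

```python
def back_translate(monolingual_target, target_to_source_mt):
    """Create synthetic (source, target) training pairs from monolingual
    target-language text. The real target sentences are the gold outputs;
    only the source side is machine-generated."""
    pairs = []
    for tgt_sentence in monolingual_target:
        synthetic_src = target_to_source_mt(tgt_sentence)
        pairs.append((synthetic_src, tgt_sentence))
    return pairs

# Hypothetical stand-in for a reverse (target->source) MT model.
toy_reverse_mt = lambda s: f"<src for: {s}>"
data = back_translate(["Bonjour le monde", "Merci beaucoup"], toy_reverse_mt)
print(data[0])  # synthetic source paired with the real target sentence
```

The asymmetry is the point: noisy synthetic inputs are tolerable, but the outputs the model learns to produce are genuine human-written sentences.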

Related Concepts
