
Machine Translation

MT, Neural Machine Translation, NMT
Automatically translating text from one language to another. Modern neural machine translation (NMT) uses encoder-decoder Transformers trained on parallel corpora (texts paired with their translations). Google Translate, DeepL, and LLM-based translation all use variants of this approach. Quality has improved dramatically: for common language pairs, MT approaches professional human translation on routine content.

Why it matters

Machine translation breaks down language barriers at scale. It enables global commerce, multilingual search, real-time communication, and access to information across languages. For AI specifically, MT is how models trained primarily on English can serve users in 100+ languages, and it is why multilingual tokenizer efficiency matters for cost.

Deep Dive

Modern NMT uses the encoder-decoder Transformer architecture: the encoder processes the source sentence, and the decoder generates the target sentence token by token, attending to the encoded source through cross-attention. Training requires parallel corpora — millions of sentence pairs in both languages. Data quality and domain match are critical: a model trained on EU Parliament proceedings translates legal text well but informal chat poorly.
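
As a concrete illustration, here is a minimal inference sketch, assuming the Hugging Face transformers library and the pretrained Helsinki-NLP/opus-mt-en-es MarianMT checkpoint (an English-to-Spanish model trained on exactly this kind of parallel corpus):

    # Minimal NMT inference sketch; assumes `pip install transformers torch`
    # and the pretrained Helsinki-NLP/opus-mt-en-es checkpoint.
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-es"  # English -> Spanish
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    source = ["Machine translation breaks language barriers at scale."]
    batch = tokenizer(source, return_tensors="pt", padding=True)

    # The encoder processes the source tokens; generate() runs the decoder
    # token by token, attending to the encoder output via cross-attention.
    output_ids = model.generate(**batch)
    print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))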

LLMs as Translators

Large language models have become competitive translators, sometimes exceeding dedicated MT systems for high-resource language pairs. Their advantage: they understand context, idioms, and cultural nuances better because they've seen language used in diverse contexts. Their disadvantage: they're much slower and more expensive per sentence than dedicated MT models. For real-time translation of millions of sentences, dedicated models (like those behind Google Translate) are necessary. For quality-critical translation of smaller volumes, LLMs often produce more natural results.
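
For quality-critical, lower-volume use, the same task can be handed to an LLM with a prompt. A hedged sketch using the OpenAI Python client follows; the model name, prompt wording, and helper function are illustrative assumptions, not a fixed recipe:

    # Sketch of prompt-based LLM translation; assumes `pip install openai`
    # and an OPENAI_API_KEY in the environment. The model name is an assumption.
    from openai import OpenAI

    client = OpenAI()

    def translate(text: str, target_language: str) -> str:
        # The system prompt is where context and style instructions go;
        # this flexibility is the LLM's edge over dedicated MT systems.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # substitute any capable model
            messages=[
                {"role": "system",
                 "content": f"Translate the user's text into {target_language}. "
                            "Preserve tone and idioms; output only the translation."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content

    print(translate("It's raining cats and dogs.", "Spanish"))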

The Long Tail of Languages

MT quality varies enormously across language pairs. English-French, English-Spanish, and English-Chinese are well-served (abundant training data). But for the world's 7,000+ languages, most pairs have little or no parallel training data. Low-resource translation remains an active research area, with approaches including: zero-shot translation through multilingual models, back-translation (using the MT system itself to generate synthetic training data), and transfer learning from related languages.
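
Back-translation in particular is easy to sketch: run monolingual target-language text through a reverse-direction model to produce synthetic source sentences, then pair each synthetic source with its authentic target. A minimal version, assuming transformers and the Helsinki-NLP/opus-mt-es-en checkpoint:

    # Back-translation sketch: generate synthetic (source, target) pairs
    # from monolingual Spanish text using a Spanish -> English model.
    from transformers import MarianMTModel, MarianTokenizer

    bt_name = "Helsinki-NLP/opus-mt-es-en"  # reverse direction
    bt_tokenizer = MarianTokenizer.from_pretrained(bt_name)
    bt_model = MarianMTModel.from_pretrained(bt_name)

    monolingual_es = [
        "La biblioteca abre a las nueve.",
        "Mañana habrá tormenta en la costa.",
    ]
    batch = bt_tokenizer(monolingual_es, return_tensors="pt", padding=True)
    synthetic_en = bt_tokenizer.batch_decode(
        bt_model.generate(**batch), skip_special_tokens=True
    )

    # Each pair: (synthetic English source, authentic Spanish target).
    # These augment the scarce real parallel data for the en -> es model.
    synthetic_pairs = list(zip(synthetic_en, monolingual_es))
    print(synthetic_pairs)

The synthetic source side is noisier than human translation, which is why back-translation works best when the authentic, human-written text stays on the target side.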
