
BERT

Bidirectional Encoder Representations from Transformers
A Transformer-based model from Google (2018) that revolutionized NLP by introducing bidirectional pre-training: every token can attend to every other token, giving the model deep contextual understanding. BERT is an encoder-only model: it excels at understanding text (classification, search, NER), but it cannot generate text the way GPT or Claude can.

Why It Matters

BERT is the most influential NLP paper of the modern era. It proved that pre-training on unlabeled text and then fine-tuning on specific tasks could crush every existing benchmark. Even though LLMs have stolen the spotlight, BERT-style models still power most production search engines, embedding systems, and classification pipelines, because they are smaller, faster, and cheaper than LLMs for non-generative tasks.

Deep Dive

BERT's training uses two objectives: Masked Language Modeling (MLM) — randomly mask 15% of tokens and predict them from context — and Next Sentence Prediction (NSP) — predict whether two sentences are consecutive. MLM forces bidirectional understanding because the model must use both left and right context to predict masked words. This is fundamentally different from GPT's left-to-right approach.
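The MLM objective can be sketched in a few lines. This is a simplified illustration, not BERT's actual implementation: real BERT selects 15% of positions, then replaces 80% of those with `[MASK]`, 10% with a random token, and leaves 10% unchanged; here every selected position is simply masked.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Mask ~15% of tokens; return the masked sequence and the targets
    the model must recover using context from BOTH sides of each gap."""
    rng = random.Random(seed)
    n = max(1, round(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n)
    masked = list(tokens)
    targets = {}
    for i in positions:
        targets[i] = masked[i]   # ground-truth token to predict
        masked[i] = mask_token
    return masked, targets

tokens = "the cat sat on the mat".split()
masked, targets = mask_tokens(tokens)
```

Because the model sees tokens on both sides of each `[MASK]`, minimizing this loss forces bidirectional representations, whereas a left-to-right model could only use the prefix.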

Why BERT Still Matters

In the LLM era, BERT-family models (RoBERTa, DeBERTa, DistilBERT) remain the backbone of production NLP. They're 100x smaller than LLMs (110M–340M parameters vs. billions), 10x faster for inference, and often better for tasks that don't require generation. Most embedding models used in RAG and semantic search are BERT descendants. Google Search used BERT extensively before transitioning to larger models.
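The embedding use case works roughly like this: a BERT-style encoder produces one vector per token, those vectors are mean-pooled into a single sentence embedding, and retrieval ranks documents by cosine similarity. A minimal sketch with toy 3-dimensional vectors (real embedding models use hundreds of dimensions, and the vectors come from the encoder, not hand-written lists):

```python
from math import sqrt

def mean_pool(token_vectors):
    """Average per-token vectors into one fixed-size sentence embedding --
    the standard way BERT-style encoders are turned into embedding models."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(v[d] for v in token_vectors) / n for d in range(dim)]

def cosine(a, b):
    """Similarity score in [-1, 1] used to rank documents against a query."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy per-token embeddings standing in for encoder outputs.
doc_embedding = mean_pool([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
query_embedding = mean_pool([[1.0, 1.0, 1.0]])
score = cosine(query_embedding, doc_embedding)
```

In a real RAG pipeline the same encode-pool-compare loop runs over a vector index of thousands of documents; the encoder is the only expensive part, which is why small BERT descendants dominate here.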

BERT vs. GPT: The Architecture Split

BERT (encoder-only, bidirectional) and GPT (decoder-only, left-to-right) represent two philosophies. BERT sees the whole input at once — perfect for understanding. GPT sees only what came before — perfect for generating. The field initially thought encoder-decoder (T5) would win by combining both. Instead, decoder-only (GPT approach) won for LLMs because it scales more cleanly, and you can approximate bidirectional understanding through clever prompting.
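The architectural split above comes down to the attention mask. A tiny sketch contrasting the two: an encoder lets every position attend everywhere (a matrix of ones), while a decoder's causal mask is lower-triangular, so position i only sees positions ≤ i.

```python
def encoder_mask(n):
    """Bidirectional (BERT-style): every token attends to every token."""
    return [[1] * n for _ in range(n)]

def decoder_mask(n):
    """Causal (GPT-style): token i attends only to positions j <= i."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

enc = encoder_mask(4)
dec = decoder_mask(4)
```

Everything else in the two architectures is nearly identical; this one mask decides whether the model is an "understander" (full context at every position) or a "generator" (each position predicts the next token without peeking ahead).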
