Modern OCR pipelines typically consist of two stages: detection (finding text regions using models like CRAFT or DBNet) and recognition (reading the text in each region using CRNN or Transformer-based models). Toolkits such as PaddleOCR and EasyOCR bundle both stages behind a single call. For structured documents, specialized models (LayoutLM, Donut) understand both text content and spatial layout, recognizing that "Total: $42.50" on an invoice means something different from the same text in a paragraph.
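The detect-then-recognize structure can be sketched as a minimal pipeline skeleton. Everything here is illustrative: `detect_text_regions` and `recognize_region` are hypothetical stubs standing in for a real detector (CRAFT, DBNet) and recognizer (CRNN, Transformer), and only the shape of the interface is the point.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class OCRResult:
    box: Box
    text: str
    confidence: float

def detect_text_regions(image) -> List[Box]:
    """Stage 1: detection. A real system would run CRAFT or DBNet here;
    this stub returns fixed boxes purely to illustrate the interface."""
    return [(10, 10, 200, 30), (10, 50, 120, 30)]

def recognize_region(image, box: Box) -> Tuple[str, float]:
    """Stage 2: recognition. A real system would crop `box` from the image
    and run a CRNN or Transformer recognizer; this stub returns a placeholder."""
    return ("placeholder", 0.99)

def run_ocr(image) -> List[OCRResult]:
    """Chain the two stages: one recognizer call per detected region."""
    return [
        OCRResult(box, *recognize_region(image, box))
        for box in detect_text_regions(image)
    ]
```

In a real deployment the detector returns rotated or polygonal boxes rather than axis-aligned rectangles, and the recognizer batches crops for throughput, but the two-stage contract stays the same.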
Multimodal LLMs (Claude, GPT-4V, Gemini) have become remarkably good at OCR as a side effect of their vision capabilities. You can upload an image and ask "read all text in this image" or "extract the table from this receipt." For complex documents with mixed layouts, handwriting, and multiple languages, vision LLMs often outperform dedicated OCR systems because they understand context and can handle ambiguity. The trade-off is speed and cost: dedicated OCR is orders of magnitude faster and cheaper for bulk processing.
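Using a vision LLM for OCR amounts to sending the image alongside a text prompt. A sketch of building such a request body, modeled on the Anthropic Messages API's base64 image format (the model ID is a placeholder; check the provider's docs for current model names), without actually making the network call:

```python
import base64

def build_ocr_request(image_bytes: bytes, media_type: str = "image/png") -> dict:
    """Build a Messages-API-style request body asking a vision LLM to
    transcribe all text in an image. The image travels as base64 data
    in the same user message as the instruction."""
    return {
        "model": "claude-sonnet-4-20250514",  # placeholder model ID
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": media_type,
                        "data": base64.b64encode(image_bytes).decode("ascii"),
                    },
                },
                {
                    "type": "text",
                    "text": "Read all text in this image. Preserve line breaks.",
                },
            ],
        }],
    }
```

Note that each request carries the full image and runs a large model, which is where the cost and latency gap against a dedicated OCR engine comes from.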
Remaining hard problems: handwriting recognition (especially cursive or messy handwriting), degraded historical documents, scene text in complex backgrounds (signs, clothing, products), and scripts with complex character compositions (Chinese, Arabic, Devanagari). Accuracy varies significantly by language and script — Latin script OCR is nearly solved, but CJK and right-to-left scripts still have meaningful error rates.
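The error rates mentioned above are usually reported as character error rate (CER): the Levenshtein edit distance between the recognized text and the ground truth, normalized by the length of the ground truth. A minimal sketch of that metric:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                 # deletion
                curr[j - 1] + 1,             # insertion
                prev[j - 1] + (ca != cb),    # substitution (0 if chars match)
            ))
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edits needed / reference length."""
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return edit_distance(reference, hypothesis) / len(reference)
```

For example, misreading "Total: $42.50" as "Tota1: $42,5O" (three substituted characters: l→1, .→, and 0→O) gives a CER of 3/13, about 23% — the kind of confusion-pair error that dominates on degraded scans.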