
Retrieval

Information Retrieval (IR)
The process of finding relevant documents, passages, or data from a large collection in response to a query. In AI, retrieval is the "R" in RAG — the step where relevant context is fetched before being given to a language model. Retrieval can use keyword matching (BM25), semantic similarity (embeddings), or hybrid approaches combining both.
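Semantic similarity retrieval can be sketched in a few lines. This is a minimal illustration, not a production index: the three-dimensional "embeddings" are hand-written toy vectors standing in for the output of a real embedding model, and the document names are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dim "embeddings" (a real model produces hundreds of dimensions).
docs = {
    "cat_care.txt": [0.9, 0.1, 0.0],
    "stock_tips.txt": [0.0, 0.2, 0.9],
}
query_embedding = [0.8, 0.2, 0.1]

# Retrieval = nearest neighbor of the query in embedding space.
best = max(docs, key=lambda d: cosine(query_embedding, docs[d]))
```

At scale this brute-force loop is replaced by an approximate nearest-neighbor index, but the scoring logic is the same.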

Why it matters

Retrieval is what makes LLMs practical for real-world applications. A model's internal knowledge is static, incomplete, and sometimes wrong. Retrieval gives it access to current, accurate, domain-specific information at inference time. The quality of your retrieval pipeline directly determines the quality of your RAG system — even the best LLM can't produce good answers from bad context.

Deep Dive

Traditional retrieval (BM25, TF-IDF) matches query keywords against document keywords, weighted by term frequency and importance. It's fast, interpretable, and excellent for exact matches. Semantic retrieval encodes queries and documents as embeddings and finds nearest neighbors in vector space. It handles paraphrases and conceptual similarity but can miss exact keyword matches (rare terms, identifiers, product codes). Hybrid retrieval combines both, typically using reciprocal rank fusion to merge the two ranked lists.
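Reciprocal rank fusion is simple enough to show in full: each document's fused score is the sum of 1 / (k + rank) over every ranked list it appears in, with k = 60 as a common default. The document IDs and rankings below are illustrative.

```python
def rrf(rankings, k=60):
    """Merge ranked lists of doc IDs via reciprocal rank fusion."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["d3", "d1", "d7"]    # keyword ranking
vector_hits = ["d1", "d5", "d3"]  # embedding ranking
fused = rrf([bm25_hits, vector_hits])  # d1 ranks first: high in both lists
```

Because RRF uses only ranks, not raw scores, it sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales.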

Chunking Strategy

For RAG, documents must be split into chunks before embedding. Chunk size is a critical design decision: too small and you lose context; too large and you dilute relevant information with noise. Common strategies include fixed-size chunks with overlap, sentence-level splitting, paragraph-level splitting, and recursive splitting that respects document structure (headers, sections). The optimal approach depends on your documents and queries.
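The simplest of these strategies, fixed-size chunks with overlap, can be sketched as follows. The sizes are illustrative defaults, not recommendations, and this version measures in characters; real pipelines often measure in tokens.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character chunks, with each chunk
    sharing its first `overlap` characters with the previous chunk's
    tail so that no sentence is cut off without nearby context."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = " ".join(f"sentence {i}" for i in range(100))
chunks = chunk_text(doc)
```

The overlap trades some index size and redundancy for robustness: information that straddles a chunk boundary still appears intact in at least one chunk.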

Reranking

A common pattern: retrieve a broad set of candidates (say 50) using fast retrieval, then rerank them using a more accurate (but slower) model. Cross-encoder rerankers (like Cohere Rerank or BGE-Reranker) process query-document pairs together, producing more accurate relevance scores than embedding similarity alone. This two-stage pipeline balances speed (fast initial retrieval) with accuracy (precise reranking of the top candidates).
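The two-stage pipeline looks like this in outline. Everything here is a toy stand-in: keyword overlap substitutes for BM25 or embedding search, and a phrase-matching function substitutes for a real cross-encoder; only the shape of the pipeline (broad fast retrieval, then narrow accurate reranking) is the point.

```python
def first_stage(query, corpus, top_k=50):
    """Fast candidate retrieval via keyword overlap (stand-in for BM25)."""
    q_terms = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def toy_cross_encoder(query, doc):
    """Stand-in for a cross-encoder: sees query and document together,
    here rewarding exact phrase containment over loose term overlap."""
    if query.lower() in doc.lower():
        return 2.0
    overlap = set(query.lower().split()) & set(doc.lower().split())
    return len(overlap) / 10

def rerank(query, candidates, score_fn, top_n=3):
    """Slower, more accurate scoring applied only to the candidate pool."""
    return sorted(candidates,
                  key=lambda d: score_fn(query, d),
                  reverse=True)[:top_n]

corpus = [
    "chunk size is a critical design decision",
    "reranking improves retrieval accuracy",
    "accuracy of retrieval depends on chunking",
]
candidates = first_stage("retrieval accuracy", corpus, top_k=2)
results = rerank("retrieval accuracy", candidates, toy_cross_encoder)
```

In a real system the expensive model runs only on the 50-or-so candidates rather than the whole corpus, which is what makes the accuracy of pairwise scoring affordable.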
