
Question Answering

QA, Reading Comprehension
A system that answers questions posed in natural language. Extractive QA finds the answer span inside a given document ("According to paragraph 3, the answer is..."). Generative QA synthesizes an answer from one or more sources. Open-domain QA answers arbitrary questions without a specific document. RAG-based QA retrieves relevant documents and generates answers from them.

Why It Matters

Question answering is the fundamental interaction pattern for AI assistants. Every chatbot, every enterprise knowledge base, every customer-support bot is essentially a QA system. Understanding the different QA paradigms (extractive, generative, retrieval-augmented) helps you choose the right architecture for your application and set realistic expectations about accuracy.

Deep Dive

Extractive QA (the SQuAD paradigm): given a document and a question, identify the exact span of text that answers the question. Fine-tuned BERT models excel at this — they read the document, understand the question, and highlight the answer. This is fast, accurate, and verifiable (the answer is always a direct quote). But it can only answer questions whose answers appear verbatim in the document.
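To make the span-selection idea concrete without a trained model, here is a deliberately crude lexical baseline: return the document sentence with the highest word overlap with the question. This is only a sketch of the extractive setup; a real system like fine-tuned BERT predicts exact start/end token positions rather than matching words, and the stopword list below is an arbitrary choice for illustration.

```python
import re

STOPWORDS = {"what", "who", "when", "where", "why", "how", "is", "was", "the", "a", "of"}

def extract_best_sentence(document: str, question: str) -> str:
    """Toy extractive QA: pick the sentence that shares the most
    content words with the question. A fine-tuned BERT model would
    instead score every (start, end) span inside the document."""
    q_words = set(re.findall(r"\w+", question.lower())) - STOPWORDS
    sentences = re.split(r"(?<=[.!?])\s+", document)
    return max(sentences, key=lambda s: len(q_words & set(re.findall(r"\w+", s.lower()))))
```

Because the output is a verbatim quote from the document, it is trivially verifiable, which is the core appeal of the extractive paradigm.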

RAG-Based QA

The dominant modern pattern: (1) user asks a question, (2) retrieve relevant documents from a knowledge base using semantic search, (3) include the retrieved documents in the LLM's context, (4) the LLM generates an answer based on the retrieved context. This combines the precision of retrieval with the fluency of generation. The key challenges are retrieval quality (finding the right documents) and faithfulness (generating answers that accurately reflect the source material).
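The four steps above can be sketched end to end. This is a minimal illustration, not a production pipeline: the bag-of-words `embed` stands in for a real embedding model, and `llm` is a hypothetical callable representing whatever generation API you use.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Bag-of-words stand-in for a real semantic embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rag_answer(question: str, knowledge_base: list[str], llm, k: int = 2) -> str:
    # (2) retrieve the k documents most similar to the question
    q_vec = embed(question)
    top_k = sorted(knowledge_base, key=lambda d: cosine(q_vec, embed(d)), reverse=True)[:k]
    # (3) include retrieved documents in the context, (4) generate
    prompt = ("Answer using ONLY the context below.\n\nContext:\n"
              + "\n".join(top_k)
              + f"\n\nQuestion: {question}")
    return llm(prompt)
```

The "ONLY the context below" instruction is one common (if imperfect) lever for the faithfulness challenge; retrieval quality lives entirely in how good `embed` and the similarity ranking are.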

Evaluation

QA accuracy is measured differently for each paradigm. Extractive QA uses exact match (EM) and F1 score against ground-truth answer spans. Generative QA is harder to evaluate automatically — multiple valid phrasings exist for any answer. RAGAS and similar frameworks evaluate RAG-based QA on faithfulness (does the answer match the source?), relevance (did you retrieve the right documents?), and answer quality. Human evaluation remains the gold standard for generative QA.
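The two extractive metrics are simple enough to sketch directly. The sketch below follows the spirit of SQuAD-style scoring (lowercase, strip articles and punctuation, then compare), though the exact normalization order here is an illustrative choice rather than the official script.

```python
import re
from collections import Counter

def normalize(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop articles and punctuation."""
    s = re.sub(r"\b(a|an|the)\b", " ", s.lower())
    s = re.sub(r"[^\w\s]", "", s)
    return " ".join(s.split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def token_f1(pred: str, gold: str) -> float:
    p, g = normalize(pred).split(), normalize(gold).split()
    common = sum((Counter(p) & Counter(g)).values())  # overlapping tokens
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)
```

F1 gives partial credit when the predicted span overlaps the gold answer ("built in 1889" vs. "1889"), which is why it is reported alongside the stricter exact match.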

Related Concepts
