Tools

LangChain

A popular open-source framework for building applications on top of language models. LangChain provides abstractions for common patterns: connecting LLMs to data sources (RAG), building multi-step chains of LLM calls, managing conversation memory, using tools, and orchestrating agents. It supports multiple providers (Anthropic, OpenAI, local models) through a unified interface.

Why It Matters

LangChain is the most widely used framework for LLM applications, which means you will encounter it in tutorials, job descriptions, and existing codebases. It is also controversial: critics argue it adds unnecessary abstraction on top of what are otherwise simple API calls. Understanding what LangChain does (and when to use it versus direct API calls) helps you make informed architecture decisions.

Deep Dive

LangChain's core abstractions are: Models (a unified interface to LLM providers), Prompts (templates with variables), Chains (sequences of LLM calls and processing steps), Agents (LLMs that decide which tools to use), Memory (conversation state management), and Retrievers (connections to vector databases and other data sources). These compose: a RAG chain connects a retriever to a model via a prompt template.
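The composition idea can be sketched in plain Python. This is a minimal illustration of the prompt → model → parser pipeline pattern (the style of LangChain's `|` chaining), not actual LangChain code; `FakeModel` and all class names here are hypothetical stand-ins.

```python
# Sketch of chain composition: each step is a "runnable" that can be
# piped into the next with `|`, mirroring the LangChain-style pattern.

class Runnable:
    def __or__(self, other):          # enables `prompt | model | parser`
        return Pipeline(self, other)

    def invoke(self, value):
        raise NotImplementedError

class Pipeline(Runnable):
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):          # run steps left to right
        return self.second.invoke(self.first.invoke(value))

class PromptTemplate(Runnable):
    def __init__(self, template):
        self.template = template

    def invoke(self, variables):      # fill template variables -> prompt string
        return self.template.format(**variables)

class FakeModel(Runnable):
    def invoke(self, prompt):         # a real model would call an LLM API here
        return f"ANSWER({prompt})"

class StrParser(Runnable):
    def invoke(self, output):         # normalize the raw model output
        return output.strip()

chain = PromptTemplate("Summarize: {text}") | FakeModel() | StrParser()
print(chain.invoke({"text": "LangChain composes steps."}))
# -> ANSWER(Summarize: LangChain composes steps.)
```

A RAG chain follows the same shape: a retriever step would sit before the prompt template and inject retrieved documents as one of the template variables.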

The Controversy

LangChain is divisive in the developer community. Proponents value the unified abstractions, the breadth of integrations, and the speed of prototyping. Critics argue that the abstractions are leaky (you need to understand the underlying APIs anyway), the code is hard to debug (too many layers between you and the API call), and that simple applications are better served by direct API calls. The consensus seems to be: LangChain is good for prototyping and complex multi-step workflows, but simple applications often don't need it.

LangGraph and LangSmith

The LangChain ecosystem expanded beyond the core library. LangGraph handles complex agent workflows as state machines (better for multi-step agents than linear chains). LangSmith provides observability — tracing, evaluation, and monitoring for LLM applications. The ecosystem addresses real needs, but the complexity of the full stack is a valid concern for teams that need to maintain and debug these systems in production.
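The state-machine framing can be illustrated without the library. Below is a plain-Python sketch of the idea: nodes transform a shared state dict and name the next node, so the agent can loop (plan → act → plan …) rather than run a fixed linear chain. The node names and routing logic are hypothetical, not LangGraph's actual API.

```python
# Sketch of an agent workflow as a state machine: nodes update shared
# state and choose the next node, allowing loops a linear chain cannot.

def plan(state):
    state["steps"].append("plan")
    # keep acting while budget remains, otherwise finish
    state["next"] = "act" if state["budget"] > 0 else "finish"
    return state

def act(state):
    state["steps"].append("act")
    state["budget"] -= 1
    state["next"] = "plan"            # loop back to re-plan after each action
    return state

def finish(state):
    state["steps"].append("finish")
    state["next"] = None              # terminal node
    return state

NODES = {"plan": plan, "act": act, "finish": finish}

def run_graph(state, entry="plan"):
    node = entry
    while node is not None:           # follow transitions until terminal
        state = NODES[node](state)
        node = state["next"]
    return state

result = run_graph({"steps": [], "budget": 2})
print(result["steps"])
# -> ['plan', 'act', 'plan', 'act', 'plan', 'finish']
```

The observability concern maps onto this directly: each node transition is a natural tracing point, which is the kind of per-step visibility LangSmith-style tooling provides.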
