Tools

LangChain

A popular open-source framework for building applications on top of language models. LangChain provides abstractions for common patterns: connecting LLMs to data sources (RAG), building multi-step chains of LLM calls, managing conversation memory, using tools, and orchestrating agents. It supports multiple providers (Anthropic, OpenAI, local models) through a unified interface.
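To make the RAG pattern mentioned above concrete, here is a minimal sketch in plain Python of what such a pipeline wires together: retrieve relevant context, fill a prompt template, and hand the result to a model. All names and the toy retriever are illustrative stand-ins, not LangChain APIs.

```python
# Conceptual RAG sketch: retrieve context, then build a prompt from it.
# Everything here is a hypothetical stand-in for what a framework wires up.

DOCS = [
    "LangChain provides unified interfaces to LLM providers.",
    "KV caching speeds up transformer inference.",
]

def retrieve(query: str) -> str:
    """Toy retriever: pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str) -> str:
    context = retrieve(query)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does LangChain provide?")
# `prompt` would then be sent to an LLM through the provider's API.
```

In a real application, the retriever would query a vector database and the final string would be sent to a chat model; the framework's value is standardizing these seams.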

Why It Matters

LangChain is the most widely used framework for LLM applications, which means you will encounter it in tutorials, job descriptions, and existing codebases. It is also controversial: critics argue that it layers unnecessary abstraction over what are simple API calls. Understanding what LangChain does (and when to use it versus calling the APIs directly) helps you make informed architecture decisions.

Deep Dive

LangChain's core abstractions: Models (unified interface to LLM providers), Prompts (templates with variables), Chains (sequences of LLM calls and processing steps), Agents (LLMs that decide which tools to use), Memory (conversation state management), and Retrievers (connections to vector databases and other data sources). These compose: a RAG chain connects a retriever to a model via a prompt template.
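The composition described above can be illustrated with a toy re-implementation of the idea behind LangChain's pipeline syntax, where stages are chained with `|` and each stage transforms the previous stage's output. This is not LangChain code; the `Runnable` class, the fake model, and the parser are all illustrative.

```python
# Toy sketch of pipeline composition: prompt | model | parser.
# Each stage wraps a function; `|` builds a new stage that runs them in order.

class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):  # enables `a | b` chaining
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Stages: a prompt template, a stand-in "model", and an output parser.
prompt = Runnable(lambda vars: f"Translate to French: {vars['text']}")
model = Runnable(lambda p: f"MODEL_OUTPUT({p})")  # fake LLM call
parser = Runnable(lambda out: out.removeprefix("MODEL_OUTPUT(").removesuffix(")"))

chain = prompt | model | parser
result = chain.invoke({"text": "hello"})
# result == "Translate to French: hello"
```

The real framework adds streaming, batching, retries, and tracing on top of this composition pattern, which is a large part of what the abstraction buys you.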

The Controversy

LangChain is divisive in the developer community. Proponents value the unified abstractions, the breadth of integrations, and the speed of prototyping. Critics argue that the abstractions are leaky (you need to understand the underlying APIs anyway), the code is hard to debug (too many layers between you and the API call), and that simple applications are better served by direct API calls. The consensus seems to be: LangChain is good for prototyping and complex multi-step workflows, but simple applications often don't need it.

LangGraph and LangSmith

The LangChain ecosystem expanded beyond the core library. LangGraph handles complex agent workflows as state machines (better for multi-step agents than linear chains). LangSmith provides observability — tracing, evaluation, and monitoring for LLM applications. The ecosystem addresses real needs, but the complexity of the full stack is a valid concern for teams that need to maintain and debug these systems in production.
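The state-machine framing that LangGraph applies to agents can be sketched in a few lines of plain Python: nodes are functions over shared state, and each node returns the name of the next node. The node names and state fields here are hypothetical, not the LangGraph API.

```python
# Sketch of an agent workflow as a state machine: nodes mutate shared
# state and choose the next node, looping until a terminal state.

def plan(state):
    state["steps"].append("plan")
    return "act" if state["remaining"] > 0 else "finish"

def act(state):
    state["steps"].append("act")
    state["remaining"] -= 1
    return "plan"  # loop back and re-plan after each action

NODES = {"plan": plan, "act": act}

def run(state):
    node = "plan"
    while node != "finish":
        node = NODES[node](state)
    return state

final = run({"remaining": 2, "steps": []})
# final["steps"] == ["plan", "act", "plan", "act", "plan"]
```

Loops like plan/act/re-plan are awkward to express as a linear chain, which is why multi-step agents fit the graph model better.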

Related Concepts

KV Cache · Large Language Model