Basics

Knowledge Graph

KG, Ontology
A structured representation of knowledge as a network of entities (nodes) connected by relationships (edges): "Paris (entity) is the capital of (relationship) France (entity)." A knowledge graph encodes facts in a form that supports reasoning, querying, and discovery. Google's Knowledge Graph, Wikidata, and enterprise knowledge graphs power search, recommendation, and data integration.

Why It Matters

Knowledge graphs complement LLMs by providing structured, verifiable facts that a model can query rather than hallucinate. Where an LLM stores knowledge implicitly in its weights (and is sometimes wrong), a knowledge graph stores it explicitly as verifiable, updatable triples. The combination of LLMs (which understand natural language) and KGs (which ground facts) is a powerful pattern for enterprise AI.

Deep Dive

A knowledge graph stores knowledge as (subject, predicate, object) triples: (Albert Einstein, born_in, Ulm), (Ulm, located_in, Germany). These triples form a graph where entities are nodes and relationships are edges. You can traverse the graph to answer multi-hop questions: "Where was the birthplace of the person who developed general relativity?" follows Einstein → born_in → Ulm → located_in → Germany.
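The triple store and multi-hop traversal above can be sketched in a few lines. This is a minimal in-memory illustration, not a real triple store; the entity and predicate names are the examples from this section:

```python
# Minimal in-memory triple store: a list of (subject, predicate, object) tuples.
triples = [
    ("Albert Einstein", "developed", "general relativity"),
    ("Albert Einstein", "born_in", "Ulm"),
    ("Ulm", "located_in", "Germany"),
]

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def subjects(predicate, obj):
    """All subjects s such that (s, predicate, obj) is in the graph."""
    return [s for s, p, o in triples if p == predicate and o == obj]

# Multi-hop query: "Where was the birthplace of the person who developed
# general relativity?" — answered by chaining three edge lookups.
person = subjects("developed", "general relativity")[0]   # Albert Einstein
birthplace = objects(person, "born_in")[0]                # Ulm
country = objects(birthplace, "located_in")[0]            # Germany
print(country)  # Germany
```

Production systems store the same structure in dedicated databases (RDF triple stores queried with SPARQL, or property graphs queried with Cypher), but the traversal logic is the same: each hop is an edge lookup.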

KGs + LLMs

The integration of knowledge graphs with LLMs takes several forms: using KGs as a source for RAG (retrieve relevant subgraphs for a query), using LLMs to populate KGs (extract entities and relationships from text), and using KGs to verify LLM outputs (check stated facts against the graph). GraphRAG (Microsoft) uses LLMs to build a knowledge graph from documents, then queries that graph for more structured retrieval than pure vector search.
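The KG-as-RAG-source pattern can be sketched as a retrieval step: collect the triples near the entities mentioned in a query, then serialize them as text for the LLM's prompt. This is a simplified stand-in for a GraphRAG-style pipeline, not Microsoft's actual implementation; the function names and data are illustrative:

```python
def retrieve_subgraph(triples, query_entities, hops=1):
    """Collect all triples within `hops` hops of the query entities."""
    frontier = set(query_entities)
    subgraph = set()
    for _ in range(hops):
        new_frontier = set()
        for s, p, o in triples:
            if s in frontier or o in frontier:
                subgraph.add((s, p, o))
                new_frontier.update({s, o})
        frontier |= new_frontier
    return sorted(subgraph)

def to_context(subgraph):
    """Serialize triples as sentences to ground an LLM prompt."""
    return "\n".join(f"{s} {p.replace('_', ' ')} {o}." for s, p, o in subgraph)

triples = [
    ("Albert Einstein", "born_in", "Ulm"),
    ("Ulm", "located_in", "Germany"),
    ("Marie Curie", "born_in", "Warsaw"),
]
context = to_context(retrieve_subgraph(triples, ["Ulm"]))
# The context string contains only Ulm-adjacent facts, which would be
# prepended to the user's question in the LLM prompt.
```

Compared to pure vector search, this retrieval is structured: it returns explicit facts along graph edges rather than text chunks that merely sound similar to the query.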

Building Knowledge Graphs

Constructing a knowledge graph requires: entity extraction (identifying people, places, concepts in text), relation extraction (identifying how entities relate), entity resolution (recognizing that "NYC," "New York City," and "The Big Apple" are the same entity), and schema design (defining what types of entities and relationships exist). LLMs have made each of these steps cheaper and more accurate, democratizing KG construction for organizations that previously couldn't afford the manual effort.
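Of these steps, entity resolution is the easiest to illustrate. A toy version maps surface forms to canonical entities through an alias table; real pipelines typically combine such tables with embedding similarity and clustering. The alias data below is illustrative:

```python
# Toy entity resolution: normalize raw mentions to canonical entity names.
ALIASES = {
    "nyc": "New York City",
    "new york city": "New York City",
    "the big apple": "New York City",
}

def resolve(mention: str) -> str:
    """Map a mention to its canonical entity; fall back to the cleaned
    mention itself when no alias is known."""
    key = mention.strip().lower()
    return ALIASES.get(key, mention.strip())

# Extracted triples with inconsistent surface forms...
raw = [
    ("NYC", "located_in", "USA"),
    ("The Big Apple", "has_population", "8.3 million"),
]
# ...collapse onto a single node after resolution, so the graph stores
# one "New York City" entity instead of three duplicates.
resolved = [(resolve(s), p, resolve(o)) for s, p, o in raw]
```

Without this step, the same real-world entity fragments into several disconnected nodes and multi-hop queries silently miss facts.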
