
GNN

Graph Neural Network
A neural network designed to operate on graph-structured data, where entities are connected by relationships (social networks, molecules, knowledge graphs, transportation networks). GNNs learn by passing messages between connected nodes, letting each node update its own representation based on its neighbors. They handle data that does not fit neatly into a grid (images) or a sequence (text).

Why It Matters

Not all data is text or images. Social networks, molecular structures, recommendation systems, fraud-detection networks, and logistics routes are naturally graph-structured. When the relationships between entities matter as much as the entities themselves, GNNs are the right tool. Drug discovery, social network analysis, and traffic forecasting all rely on them.

Deep Dive

The core operation in a GNN is message passing: each node collects information from its neighbors, aggregates it (sum, mean, or attention-weighted), and updates its own representation. After K rounds of message passing, each node's representation encodes information about its K-hop neighborhood. Graph Convolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT) are the most common architectures, differing in how they aggregate neighbor information.
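One round of message passing can be sketched in a few lines of NumPy. This is a minimal GCN-style layer on a toy 4-node path graph; the graph, feature sizes, and random weights are illustrative assumptions, not from the source.

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges 0-1, 1-2, 2-3 (adjacency matrix)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A_hat = A + np.eye(4)                      # add self-loops so a node keeps its own signal
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization used by GCN

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                # initial node features (4 nodes, 8 dims)
W = rng.normal(size=(8, 8))                # layer weights (random stand-in for learned ones)

def gcn_layer(H, A_norm, W):
    # Message passing: each node aggregates normalized neighbor features,
    # then applies a shared linear transform and a ReLU nonlinearity.
    return np.maximum(A_norm @ H @ W, 0.0)

H1 = gcn_layer(X, A_norm, W)   # after 1 layer: each node encodes its 1-hop neighborhood
H2 = gcn_layer(H1, A_norm, W)  # after 2 layers: 2-hop neighborhood, and so on for K rounds
```

Swapping `A_norm @ H` for a mean over neighbors gives GraphSAGE-style aggregation; learning per-edge weights for the aggregation gives GAT.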

Applications

Drug discovery: molecules are graphs (atoms = nodes, bonds = edges). GNNs predict molecular properties, binding affinity, and toxicity by learning from the molecular graph structure. Social networks: GNNs detect communities, predict links, and identify influential nodes. Recommendation systems: users and items form a bipartite graph, and GNNs predict which items a user would like based on graph structure. Fraud detection: transaction networks reveal suspicious patterns that GNNs can learn to identify.

Transformers as Graph Networks

There's a deep connection between Transformers and GNNs: self-attention can be viewed as message passing on a fully connected graph (every token attends to every other token). GNNs operate on sparse graphs (each node only connects to its actual neighbors). This connection has inspired Graph Transformers that combine the expressiveness of Transformers with the efficiency of sparse graph structures, and has led to cross-pollination of ideas between the two communities.
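The equivalence above can be made concrete: the same attention-weighted message-passing function recovers standard self-attention when the adjacency matrix is all-ones, and a GAT-like sparse update when it is a real graph. A minimal NumPy sketch, with illustrative sizes and random weights:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_message_passing(X, Wq, Wk, Wv, adj):
    # Scaled dot-product scores between all node pairs,
    # masked so each node only attends to its graph neighbors.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    scores = np.where(adj > 0, scores, -np.inf)  # -inf => zero attention weight
    return softmax(scores, axis=-1) @ V

n, d = 5, 4
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

# Fully connected graph: every token attends to every other => Transformer self-attention
full = np.ones((n, n))
out_transformer = attention_message_passing(X, Wq, Wk, Wv, full)

# Path graph with self-loops: GAT-like sparse message passing over actual neighbors only
chain = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
out_gnn = attention_message_passing(X, Wq, Wk, Wv, chain)
```

The only difference between the two calls is the adjacency mask, which is exactly the sense in which self-attention is message passing on a complete graph.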
