
GNN

Graph Neural Network
A neural network designed to operate on graph-structured data: data in which entities are connected by relationships, such as social networks, molecules, knowledge graphs, and traffic networks. GNNs learn by passing messages between connected nodes, letting each node update its representation based on its neighbors. They handle data that doesn't fit neatly into a grid (images) or a sequence (text).

Why It Matters

Not all data is text or images. Social networks, molecular structures, recommendation systems, fraud-detection networks, and logistics routes are all naturally graph-structured. When the relationships between entities matter as much as the entities themselves, a GNN is the right tool. Drug discovery, social network analysis, and traffic forecasting all rely on GNNs.

Deep Dive

The core operation in a GNN is message passing: each node collects information from its neighbors, aggregates it (sum, mean, or attention-weighted), and updates its own representation. After K rounds of message passing, each node's representation encodes information about its K-hop neighborhood. Graph Convolutional Networks (GCN), GraphSAGE, and Graph Attention Networks (GAT) are the most common architectures, differing in how they aggregate neighbor information.
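One round of GCN-style message passing can be sketched in plain NumPy. This is a toy illustration, not any particular library's API: the graph, feature sizes, and weights are all made up, and the normalization shown (symmetric normalization of the adjacency with self-loops) is the one used by the GCN architecture mentioned above.

```python
import numpy as np

# Toy undirected graph: 4 nodes, edges 0-1, 1-2, 2-3 (a path graph)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# GCN normalization: D^{-1/2} (A + I) D^{-1/2}, where self-loops let a
# node keep its own information during aggregation
A_hat = A + np.eye(4)
deg = A_hat.sum(axis=1)
A_norm = np.diag(deg ** -0.5) @ A_hat @ np.diag(deg ** -0.5)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))    # node features (4 nodes, 8 features each)
W = rng.normal(size=(8, 16))   # weight matrix (learned in practice, random here)

# One message-passing round: aggregate neighbors, transform, apply ReLU.
# Stacking K of these layers gives each node a view of its K-hop neighborhood.
H = np.maximum(A_norm @ X @ W, 0.0)
```

After this single round, row `H[0]` already mixes in information from node 1, node 0's only neighbor; a second round would also reach node 2.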

Applications

Drug discovery: molecules are graphs (atoms = nodes, bonds = edges). GNNs predict molecular properties, binding affinity, and toxicity by learning from the molecular graph structure. Social networks: GNNs detect communities, predict links, and identify influential nodes. Recommendation systems: users and items form a bipartite graph, and GNNs predict which items a user would like based on graph structure. Fraud detection: transaction networks reveal suspicious patterns that GNNs can learn to identify.
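The "molecules are graphs" encoding above can be made concrete with a tiny sketch. The molecule, the atom vocabulary, and the mean-pooling readout are all illustrative assumptions; real pipelines use richer features (bond types, charges) and learned pooling.

```python
import numpy as np

# Hypothetical example: the heavy atoms of ethanol (C-C-O) as a graph.
atoms = ["C", "C", "O"]      # atoms = nodes
bonds = [(0, 1), (1, 2)]     # bonds = undirected edges

n = len(atoms)
A = np.zeros((n, n))
for i, j in bonds:
    A[i, j] = A[j, i] = 1.0  # symmetric adjacency for undirected bonds

# One-hot node features over a tiny atom vocabulary (assumption for the sketch)
vocab = {"C": 0, "O": 1}
X = np.zeros((n, len(vocab)))
for idx, atom in enumerate(atoms):
    X[idx, vocab[atom]] = 1.0

# One aggregation step, then mean pooling into a single graph-level vector;
# a property predictor (e.g. for toxicity) would take this vector as input.
graph_embedding = (A @ X).mean(axis=0)
```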

Transformers as Graph Networks

There's a deep connection between Transformers and GNNs: self-attention can be viewed as message passing on a fully connected graph (every token attends to every other token). GNNs operate on sparse graphs (each node only connects to its actual neighbors). This connection has inspired Graph Transformers that combine the expressiveness of Transformers with the efficiency of sparse graph structures, and has led to cross-pollination of ideas between the two communities.
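The connection can be shown directly: scaled dot-product self-attention is attention-weighted message passing where the "adjacency" is dense, and masking non-edges before the softmax recovers a sparse, GAT-style aggregation. The sizes, weights, and the path-graph mask below are made-up toy values.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
n, d = 5, 8                             # 5 tokens = 5 nodes of a complete graph
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)
attn = softmax(scores)                  # dense: every token attends to every token
out_dense = attn @ V                    # attention-weighted message passing

# A sparse-graph GNN differs only in masking non-edges before the softmax.
# Here: a path graph with self-loops, so each node sees itself and <=2 neighbors.
A = np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
masked = np.where(A > 0, scores, -np.inf)
out_sparse = softmax(masked) @ V        # aggregation restricted to actual neighbors
```

In both cases each row of the attention matrix is a probability distribution over the nodes a token may receive messages from; the sparse version simply zeroes out non-neighbors.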

Related Concepts
