Company

CoreWeave

A specialized cloud provider built entirely around GPU compute for AI workloads. CoreWeave operates large-scale NVIDIA GPU (H100, H200) clusters and has raised billions of dollars in equity and debt financing to build GPU data centers. Major AI companies, including Microsoft and several AI labs, use CoreWeave for large-scale training and inference.

Why It Matters

CoreWeave is one of the fastest-growing infrastructure companies in AI, betting that a specialized GPU cloud provider can beat the general-purpose hyperscalers on AI workloads. That focus enables higher GPU utilization, purpose-built networking (InfiniBand for training clusters), and pricing roughly 30–50% below AWS/GCP for GPU-intensive work.

Deep Dive

CoreWeave's infrastructure is purpose-built for AI: NVIDIA GPU clusters with InfiniBand networking (essential for distributed training), high-bandwidth storage (for loading large datasets and checkpoints), and Kubernetes-based orchestration optimized for GPU workloads. This specialization lets them achieve higher GPU utilization rates than general-purpose clouds, translating to better pricing.
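The Kubernetes-based GPU orchestration described above can be illustrated with a minimal pod manifest. This is a generic sketch using the standard `nvidia.com/gpu` resource name exposed by the NVIDIA device plugin, not CoreWeave's actual configuration; the pod name and container image are hypothetical:

```yaml
# Minimal sketch: a pod requesting a full 8-GPU node for a training job.
# Assumes a cluster with the NVIDIA device plugin installed; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: training-worker        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.01-py3   # illustrative image tag
    command: ["python", "train.py"]           # hypothetical entrypoint
    resources:
      limits:
        nvidia.com/gpu: 8      # schedule onto a node with 8 free GPUs
```

The scheduler places the pod only on nodes advertising enough `nvidia.com/gpu` capacity, which is how a GPU cloud packs workloads to drive up utilization; multi-node training additionally relies on the InfiniBand fabric mentioned above.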

The Bet

CoreWeave has raised over $10B in equity and debt — a massive bet that GPU cloud demand will continue growing. The risk: if AI training demand plateaus or shifts to custom chips (TPUs, Trainium, Groq), their GPU-centric infrastructure becomes less valuable. The opportunity: if GPU demand continues its exponential growth (which most industry observers expect for at least the next several years), CoreWeave is positioned to capture a significant share of a very large market.

Related Concepts
