Company

CoreWeave

A specialized cloud provider built entirely around GPU compute for AI workloads. CoreWeave operates large-scale clusters of NVIDIA GPUs (H100, H200) and has raised billions in equity and debt financing to build GPU data centers. Major AI companies, including Microsoft and several AI labs, use CoreWeave for large-scale training and inference.

Why It Matters

CoreWeave is one of the fastest-growing infrastructure companies in AI, betting that specialized GPU cloud providers can outperform general-purpose hyperscalers on AI workloads. This focus enables higher GPU utilization, purpose-built networking (InfiniBand for training clusters), and pricing 30–50% below AWS/GCP on GPU-heavy workloads.
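To make the pricing claim concrete, here is a minimal back-of-the-envelope sketch in Python. The hourly rates are hypothetical placeholders chosen to illustrate a ~40% discount (within the 30–50% range above); they are not actual published prices from any provider.

```python
# Hypothetical hourly per-GPU rates for illustration only -- not real pricing.
GENERAL_CLOUD_H100_HOURLY = 10.00   # assumed general-purpose cloud rate
SPECIALIZED_H100_HOURLY = 6.00      # assumed specialized-provider rate (~40% lower)

def training_run_cost(gpus: int, hours: float, hourly_rate: float) -> float:
    """Total cost of a training run: GPUs x hours x per-GPU hourly rate."""
    return gpus * hours * hourly_rate

# Example: a 256-GPU cluster training for two weeks (336 hours).
general = training_run_cost(256, 336, GENERAL_CLOUD_H100_HOURLY)
specialized = training_run_cost(256, 336, SPECIALIZED_H100_HOURLY)
savings = 1 - specialized / general

print(f"general-purpose cloud: ${general:,.0f}")
print(f"specialized provider:  ${specialized:,.0f}")
print(f"savings: {savings:.0%}")
```

At multi-week, multi-hundred-GPU scale, even a modest per-hour discount compounds into six- or seven-figure differences per training run, which is why utilization and pricing are the core of the specialized-provider pitch.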

Deep Dive

CoreWeave's infrastructure is purpose-built for AI: NVIDIA GPU clusters with InfiniBand networking (essential for distributed training), high-bandwidth storage (for loading large datasets and checkpoints), and Kubernetes-based orchestration optimized for GPU workloads. This specialization lets them achieve higher GPU utilization rates than general-purpose clouds, translating to better pricing.
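As a sketch of what "Kubernetes-based orchestration for GPU workloads" means in practice, here is a minimal Pod manifest expressed as a Python dict, the shape you would submit to the Kubernetes API. The image name and GPU count are illustrative assumptions, not CoreWeave specifics; the `nvidia.com/gpu` resource name is how the NVIDIA device plugin exposes GPUs to the Kubernetes scheduler.

```python
import json

# A minimal Kubernetes Pod spec requesting GPUs, as a Python dict.
# Container image and GPU count are hypothetical examples.
gpu_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "train-worker"},
    "spec": {
        "containers": [{
            "name": "trainer",
            "image": "example.com/trainer:latest",  # hypothetical image
            "resources": {
                # GPUs are scheduled as an extended resource named
                # "nvidia.com/gpu", registered by the NVIDIA device plugin.
                "limits": {"nvidia.com/gpu": 8},
            },
        }],
        "restartPolicy": "Never",
    },
}

print(json.dumps(gpu_pod, indent=2))
```

The scheduler will only place this Pod on a node advertising at least eight free GPUs; a GPU-specialized provider's value-add is keeping those nodes densely packed (high utilization) and wired together with InfiniBand for multi-node training jobs.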

The Bet

CoreWeave has raised over $10B in equity and debt — a massive bet that GPU cloud demand will continue growing. The risk: if AI training demand plateaus or shifts to custom chips (TPUs, Trainium, Groq), their GPU-centric infrastructure becomes less valuable. The opportunity: if GPU demand continues its exponential growth (which most industry observers expect for at least the next several years), CoreWeave is positioned to capture a significant share of a very large market.

Related Concepts
