Panthalassa, an Oregon-based startup, raised $140 million led by Peter Thiel to build floating AI data centers powered by ocean waves. The Ocean-3 prototype is a self-contained orb with no anchor and no power cable to shore: it converts wave motion into electricity via an internal turbine, runs onsite AI compute, and transmits results over LEO satellites. Pilot deployments are slated for August 2026, with commercial systems planned for 2027. The economic claim, if it scales: $0.02 per kWh, half the low end of typical onshore data-center power costs ($0.04-0.10/kWh) and as little as a fifth of the high end. This is the kind of infrastructure bet that's easy to dismiss until you remember that grid capacity and cooling water, not GPUs themselves, are now the actual constraints on AI compute scale-out.
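To put the rate claim in run-scale terms, here is a back-of-envelope sketch using the article's figures; the gigawatt-month run size is an illustrative assumption, not a Panthalassa spec.

```python
# Energy cost of a 1 GW, 30-day training run at the claimed offshore
# rate vs. the typical onshore range quoted in the article.
GW_MONTH_KWH = 1e6 * 24 * 30  # 1 GW for 30 days = 720 million kWh

def run_cost(rate_usd_per_kwh: float, kwh: float = GW_MONTH_KWH) -> float:
    """Electricity cost in USD of a run consuming `kwh` at a given rate."""
    return rate_usd_per_kwh * kwh

offshore = run_cost(0.02)                      # claimed offshore rate
onshore_lo, onshore_hi = run_cost(0.04), run_cost(0.10)
print(f"offshore: ${offshore / 1e6:.1f}M")
print(f"onshore:  ${onshore_lo / 1e6:.1f}M to ${onshore_hi / 1e6:.1f}M")
```

At these rates a gigawatt-month run's power bill is roughly $14M offshore versus $29M-$72M onshore, which is the margin the orb capex has to beat.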

The mechanism is straightforward in concept: as the orb rides the swell, water in an internal tube is forced upward into a chamber, then through a turbine, spinning generators. The hard engineering questions aren't in the wave-power physics (that's well-trodden); they're in the system integration. Per-orb power output is sub-MW class based on the prototype dimensions, so commercial scale means deploying many orbs, which means station-keeping (untethered, GPS-corrected position), inter-orb power and data networking, satellite-uplink bandwidth aggregation, and ocean-water cooling loops that don't foul or corrode over operational lifetimes. LEO satellite uplink (Starlink-class) caps bandwidth at ~100 Mbps per terminal: workable for batch training jobs and async inference, terrible for real-time interactive workloads where latency matters. Cooling via ocean water is the architectural win: free continuous heat exchange at 4-15°C surface temperature depending on latitude, much better than air-cooled onshore facilities that need active chillers in summer.
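The ~100 Mbps per-terminal figure can be sanity-checked against the payload sizes a floating cluster would actually move; the sizes below are illustrative assumptions, not Panthalassa specs.

```python
# Transfer time over a ~100 Mbps LEO uplink for representative payloads.
def transfer_hours(size_gb: float, mbps: float = 100.0) -> float:
    """Hours to move `size_gb` gigabytes over a `mbps` megabit/s link."""
    bits = size_gb * 1e9 * 8
    return bits / (mbps * 1e6) / 3600

payloads = [
    ("batch inference results, 1 GB", 1),
    ("large fp16 checkpoint, 200 GB", 200),   # assumed size
    ("dataset shard, 1 TB", 1000),            # assumed size
]
for label, gb in payloads:
    print(f"{label}: {transfer_hours(gb):.2f} h")
```

Small result payloads move in minutes, but a terabyte takes close to a day per terminal, which is why the economics lean on aggregating many uplinks and on workloads where data mostly stays onboard.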

The ecosystem read: 2025-2026 is the period when grid constraints became visible to AI builders. Hyperscalers are signing 20-year nuclear PPAs, neoclouds are racing for any megawatt they can find, and gas-turbine lead times have stretched to 4-5 years. Ocean-based compute is one of the more credible "what if we routed around the grid entirely" answers. It won't replace the onshore footprint anytime soon: latency-sensitive inference, regulatory-bound data residency, and most enterprise workloads stay onshore. But for batch training, frontier-model pretraining runs that consume gigawatt-months, and async inference with no latency budget, offshore floating capacity that doesn't compete for grid hookups or cooling water has a real economic argument. The $0.02/kWh number assumes scale; the prototype phase will show whether the orb economics hold at unit-1 cost or only at unit-10,000. Critical path: a jurisdictional and regulatory framework for compute-bearing structures in international waters and US territorial seas, plus the question of who owns, and who can reach, the data when an orb breaks loose.

Practical move: this isn't a Q3 procurement decision for any builder, but it's worth tracking as a forward signal on where compute supply expands. If you run training infrastructure, watch which workloads start migrating offshore; the early adopters will be hyperscalers running pretraining batches that don't care about latency, where the savings on grid, cooling, and land amortize against the capex of the orbs. If you run inference, the LEO-uplink latency floor (~50-100 ms for the LEO hop plus propagation) keeps you onshore for now. The longer-term watch is whether neoclouds or hyperscalers pick up Panthalassa's platform or build their own. Microsoft has run a floating data-center experiment before (Project Natick, sealed undersea pods), but those pods were sealed and anchored. Panthalassa's untethered design is the new architectural bet, and whether it survives an Oregon winter at the Ocean-3 pilot site is the actual question for the next twelve months.