Orbital Chenguang, a spinout of the Beijing Astro-future Institute of Space Technology, secured 57.7 billion yuan (about $8.45B) in strategic credit lines from twelve Chinese banks this month, including Bank of China, Agricultural Bank of China, CITIC Bank, and China Merchants Bank, alongside an A1 equity round backed by Haisong Capital, CITIC Securities Investment, Cathay Capital and others. The plan is a 16-spacecraft constellation in sun-synchronous dawn-dusk orbit at 700-800 km altitude, laser-linked, with the China Aerospace Science and Technology Corporation (CASC) targeting 1 GW of aggregate space-based compute capacity during the 15th Five-Year Plan window of 2026-2030. The first test satellite launches sometime in 2026; the full constellation operates by 2030; large-scale follow-on capacity is planned out to 2035. The credit-line scale and the state alignment, both via CASC and the Five-Year Plan, set this apart from the usual space-compute press release: it carries roughly the financial commitment of a top-five terrestrial AI training cluster, with explicit policy backing on top.
The technical premise of dawn-dusk sun-synchronous orbits is what actually distinguishes orbital data centers from the terrestrial alternative, and it is worth understanding precisely. A satellite in dawn-dusk SSO crosses the equator near the day/night terminator, which means it is in continuous sunlight for nearly all of its orbit (Earth's shadow rarely intercepts it), while the satellite's radiator face points to deep space at roughly 3 K. The result is near-continuous solar input on the order of 1.36 kW per square meter at the panel face and passive radiative cooling limited only by deployable radiator area, which relaxes the two largest constraints on terrestrial data centers: power supply and heat rejection. The trade-offs are also concrete. Launch cost amortised over satellite lifetime is the primary cost driver and remains higher than terrestrial compute on a $/FLOP basis at current Chinese launch prices. Latency from 700 km LEO is about 4-5 milliseconds one-way to ground at typical pass geometries, which is fine for batch training and asynchronous workloads but rules out latency-sensitive inference and real-time agent loops. Hardware is unrepairable: a chip failure on orbit permanently degrades the satellite. Radiation hardening costs both money and FLOPS per watt, because rad-hard processors lag commercial process nodes by 1-2 generations.
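The latency figure falls straight out of link geometry. A minimal sketch: the 700 km altitude is from the announcement, while the elevation angles (and the law-of-cosines slant-range model) are illustrative assumptions.

```python
import math

C = 299_792.458          # speed of light, km/s
R_EARTH = 6371.0         # mean Earth radius, km
ALT = 700.0              # orbital altitude from the announcement, km

def slant_range_km(alt_km: float, elevation_deg: float) -> float:
    """Ground-station-to-satellite distance at a given elevation angle.

    Law-of-cosines solution on the Earth-center / station / satellite triangle.
    """
    e = math.radians(elevation_deg)
    r = R_EARTH + alt_km
    return math.sqrt(r**2 - (R_EARTH * math.cos(e))**2) - R_EARTH * math.sin(e)

def one_way_latency_ms(slant_km: float) -> float:
    """Pure propagation delay; ignores switching and processing time."""
    return slant_km / C * 1000.0

# Directly overhead vs. a low 20-degree pass:
print(one_way_latency_ms(slant_range_km(ALT, 90)))   # ~2.3 ms
print(one_way_latency_ms(slant_range_km(ALT, 20)))   # ~5.3 ms
```

The 4-5 ms figure in the text corresponds to mid-to-low elevation passes; directly overhead, the floor is about 2.3 ms one-way.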
The broader implication is that the orbital-data-center thesis has now crossed from speculative-startup territory into state-financed infrastructure for the first time. Starcloud, Lonestar Data Holdings, and Orbital Compute (a16z-funded) have raised seed-to-Series-A rounds in the US in the last 18 months, and Google, Amazon, and xAI have all signalled interest, but none of them has GW-scale state backing. Chenguang's $8.45B credit envelope is roughly an order of magnitude larger than all Western space-compute funding raised to date combined. Strategically, this is consistent with China's broader posture on AI infrastructure: the same Five-Year Plan period explicitly targets self-sufficiency in domestic chips, sovereign AI clouds, and dual-use space infrastructure, and orbital compute is a natural fit for all three because launch and orbital operations are insulated from US export-control regimes that govern terrestrial chip supply. Whether the technical premise pays off at GW scale is genuinely unresolved, but the policy commitment makes it a load-bearing variable in any model of where global AI compute capacity sits in 2030.
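The GW-scale question can be bounded with the solar constant and the Stefan-Boltzmann law. A back-of-envelope sketch; the panel efficiency, radiator temperature, and emissivity below are illustrative assumptions, not figures from the announcement.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR_CONST = 1361.0     # solar flux at 1 AU, W/m^2
TARGET_W = 1e9           # 1 GW aggregate electrical capacity (announced target)

# Illustrative assumptions, not from the announcement:
PANEL_EFF = 0.30         # assumed solar cell efficiency
RAD_TEMP_K = 300.0       # assumed radiator operating temperature
EMISSIVITY = 0.90        # assumed radiator surface emissivity

# Array area needed to generate the target electrical power.
array_m2 = TARGET_W / (SOLAR_CONST * PANEL_EFF)

# Radiator area to reject it all; the ~3 K sink term is negligible.
radiator_m2 = TARGET_W / (EMISSIVITY * SIGMA * RAD_TEMP_K**4)

print(f"solar array: {array_m2 / 1e6:.2f} million m^2")   # ~2.45
print(f"radiator:    {radiator_m2 / 1e6:.2f} million m^2") # ~2.42
```

Under these assumptions, 1 GW implies millions of square meters of deployed array and radiator, i.e. on the order of 150,000 m^2 per spacecraft for a 16-satellite constellation, far beyond anything flown to date, which is one concrete reason the GW target remains an open question.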
For builders watching the compute landscape, the actionable read is to update three priors. First, the cost curve of space compute is now moving along a state-backed timeline rather than a venture-funded one for the leading Chinese effort, which means the relevant benchmark for whether orbital becomes economically viable is not "do startups make their next round" but "does the 15th Five-Year Plan ship its 1 GW target." That is a stronger commitment than a venture milestone: missing it carries political consequences that the venture market does not impose. Second, the latency profile of orbital compute is fixed by physics; if your AI workload tolerates 5-10 ms of additional round-trip time and is heavy on training rather than inference, orbital becomes a real option in the second half of this decade, while latency-sensitive workloads stay terrestrial indefinitely. Third, the geopolitical layer matters: a GW of Chinese-controlled orbital AI compute is not subject to US chip export controls, since the silicon is launched once and not subject to ongoing customs review, and it is positioned to serve customers in regions where Western data residency rules do not reach. None of this is a 2026 product story for any builder shipping today; all of it changes the 2028-2030 compute supply picture in ways worth tracking.
