Alibaba Cloud — also known as Aliyun — launched in 2009 as the cloud computing division of Jack Ma's Alibaba Group, originally built to handle the extreme traffic spikes of Singles' Day, the world's largest online shopping event. What started as internal infrastructure gradually became China's dominant public cloud provider, holding roughly a third of the domestic market. But the real story for the AI world begins in 2023, when Alibaba Cloud released the first Qwen (Tongyi Qianwen) models and committed to an aggressive open-weights strategy that would reshape the global landscape for accessible foundation models.
The Qwen series has evolved at a pace that caught Western labs off guard. Qwen 1.0 debuted in mid-2023 as a respectable but unremarkable large language model. Qwen 1.5, released in early 2024, narrowed the gap with frontier models significantly. Then Qwen 2 and Qwen 2.5 arrived in rapid succession, with Qwen2.5-72B matching or beating Llama 3.1 70B on most benchmarks while being genuinely multilingual — not just English-with-some-Chinese, but strong across dozens of languages including Arabic, Japanese, Korean, and Southeast Asian languages that most Western models handle poorly. The Qwen team, which grew out of Alibaba's DAMO Academy, has also expanded into multimodal territory with Qwen-VL for vision-language tasks and Qwen-Audio, plus specialized coding models (Qwen2.5-Coder) and math models (Qwen2.5-Math). By 2025, Qwen had become the de facto default open-weights model family for production use across much of Asia.
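What "open weights" means in practice: the checkpoints are downloadable and runnable on your own hardware. Below is a minimal sketch using Hugging Face `transformers`; the model id, prompt, and generation settings are illustrative, and running the guarded section requires a GPU-equipped machine plus `pip install transformers torch` (the weights download on first use).

```python
# Sketch: local inference with an open-weights Qwen instruct checkpoint.
# MODEL_ID is illustrative -- any released Qwen instruct size works the same way.
MODEL_ID = "Qwen/Qwen2.5-7B-Instruct"


def build_messages(user_prompt: str) -> list:
    """Chat-format input expected by Qwen instruct models."""
    return [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": user_prompt},
    ]


if __name__ == "__main__":
    # Heavy imports kept inside the guard so the helper above stays importable.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages("用三句话介绍一下杭州。"),  # multilingual by design
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=256)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The same chat-template flow applies to the coding and math variants; only the model id changes.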
Alibaba Cloud's decision to release most Qwen models under the permissive Apache 2.0 license (a few sizes, including the 72B flagship, ship under a custom Qwen license) was not altruism — it was a calculated move to build cloud market share. The playbook mirrors Meta's Llama strategy: give away the model, sell the compute. Every developer who fine-tunes Qwen on Alibaba Cloud's ModelScope platform, every startup that deploys Qwen through Alibaba's inference APIs, every enterprise that builds on Qwen and needs managed hosting — they all become potential cloud customers. The strategy is working particularly well in markets where US export controls on advanced chips make running frontier closed models from American providers either impractical or politically undesirable. Alibaba Cloud has positioned Qwen as the sovereign AI choice for countries looking to build domestic AI capabilities without dependency on OpenAI or Google.
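The "sell the compute" half of the playbook is Alibaba's hosted inference API, which speaks the de facto standard chat-completions protocol. A stdlib-only sketch follows; the base URL and `qwen-plus` model name are assumptions drawn from Alibaba's Model Studio (DashScope) documentation and may differ by region or account, and `DASHSCOPE_API_KEY` is a hypothetical environment variable you would set yourself.

```python
# Sketch: calling a hosted Qwen model via an OpenAI-compatible endpoint.
# Endpoint URL and model name are assumptions; the request body follows the
# generic /chat/completions format.
import json
import os
import urllib.request

BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1"  # assumed endpoint


def chat_payload(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build a standard chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def chat(prompt: str, model: str = "qwen-plus") -> str:
    """POST the request and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat("Summarize Qwen's licensing in one sentence."))
```

Because the protocol matches OpenAI's, existing client libraries can usually be pointed at the compatible-mode base URL unchanged — which is exactly what makes switching providers cheap and the open-weights lock-in play work.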
The US export controls on advanced semiconductors to China, tightened repeatedly since October 2022, are the defining constraint for every Chinese AI lab — and Alibaba Cloud is no exception. Unable to purchase NVIDIA H100s or their successors, Alibaba has invested heavily in its own silicon, notably the Hanguang 800 inference accelerator designed by its T-Head chip unit, and has reportedly stockpiled A100 chips before the bans took effect. The Qwen team has also become notably efficient with compute, achieving strong results with what appears to be significantly less training compute than comparable Western models. Whether this reflects genuine algorithmic efficiency or just less transparent reporting is debated, but the results speak for themselves: Qwen models consistently punch above their weight class.
Alibaba Cloud's position is unique among Chinese AI labs because it combines massive cloud infrastructure with frontier model development. Baidu has Ernie but a weaker cloud business. Tencent has cloud scale but less impressive models. Alibaba has both, plus the ModelScope platform (China's answer to Hugging Face), which has become the central hub for open-source AI in the Chinese ecosystem. The cloud division's planned spin-off, announced in March 2023 and abandoned that November amid uncertainty over US chip restrictions, reflected internal tensions about how aggressively to invest in AI versus optimizing for profitability. By early 2025, Alibaba Group had committed to investing over $50 billion in cloud and AI infrastructure over the next three years — a signal that the AI-first strategy won out. For developers and businesses outside the US tech ecosystem, Alibaba Cloud and Qwen have become the most credible open alternative to the OpenAI-Microsoft axis.