China’s open-source bet
Silicon Valley AI companies follow a familiar playbook: Keep the secret sauce behind an API, and charge for every drop. China’s leading AI labs are playing a different game: They ship models as downloadable “open-weight” packages. This lets developers adapt the models and run them on their own hardware to build products without negotiating a commercial relationship with a US gatekeeper.
This strategy went mainstream after DeepSeek open-sourced its R1 reasoning model in January 2025. That model matched the performance of the best American systems, reportedly at a fraction of the cost. On raw capability, the gap between US and Chinese labs seemed to have suddenly narrowed. But China also won something subtler and stickier: goodwill with developers. Giving away what your rivals charge for has a way of doing that.
China rode that momentum hard. A year after DeepSeek’s release, there’s now a cohort of Chinese open-source giants following the same blueprint, including Z.ai (formerly Zhipu), Moonshot, Alibaba’s Qwen, and MiniMax. They’re all racing to release more capable models, and they are closing in on US rivals at a pace few anticipated.
That matters because AI hype is dying down, and companies are shifting focus from buzzy pilots to deployment and integration, where cheaper and more customizable tools tend to win. Chinese pricing means developers with limited budgets can experiment more, and open weights mean they can adapt models without asking for permission.
A study by researchers at MIT and Hugging Face found that Chinese open-weight models accounted for 17.1% of global AI model downloads over the year ending in August 2025. That narrowly surpassed the US share of 15.86%—the first time China had led in this metric. And Hugging Face data from last month shows that Alibaba’s models, including its Qwen family, now have the most user-generated variants—more than models from Google and Meta combined.
The open-source ideal, though, runs headlong into some hard realities. Chinese models carry the imprint of China’s content moderation regime and are trained to avoid outputs that conflict with government policy. And in February, Anthropic accused several Chinese labs of illicitly extracting capabilities from Claude through distillation, a process in which one model’s outputs are used as training data for another. Distillation itself is a standard industry practice, but top US firms like OpenAI and Anthropic claim that Chinese companies have used fraudulent methods to do it.
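To make the distillation idea concrete, here is a minimal, illustrative sketch of the core training signal: the "teacher" model's output probabilities are softened with a temperature and the "student" is trained to match them. The function names, logits, and temperature value are hypothetical choices for illustration, not any lab's actual pipeline.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this trains the student to mimic the teacher's full
    output distribution, not just its single top answer.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss;
# one that disagrees is penalized.
teacher = [4.0, 1.0, 0.5]
assert distillation_loss(teacher, [4.0, 1.0, 0.5]) < 1e-9
assert distillation_loss(teacher, [0.5, 1.0, 4.0]) > 0.1
```

In practice the student is a neural network and this loss is backpropagated over millions of teacher-generated examples; the controversy is not over the math, which is public, but over whose model served as the teacher and under what terms of service.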
Despite pushback from the West, much of the Global South is embracing Chinese models, seeing open-source as a path to AI sovereignty. Singapore’s government-backed AI Singapore program chose Alibaba’s Qwen over Meta’s Llama as the base for its latest regional model; last year, Malaysia announced that its sovereign AI ecosystem would run on DeepSeek. Meanwhile, founders from Nairobi to São Paulo to San Francisco are building on Chinese foundations.
US tech CEOs believe the best models should stay proprietary, partly so they can recoup enormous training costs and partly out of concern that powerful frontier models could be weaponized. Chinese labs, for their part, are not purely idealistic: Open-source is not only free advertising but also a shrewd workaround. Without access to cutting-edge chips restricted by US export controls, releasing models openly accelerates the cycle of external feedback and contributions that compensates for constrained compute. The more developers build on your models, the stronger your ecosystem becomes, as Linux and Android have shown. That adoption naturally translates into API usage and revenue.
Either way, open-source models have already made AI’s future more multipolar than Silicon Valley expected. And there’s no way of turning back.