How Alibaba built its most efficient AI model to date
The new mechanism has sparked excitement among AI experts, who are increasingly concerned about the rising costs of scaling up models

A technical innovation has allowed Alibaba Group Holding, one of the leading players in China’s artificial intelligence boom, to develop a new generation of foundation models that match the strong performance of larger predecessors while being significantly smaller and more cost-efficient.
Despite its compact size, Qwen3-Next-80B-A3B is among Alibaba’s best models to date, according to its developers. The key lies in its efficiency: the model is said to run 10 times faster on some tasks than its predecessor, Qwen3-32B, released in April, while achieving a 90 per cent reduction in training costs.
Emad Mostaque, co-founder of the UK-based start-up Stability AI, said on X that Alibaba’s new model outperformed “pretty much any model from last year” despite an estimated training cost of less than US$500,000.
For comparison, training Google’s Gemini Ultra, released in February 2024, cost an estimated US$191 million, according to Stanford University’s AI Index.
