
US-China tech war: Beijing-funded AI researchers surpass Google and OpenAI with new language processing model
- The WuDao 2.0 natural language processing model has 1.75 trillion parameters, topping the 1.6 trillion that Google unveiled in a similar model in January
- China has been pouring money into AI to try to close the gap with the US, which maintains an edge because of its dominance in semiconductors
The WuDao 2.0 model is a pre-trained AI model that uses 1.75 trillion parameters to simulate conversational speech, write poems, understand pictures and even generate recipes. The project was led by the non-profit research institute Beijing Academy of Artificial Intelligence (BAAI) and developed with more than 100 scientists from multiple organisations.
Parameters are the internal variables of a machine learning model. As the model is trained, its parameters are progressively refined, allowing the algorithm to get better at finding the correct outcome over time. Once a model is trained on a specific data set, such as samples of human speech, the result can then be applied to solving similar problems.
In general, the more parameters a model contains, the more sophisticated it is. However, creating a more complex model requires time, money, and research breakthroughs.
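The idea of refining parameters over training can be sketched in a few lines. This is an illustrative toy (not anything from WuDao or Google): a single parameter is fitted to data generated by y = 3x using plain gradient descent, getting closer to the correct value with each step.

```python
# Toy sketch: one parameter, refined iteratively by gradient descent.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, target) pairs from y = 3x

w = 0.0    # the model's single parameter, initially untrained
lr = 0.02  # learning rate: how far each refinement step moves w

for step in range(500):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # refine the parameter toward a better fit

print(round(w, 3))  # converges toward 3.0
```

Models like WuDao 2.0 apply the same principle at vastly greater scale, refining trillions of such parameters instead of one.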

In an era of fast-evolving AI models, BAAI researchers claim to have broken the record set in January by Google’s Switch Transformer, which has 1.6 trillion parameters. OpenAI’s GPT-3 model made waves last year when it was released with 175 billion parameters, making it the largest NLP model at the time.
WuDao 2.0 covers both Chinese and English with skills acquired by studying 4.9 terabytes of images and texts, including 1.2 terabytes each of Chinese and English texts. It already has 22 partners, including smartphone maker Xiaomi, on-demand delivery service provider Meituan and short-video giant Kuaishou.
“These sophisticated models, trained on gigantic data sets, only require a small amount of new data when used for a specific feature because they can transfer knowledge already learned into new tasks, just like human beings,” said Blake Yan, an AI researcher from Beijing.
“Large-scale pre-trained models are one of today’s best shortcuts to artificial general intelligence,” he added, using a term for the hypothetical ability of a machine to learn any task that a human can.
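The transfer-learning behaviour Yan describes can be illustrated with a toy example (hypothetical, not WuDao's actual method): a "pre-trained" feature extractor is kept frozen, and only a tiny new head is trained on a handful of labelled examples for the new task.

```python
# Toy sketch of transfer learning: reuse frozen pre-trained knowledge,
# train only a small new component on a small amount of new data.

def pretrained_features(x):
    # Stand-in for knowledge already learned on a gigantic data set;
    # its parameters are frozen and never updated here.
    return [x, x * x]

# Only four labelled examples for the new task: y = x + 2*x^2
new_task_data = [(1.0, 3.0), (2.0, 10.0), (3.0, 21.0), (0.5, 1.0)]

head = [0.0, 0.0]  # the only parameters we train for the new task
lr = 0.01

for epoch in range(2000):
    for x, y in new_task_data:
        feats = pretrained_features(x)
        pred = sum(w * f for w, f in zip(head, feats))
        err = pred - y
        # Update only the small head; the pre-trained part stays fixed
        head = [w - lr * err * f for w, f in zip(head, feats)]

print([round(w, 2) for w in head])  # approaches [1.0, 2.0]
```

Because the frozen extractor already supplies useful features, the new task needs far less data than training from scratch, which is the "shortcut" large pre-trained models offer.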
Such models act as strategic infrastructure for AI development, said Zhang Hongjiang, chairman of BAAI, on Monday while announcing the project. They are like power plants using data as their fuel, he added, generating intelligence to support AI applications.
BAAI is funded by the Beijing government, which put 340 million yuan (US$53.3 million) into the academy in 2018 and 2019 alone, pledging to continue its support, a Beijing official said in a 2019 speech.
A March report by the US National Security Commission on Artificial Intelligence, which includes former Google CEO Eric Schmidt as a chairman along with representatives from other major tech firms, identified China as a potential threat to American AI supremacy. The Rand Corporation think tank also warned last year that Beijing’s focus on AI has helped it substantially narrow its gap with the US, attributing the remaining “modest lead” held by the US to its dominant semiconductor sector.
