
Over 10 years in the making: China launches 2,000km-wide AI computing hub

Implications are revolutionary for scenarios with extremely high real-time demands such as AI large model training and telemedicine, project chief director Liu Yunjie says

Chinese scientists say their innovation means computing centres separated by distance but linked by an optical network could work together almost as efficiently as a single giant computer. Image: Shutterstock
Zhang Tong in Beijing

China has activated what may be the world’s largest distributed AI computing pool, alongside a high-speed data network developed over more than a decade, according to a state newspaper.

The official Science and Technology Daily said the optical network connected distant computing centres, so they could work together almost as efficiently as a single giant computer.

The 2,000km-wide (1,243-mile) computing power pool formed via this network could achieve 98 per cent of the efficiency of a single data centre, Liu Yunjie, a member of the Chinese Academy of Engineering and chief director of the project, was quoted as saying in the report last Thursday.

China’s top computing centres are scattered across the country, but this pool would allow them to operate as a unified system, working together seamlessly to fast-track the development of the most powerful AI models and other cutting-edge technology.

“The implications of this dedicated data highway are revolutionary for scenarios with extremely high real-time demands, such as AI large model training, telemedicine and the industrial internet,” Liu told the daily.

The Future Network Test Facility (FNTF) is China’s first major national science and technology infrastructure project in the information and communication sector. After more than a decade of development, it officially began operations on December 3.

The facility’s role in advancing the development of artificial intelligence is particularly significant, offering substantial savings in both time and economic cost, according to Liu.

“Training a large model with hundreds of billions of parameters typically requires over 500,000 iterations. On our deterministic network, each iteration takes only about 16 seconds. Without this capability, each iteration would take over 20 seconds longer – potentially extending the entire training cycle by several months,” he said.
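A quick back-of-envelope check of those figures, taking the report's round numbers at face value (500,000 iterations, about 16 seconds per iteration on the dedicated network, versus roughly 20 seconds more per iteration without it), bears out the "several months" claim:

# Rough check of the quoted figures (assumed round numbers, not measured data).
ITERATIONS = 500_000      # iterations for a model with hundreds of billions of parameters
SECONDS_FAST = 16         # approximate seconds per iteration on the deterministic network
SECONDS_EXTRA = 20        # additional seconds per iteration claimed without it

SECONDS_PER_DAY = 86_400
baseline_days = ITERATIONS * SECONDS_FAST / SECONDS_PER_DAY
extra_days = ITERATIONS * SECONDS_EXTRA / SECONDS_PER_DAY

print(f"Training time at ~16 s/iteration: about {baseline_days:.0f} days")  # ~93 days
print(f"Added time at +20 s/iteration:    about {extra_days:.0f} days")     # ~116 days, i.e. several months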
