Presidents Joe Biden and Xi Jinping have affirmed the need to address the “risk of advanced AI systems and improve AI safety”. Photo: AP

China, US see the risks AI systems can bring. But can they see past military tech rivalry?

  • US-China consensus on regulating the military use of AI mentions no specifics, amid challenge posed by the lack of a common definition for such systems
  • Mutual vulnerability could still push the rival powers to create a common set of binding regulations, observers say
Even as China and the US race for supremacy in the military use of artificial intelligence, their presidents have recognised the need to “address the risk of advanced AI systems and improve AI safety”.
Xi Jinping and Joe Biden, meeting for a rare summit in San Francisco last month, agreed to work together on regulating the military application of AI.

However, there were no specifics, and differences remain between the rival powers.

“I don’t know if they can go beyond what has already been agreed on at the international level,” said Dr Guangyu Qiao-Franco, an assistant professor at Radboud University in the Netherlands who specialises in politics and AI.

In 2019, China, the United States and 96 other countries devised a set of guidelines on the use of lethal autonomous weapon systems, or LAWS, which can be enhanced with AI. Meeting in Geneva, they agreed that humans must remain responsible for using such systems, that their development should comply with international humanitarian law, and that they should be fully tested before being deployed. The guidelines, however, are not legally binding.

“I’m not optimistic, to be honest, because [the US and China] are still too divided,” Qiao-Franco said. “I feel like the US has this incentive to limit China’s technology development. And then also China, of course, wants to increase its technology independence and wants to reduce those technology choke points.”


The US has warned that Chinese AI leadership in emerging applications, such as computer vision and autonomous underwater vehicles, could blunt the US’ edge. It has also flagged concerns about China’s military-civil fusion strategy on hi-tech development.

The military can use AI technologies for many purposes. The main concern was how harm to civilians could be minimised when AI was used in combat, said Neil Davison, a senior scientific and policy adviser at the International Committee of the Red Cross.

AI can use image recognition to identify targets, be they people or sites of interest. It can analyse data to help humans make better decisions on the battlefield, and find weak points for cyberattacks capable of disrupting essential systems used by hospitals and other civilian infrastructure.

Enemy nations can also use generative AI, some of it already widely available for free on systems such as ChatGPT and Stable Diffusion, to churn out disinformation for a tactical advantage, such as misleading the civilian population about an imminent attack.

States should watch these areas, Davison said, adding that regulation should focus on specific applications of AI, not only stipulate general principles on the technology itself.

Beijing and Washington have separately endorsed the view that keeping humans in control of AI systems is key to their use in the military.

China emphasised this in its 2021 position paper on the military application of AI, calling for AI weapon systems to always remain under human control so that a human can suspend their operation at any time.

Washington has criticised terms such as “meaningful human control” for being vague, instead preferring “within a responsible human chain of command”. Disagreement over language such as this has precluded consensus in United Nations discussions over LAWS.

But international agreements to set parameters for the military use of AI rarely come with concrete mechanisms for their regulation.

The US last month led 46 other countries, mainly key allies and partners, in pledging “responsible” military use of AI, including measures such as transparency standards and adequate training for users. China was notably absent from the declaration, which mentioned no enforcement mechanism.

The first challenge is the lack of a definition. Countries are still debating what LAWS means. Without a common definition, regulating or even banning them with a treaty, for example, would be impossible.

That debate has largely split developed and developing countries, Qiao-Franco said. Richer states, which can invest heavily in research and development, want restrictions on LAWS to be narrowly defined so they can keep developing more precise and stable AI-led weapons and equipment.

Poorer states, which lack similar resources, argue for a wider definition and tighter curbs on LAWS because they see themselves as potential victims or targets of such weapons.


China’s position is unique. Beijing wants to be the voice of the Global South, with Xi declaring that China will always be a developing country. Yet it has invested substantially in AI research and presented a narrow definition of LAWS, characterised by lethality, the absence of human intervention during execution, the impossibility of termination, indiscriminate effects, the ability to evolve, and an expansion of functions beyond human expectations.

The Pentagon, meanwhile, defines them as weapon systems that can “select and engage targets without further intervention by a human operator” once activated.

“It seems to have adopted this lone wolf approach,” Qiao-Franco said of China. “It’s a position that is quite different from … any other country so far.”

Davison said: “Ultimately, there remains interest, even for major military powers, to set constraints.

“The advantages that might be gained by certain uses of military AI might also … leave vulnerabilities to their societies, their militaries, their own soldiers.”

That mutual vulnerability could push China and the US to create a common set of binding regulations on the military application of AI.

The two countries have been holding backchannel meetings on AI, including between Tsinghua University’s Centre for International Security and Strategy and Washington-based think tank the Brookings Institution.

A good starting point would be a joint US-China statement on the importance of human control in making decisions over nuclear arms, said Dr Lora Saalman, a senior researcher at the Stockholm International Peace Research Institute.

But devising an AI version of the Nuclear Non-Proliferation Treaty (NPT), which aims to confine such weapons to only five countries, would be difficult, she said.

“The lack of ability to verify compliance, combined with the overall speed of AI technological advancement, would likely derail any efforts to conclude an AI NPT,” Saalman said.


Countries other than the “nuclear five” – China, France, Russia, the US and the United Kingdom – are also believed to have developed nuclear capabilities, including India, Pakistan, Iran and North Korea.

Such a statement would build on existing commitments: France, the US and the UK have pledged to maintain human control in AI. Though China does not have an identical stance, the principle could evolve into concrete nuclear policy, Saalman said.

On Friday, China joined the US, the UK, Australia and 24 other countries, as well as the European Union, in signing the “Bletchley Declaration”, which aims to boost global cooperation on AI, recognising its risks and the importance of reducing bias and increasing human oversight.

Saalman said given China’s “no first use” commitment on nuclear weapons, and expectations of a formal US-China channel to be set up for AI talks following the Biden-Xi summit, “there may be enough of a foundation to achieve a joint statement on human control in nuclear decision-making”.
