
Google AI expert warns of ‘data poisoning’ as Chinese scientists work to ward off emerging threat

  • At Shanghai conference, researcher says attackers can ‘poison’ data sets through subtle tampering to critically harm artificial intelligence models
  • A team in China proposes method to bolster defences against these attacks, which can cause serious damage or security breaches

Chinese scientists say special algorithms could be used to protect AI systems against data poisoning. Photo: Shutterstock
Zhang Tong in Beijing
A Google researcher has warned that attackers could disable AI systems by “poisoning” their data sets, and Chinese researchers are already developing countermeasures against this emerging threat.
At an AI conference in Shanghai on Friday, Google Brain research scientist Nicholas Carlini said that by manipulating just a tiny fraction of an AI system’s training data, attackers could critically compromise its functionality.

“Some security threats, once solely utilised for academic experimentation, have evolved into tangible threats in real-world contexts,” Carlini said during the Artificial Intelligence Risk and Security Sub-forum at the World Artificial Intelligence Conference, according to financial news outlet Caixin.

In one prevalent attack method known as “data poisoning”, an attacker introduces a small number of biased samples into the AI model’s training data set. This deceptive practice “poisons” the model during the training process, undermining its usefulness and integrity.

“By contaminating just 0.1 per cent of the data set, the entire algorithm can be compromised,” Carlini said.
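To make the mechanics concrete, the sketch below shows one simple form of data poisoning, label flipping, on a toy scikit-learn classifier. The dataset, model and poisoning rates here are illustrative assumptions, not Carlini’s actual experiments; randomly flipping 0.1 per cent of labels does little to a simple model, which is why real attacks rely on carefully crafted poison samples to achieve outsized damage at such small contamination rates.

```python
# Illustrative sketch only (assumed setup, not Carlini's experiment):
# a label-flipping "data poisoning" attack on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data standing in for a real training set.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison(labels, rate, rng):
    """Flip the labels of a randomly chosen fraction of training samples."""
    poisoned = labels.copy()
    n_poison = int(len(labels) * rate)
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # flip 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
for rate in (0.0, 0.001, 0.05):  # clean, 0.1% poisoned, 5% poisoned
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison(y_train, rate, rng))
    acc = model.score(X_test, y_test)
    print(f"poison rate {rate:.1%}: test accuracy {acc:.3f}")
```

Targeted attacks of the kind Carlini describes go further: rather than flipping labels at random, the attacker constructs specific samples so that a tiny injection skews the trained model in a chosen direction.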
