China’s security chief calls for greater use of AI to predict terrorism, social unrest
Artificial intelligence can complete tasks with a ‘precision and speed unmatchable by humans’, official says
China’s domestic security and intelligence chief has called on the country’s police to use artificial intelligence to improve their ability to predict and prevent terrorism and social unrest.
Meng Jianzhu, head of the Communist Party’s central commission for political and legal affairs, which oversees the country’s massive security and intelligence apparatus, pledged at a meeting in Beijing on Tuesday to use AI, through machine learning, data mining and computer modelling, to help stamp out risks to stability.
“Artificial intelligence can complete tasks with a precision and speed unmatchable by humans, and will drastically improve the predictability, accuracy and efficiency of social management,” Meng was quoted as saying by Chinese news website Thepaper.cn on Thursday.
Meng said security forces should study the patterns that emerge across terrorist attacks and public security incidents and build a data analysis model to improve the authorities’ ability to predict and stop such events from taking place.
The idea has echoes of Steven Spielberg’s science fiction thriller Minority Report, in which the authorities use mutated humans with precognitive abilities to predict when and where crimes will take place.
Meng said the security services should break down any barriers in data sharing to enable the smooth integration of various systems.
He also called for renewed efforts to integrate all the footage from surveillance cameras around the country.
It is not the first time the security chief has stressed the need for advanced technology to strengthen the country’s sprawling surveillance network and help combat crime.
Meng said last month during a five-day inspection in the restive Xinjiang Uygur autonomous region in western China that large-scale use of cloud computing and AI, as well as analysis of “big data”, should be used to fight terrorism.
Zunyou Zhou, a counterterrorism law expert at Germany’s Max Planck Institute for Foreign and International Criminal Law, said Xinjiang – where the government has vowed to combat what it calls the rising threat of terrorism and extremism – could provide a testing ground for cutting-edge technologies.
However, he warned that the government’s unbridled access to massive amounts of personal data could lead to abuse.
“China has no specific data protection law. The government can use personal data in any way they like, which could pose a huge threat to its citizens’ privacy,” he said, adding that legislation on the issue was urgently needed.
Maintaining social stability is one of the key tasks Beijing has set for its fledgling AI industry.
The State Council unveiled a national artificial intelligence development plan in July, with the aim of raising the value of its core AI industries to 150 billion yuan (US$22.8 billion) by 2020 and 400 billion yuan by 2025.
The blueprint explicitly lays out AI’s role in helping to manage public security, such as developing products that can analyse video footage and identify suspects from the biometrics of their faces and bodies.
Some technologies, such as facial recognition, have already been put into use, albeit for detecting the perpetrators of only minor crimes. Mainland Chinese media reported this week that traffic police in Shanghai were now using facial recognition technology to identify cyclists and pedestrians caught on surveillance cameras violating traffic regulations.
The government has also directed one of the country’s largest state-run defence contractors, China Electronics Technology Group, to develop software to collate data on citizens’ jobs, hobbies, consumption habits and other behaviour to help predict terrorist acts before they occur, Bloomberg reported last year.
“It’s crucial to examine the cause after an act of terror, but what is more important is to predict upcoming activities,” Wu Manqing, the chief engineer for the military contractor, was quoted as saying.