Crackdown on Chinese accounts shows US social media giants becoming ‘more proactive’
- Speed and coordination of response by Twitter, Facebook and YouTube to alleged disinformation campaign on Hong Kong protests ‘not seen before’
- Pressure from US, European Council and others has pushed them to do more to monitor politically motivated and state-sponsored activities, analysts say
The swift crackdown on an alleged disinformation campaign linked to Hong Kong’s anti-government protests shows social media platforms Twitter, Facebook and YouTube are becoming more proactive about dismantling influence operations, according to analysts.
They cited the contrast between the speedy, synchronised move by the US companies and their previously sluggish record of combating such activity on their platforms.
Pressure from the United States, the European Council and others to root out political disinformation, and the release of former special counsel Robert Mueller’s report in April detailing Russia’s use of social media to meddle in the 2016 US elections, had pushed the platforms to do more to monitor politically motivated and state-sponsored activities, they said.
“We haven’t seen this kind of speed and coordination before [from the platforms], it’s always been dragging and kicking social media companies to do something – here they seem to have been more proactive. The explanation has to be more than technical, and it has to be more than geopolitical,” said David Fidler, adjunct senior fellow for cybersecurity at the Council on Foreign Relations in New York.
He noted that the move was made within the context of a strong US agenda to counter China’s global influence and cyber capabilities.
“If you connect improved business understanding and technical capabilities with a long-standing monitoring of these companies’ relationships with China, you can see that the groundwork was in place for a swifter response than we’ve seen on anything else,” Fidler said.
Statements from Facebook and Google also noted coordination between the companies in their investigations. Facebook confirmed it had acted on a tip from Twitter, while Google confirmed exchanging information with “industry partners” when investigating threats.
That level of coordination was indicative of how the companies had “scaled up” their investment around security and detection of platform manipulation after the 2016 US elections, according to Jake Wallis, senior analyst at the Australian Strategic Policy Institute’s International Cyber Policy Centre.
The platforms came under intense political and public scrutiny for not reacting adequately to Russia's 2016 misinformation campaign until the election was over.
Wallis said the platforms had been “learning from experience” and would likely direct more attention to places going through politically critical moments.
“What was particularly notable from Twitter’s and Facebook’s announcements was just how forward leaning they were prepared to be in attributing this activity to a state link. I don’t think that’s a step they would have taken lightly, all three would wish to grow some sort of market in China,” he said.
Detecting that the accounts were linked to the Chinese government would have taken an advanced combination of algorithmic and human investigation.
These are capabilities the companies have been building gradually since Islamic State's social media campaigns, starting in 2014, raised a red flag about how the platforms could also become fertile ground for extremism, misinformation and manipulation, according to Fidler.
Algorithms do the automated detective work: they identify accounts that do not behave the way a human would and establish connections between accounts with similar behaviours to measure the operation's scale. Human employees then connect the dots, drawing on an understanding of the political context surrounding the mechanised findings.
“One of the key factors that the platforms would be drawing on here [is asking] why would a network of accounts behave in that way, what are they trying to drive at? What message are they trying to amplify, how is that message being targeted?” Wallis said.
“It’s really a combination of technical signals, data points, the geopolitical, strategic understanding that you overlay on that technical data.”
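The first, algorithmic stage of that process can be illustrated in miniature. The sketch below is purely hypothetical (the account names, behavioural features and similarity threshold are invented, and real platforms use far richer signals); it simply shows the idea analysts describe: accounts run by the same operation tend to behave in near-identical, mechanised patterns, which pairwise similarity over behavioural features can surface for human review.

```python
# Toy illustration of behaviour-based account clustering.
# All account names, feature values and thresholds are invented.
from itertools import combinations
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two behavioural feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical per-account features: share of posts in each of four
# daily time windows, plus the fraction of posts that are retweets.
# A human posts unevenly; coordinated accounts post in lockstep.
accounts = {
    "organic_user":  [0.10, 0.45, 0.35, 0.10, 0.20],
    "network_bot_1": [0.00, 0.98, 0.02, 0.00, 0.95],
    "network_bot_2": [0.00, 0.97, 0.03, 0.00, 0.96],
    "network_bot_3": [0.01, 0.96, 0.03, 0.00, 0.94],
}

def suspicious_pairs(accounts, threshold=0.999):
    """Flag pairs of accounts whose behaviour is nearly identical."""
    return [
        (a, b)
        for a, b in combinations(accounts, 2)
        if cosine(accounts[a], accounts[b]) >= threshold
    ]

pairs = suspicious_pairs(accounts)
```

Running this flags only the three bot-like accounts as mutually similar, leaving the organic account unflagged; in practice, such clusters would then go to human analysts for the geopolitical interpretation Wallis describes.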
In the case of Hong Kong, it may not have been particularly hard for the platforms to discern what was happening, compared with Russia’s US-focused misinformation, which was established over a matter of years, analysts agreed.
“This does not have the hallmarks of a long-running effectively planned influence operation similar to what we saw from Russia in the 2016 elections,” Wallis said. His team’s analysis of the files released by Twitter found traceable accounts and “aged accounts” that appeared to have been for hire.
Another red flag was what Twitter described as activity from “specific unblocked IP addresses”, which in the mainland would indicate that they were special government-approved channels that did not require the use of a virtual private network to access a website outside China’s Great Firewall. Twitter, Facebook, and Google are all blocked in mainland China.
Andre Oboler, head of the Online Hate Prevention Institute in Sydney and a senior lecturer in cybersecurity at La Trobe University’s Law School in Melbourne, said the platforms’ swift response to the disinformation campaign showed how far they had come even since the 2016 US elections.
But he said their findings that it was a state-linked activity showed how significant a security threat state actors were to the platforms. That was a newer trend than misinformation campaigns or other nefarious activities launched on the platforms by non-state actors, hacking syndicates or special interest groups.
Oboler added that after the latest attempts to sway opinion on Hong Kong, rooting out the work of state actors on their platforms “is now one of the major ongoing threats they need to work on”.