This article originally appeared on ABACUS

If you ever find yourself browsing Twitter these days, you might stumble across actress Keira Knightley obsessively sharing news about China. Except it's not really Keira Knightley. It could be a bot using her photo as a profile picture.

This is just one of the signs that a Twitter account is fake and possibly part of a disinformation campaign, according to researchers. This is what happened with state-backed campaigns from Russia during the 2016 US election and from China during the ongoing anti-government protests in Hong Kong. But as the Keira Knightley picture might indicate, many of these accounts are surprisingly unconvincing.

That might not matter, though. “The main thing to remember about bots is that they are not targeting individual users, they are targeting Twitter's algorithms,” said Donara Barojan, a disinformation expert at Astroscreen. The same is true for any other social network where this practice is prevalent, she added.

The main objective of these bots is to get something to trend. The Mandarin word for “thugs,” for example, was used to describe Hong Kong protesters. Once a hashtag is trending, it could be seen by millions. Those who are against the protests might feel validated, while supporters feel outnumbered. And those who are undecided might feel the need to embrace the most popular view, Barojan said.

This explains why spam on social media isn't going anywhere. Just recently, Australian Strategic Policy Institute (ASPI) researcher Elise Thomas uncovered what she described as a “massive spambot network in the making” designed to amplify pro-Beijing messages about the Xinjiang region. China has recently been seeking to justify its treatment of the Muslim Uygur population in Xinjiang, where more than a million Uygurs and other Muslim minority groups have reportedly been sent to detention camps.

A picture of a celebrity is just one sign that an account is fake.
Consistent naming patterns, tweeting random quotes, or saying hello to itself are also indicators, Thomas said. But these types of accounts seem relatively unsophisticated compared with past disinformation campaigns.

The disinformation campaign during the 2016 US election included bots that seemed more like real people. They had carefully composed personas and appeared to be more effective in spreading the fake news and conspiracy theories that kept the US talking for years. Twitter bots said to originate from China don't seem overly concerned with the appearance of authenticity.

In August, Twitter suspended 936 accounts originating from China for what it said was a “coordinated state-backed operation” to sow political discord in Hong Kong. Some of these accounts were more convincing than fake celebrities and were operated by real people. But many were also recycled accounts.

“A more sophisticated campaign would either create its own dedicated accounts or at least clean up the timelines to delete those earlier spam tweets,” Thomas said. In short, they would try to mimic an authentic Twitter user, making them more difficult to detect.

The bots that targeted the anti-government protests would be obvious to anyone who scrolled back far enough through the account timeline. A report from ASPI showed that the network relied on fake accounts that had previously been used for a wide range of advertising and spam-related purposes, sometimes tweeting in completely different languages. Some were used to peddle porn sites, cryptocurrencies, or even hot tubs and bacon.

So how could one hastily composed fake Keira Knightley make us change our opinions on Chinese politics? “When most users browse trending hashtags, they spend little or no time analyzing individual users contributing to that specific conversation unless they have a pre-existing reason to be suspicious,” Thomas said. Not everyone spends time doing their due diligence on Twitter.
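The signals researchers describe — default-looking handles, repetitive quote tweets, accounts greeting no one in particular — lend themselves to simple heuristics. The sketch below is purely illustrative: the function name, thresholds, and scoring are assumptions for this article, not any platform's or researcher's actual detection method.

```python
import re

# Handles like "JaneDoe12345678" — a name followed by a long run of digits —
# can indicate an unchanged system-generated default (or just a lazy user).
DEFAULT_HANDLE = re.compile(r"^[A-Za-z]+\d{8,}$")

def suspicion_score(handle: str, tweets: list[str]) -> int:
    """Hypothetical heuristic: higher score = more bot-like signals.

    Thresholds here are arbitrary illustrations, not validated values.
    """
    score = 0
    if DEFAULT_HANDLE.match(handle):
        score += 1  # handle looks like a system-generated default
    if tweets:
        # Heavy repetition suggests automated quote-spam
        if len(set(tweets)) < len(tweets) / 2:
            score += 1
        # Accounts "saying hello to themselves" repeatedly
        if sum(t.strip().lower().startswith("hello") for t in tweets) >= 3:
            score += 1
    return score
```

A real classifier would weigh many more features (account age, posting cadence, network structure), which is precisely why, as the article notes, conclusive identification is harder than it seems.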
Identifying bots conclusively is less straightforward than it seems. A Twitter handle with eight digits could be a sign of a suspicious account, or just someone who was too lazy to change their handle from a system-generated default.

But in the end, it might not even matter if people realize they're reading posts from bots. Some recent research has cast doubt on the effectiveness of Twitter disinformation campaigns. Researchers from Duke University published a paper in November on how Twitter accounts operated by the Russian Internet Research Agency influenced political attitudes and behaviors. Although the researchers said the study didn't include enough samples for a definitive conclusion, they found no evidence that the accounts made a substantial impact on political attitudes and behaviors.

Even if pro-Beijing bots are effective, they don't seem to be convincing Hongkongers themselves, who overwhelmingly voted for pro-democracy candidates in the recent district council elections. And many in Hong Kong still prefer to stick to Facebook and Telegram. The campaign also wasn't intended for people in mainland China, where Twitter remains blocked by the country's Great Firewall. The suspicious Twitter activity might instead be targeting Chinese citizens living abroad.

However, it's difficult to measure the impact of disinformation, Thomas said. It's possible their purpose isn't to change minds at all. Sometimes the goal is simply to spread confusion and doubt.