Google’s automated system for matching marketers with websites sometimes places advertisements for brands on sites with which they would prefer not to be associated. Photo: Reuters

Google helps place ads on sites amplifying coronavirus conspiracies

  • Research group the Global Disinformation Index has found that Google placed ads on sites that run baseless claims about Covid-19
  • The ads were placed through Google’s automated system for matching marketers with websites

Google has taken aggressive action to scrub coronavirus conspiracies from its news service and YouTube, at a time when social media companies have come under intense scrutiny for their potential to spread dangerous disinformation about the global pandemic.

It has begun labelling misleading videos aimed at US audiences, and has joined with other major internet companies to coordinate a response against what the World Health Organisation has described as an “infodemic”.

But Google is also placing advertisements on websites that publish the theories, helping their owners generate revenue and continue their operations. In at least one instance, Google has run ads featuring a conspiracy theorist it has already banned.

One ad for Veeam, an independent Microsoft 365 backup service, appeared atop a website featuring an article that falsely claims Microsoft Corp founder Bill Gates’ charitable efforts on pandemics and vaccines are part of a world domination plot.

A Microsoft Teams ad ran with a French-language article alleging that Gates tried to bribe Nigerian lawmakers to vote for a Covid-19 vaccine. An ad for the telecommunications provider O2 appeared on another article linking the virus to 5G mobile networks, a common conspiracy theory. The ads were placed through Google’s automated system for matching marketers with websites.

How the coronavirus is testing social media’s efforts to stem the flow of fake news amid global public health crisis

The Global Disinformation Index, a research group, recently reviewed 49 sites running baseless claims about the coronavirus, including the stories about Gates and 5G networks. Alphabet’s Google placed ads on 84 per cent of them, generating most of the US$135,000 in revenue the sites earned each month, according to the Global Disinformation Index’s estimate.


Google has faced criticism for funding hyper-partisan publishers such as Breitbart News in the past. The company has avoided making blanket policies about which publishers can run its ads. Instead, it removes ads only from the specific pages carrying content that violates its content policies. It also allows advertisers to blacklist specific sites. The company has been particularly reluctant to take action with political ramifications now that the Trump administration is taking concrete action to punish companies that it argues show bias against conservative viewpoints.

Christa Muldoon, a Google spokeswoman, said none of the web pages flagged by the Global Disinformation Index violated its policies. “We are deeply committed to elevating quality content across Google products and that includes protecting our users from medical misinformation,” she said. “Any time we find publishers that violate our policies, we take immediate action.”

Google scrubs coronavirus misinformation on search, YouTube

Google’s network ad system is a massive machine for automatically generating money for its owner. Websites apply for Google’s programme and add display banner and pop-up advertisements to their pages. Google’s system automatically fills these slots with digital marketing and takes about 30 per cent of the revenue they generate. Although Google offers a level of control to its marquee advertisers, the self-service system sometimes places ads for brands on websites with which they would prefer not to be associated.

Google’s systems have recently placed ads for eBay, Oracle Corp and HBO on websites that routinely publish conspiracy theories, according to the Global Disinformation Index.

Another company that placed ads on the sites in the study was Criteo. When contacted by a reporter about an ad mentioned in the report, Luca Sesti, a spokesman for the company, said it was breaking off its commercial relationship with the website in question. “In the event we find a partner is not adhering to our policies, we will terminate the relationship immediately,” he said. “We recognise that the dissemination of inaccurate information through ‘fake news’ is a very real problem on the internet.”

5G virus conspiracy theory fuelled by coordinated effort involving bot accounts, researchers say

Often the ads the researchers found made for uncomfortable pairings. The O2 ad ran alongside an article promoting false claims that 5G wireless technology causes people to experience symptoms of coronavirus because it “poisons their cells.”


“This is a huge issue that Google needs to tackle now,” said Craig Fagan, programme director at the Global Disinformation Index. “It is creating a financial incentive for these websites to continue promoting the conspiracy theories. You go to these sites and there are ads galore, pop ups everywhere. The ads are there to get clicks, monetising each reader.”

In one case, Google accepted ad revenue from a company promoting a conspiracy theorist it tried to remove from its own platforms.


In early May, YouTube removed the account of David Icke, a British provocateur who often ranted about “Rothschild Zionists” controlling global institutions and has questioned the efficacy of vaccines. In a recent interview about Covid-19, he said that 5G makes people sick and sends out signals that can control their emotions. Icke had posted on YouTube for more than 14 years.

In a pandemic, lies cost lives. Misinformed people put us all at risk through their reckless actions
Imran Ahmed, CEO of the Centre for Countering Digital Hate

Guillaume Chaslot, a former Google engineer and founder of the research group AlgoTransparency, estimated that Icke’s YouTube channel gained 200,000 subscribers during March and April, when he largely touted unproven theories about the virus. Chaslot’s research tracks how often YouTube’s recommendation system sends viewers to particular videos and channels. In a 10-year span, YouTube promoted Icke’s videos about a billion times.


YouTube removed Icke’s account for violating its rules about coronavirus disinformation. Since then, Icke has appeared on other YouTube channels and in YouTube ads for Gaia Inc., a streaming network that promotes yoga and alternative healing. “We have to break out of this perceptual prison,” Icke said in a voice-over during an ad that ran weeks after his ban. Gaia’s network runs several shows featuring Icke. On a recent earnings call, Gaia executives said YouTube had become a “pretty significant” way to get new subscribers.

Gaia did not respond to requests for comment.

Imran Ahmed, chief executive of the Centre for Countering Digital Hate, a UK non-profit, argues that social media platforms should remove Icke entirely. “In a pandemic, lies cost lives,” said Ahmed. “Misinformed people put us all at risk through their reckless actions.” His group estimated that Icke earned about US$177,000 a year from YouTube ads before the ban.

In early May, YouTube removed the account of David Icke, a British provocateur who often questioned the efficacy of vaccines. Icke, who has posted on YouTube for more than 14 years, recently said that 5G makes people sick and sends out signals that can control their emotions. Photo: Agence France-Presse

Jaymie Icke, a spokesman for Icke’s video service Ickonic, said the earnings estimate was inaccurate because YouTube has restricted ads on controversial videos for several years. “Revenue is nothing and has been for a while,” said Icke, who is David Icke’s son. “They removed all ads from the channel two months prior to the full deletion anyway,” he said. “So that figure has simply been made up.”


Icke and others blocked from the site are allowed to appear on other accounts and in ads as long as those videos do not break rules, according to Muldoon, the Google spokeswoman.

While web giants like Google have tried to curb conspiracy theories on their user-generated services, they have also pledged to reform their ad systems to address the growing problem. In October 2018, Google and Facebook signed a European Union code of conduct on disinformation that contained a commitment to “improve the scrutiny of advertisement placements to reduce revenues of the purveyors of disinformation.”


Chinese respiratory disease expert on origins of Covid-19 and Wuhan virus lab conspiracy theories

According to Fagan, however, the issue remains a blind spot for the companies. Some of the conspiracy websites attract a large number of visitors, promoting their content across social media platforms.

The 49 websites promoting Covid-19 conspiracies reviewed by the Global Disinformation Index were just a small sample, offering a snapshot of a much larger problem, Fagan said. Last year, the Global Disinformation Index published a study of about 20,000 websites promoting disinformation and conspiracy theories. It estimated that they were generating US$235 million every year in advertising revenue, around US$86.7 million of which was paid out by Google.