The near collective failure by political pollsters to predict Donald Trump’s stunning victory in the US presidential election is the latest in a string of high-profile mistakes by the industry, and some experts say it signals the need for urgent introspection on how data is collected, analysed and interpreted.
The Republican nominee staged unexpected victories in several battleground states, including Ohio and Florida, to sweep into the White House, upending most final surveys, which had predicted Democrat Hillary Clinton would win by a margin of two to four percentage points.
The miss by prominent forecasters such as Nate Silver of the FiveThirtyEight website, as well as international news outlets, follows erroneous predictions on Britain’s Brexit vote and Colombia’s peace deal referendum this year, and on Britain’s general election last year.
“The entire polling industry – public, campaign-associated, aggregators – ended up with data that missed tonight’s results by a large margin,” Sam Wang, a Princeton University data scientist, wrote on the polling website Princeton Election Consortium.
“There is now the question of understanding how a mature industry could have gone so wrong,” wrote Wang.
Singapore-based polling expert Benjamin Detenber said one reason may have been the higher-than-usual turnout – 65 per cent, as opposed to the usual 60 per cent.
“The increase may well not have been proportional... the increase in the number of Republicans that showed up to vote was greater than the increase of Democratic voters that turned out,” said Detenber, an associate professor of communications studies at Singapore’s Nanyang Technological University.
“This would have obviously benefited the Republicans, but it would have also made the weights that many pollsters use on their public opinion surveys less accurate,” Detenber said.
A sense of embarrassment among some voters about being associated with Trump – whose crass comments about women, Muslims and immigrants sparked international outrage during his campaign – may also have skewed the data, if respondents concealed their true preference from pollsters.
Patrick Sturgis, a professor of research methodology at the University of Southampton in Britain, said the media were partially responsible.
“Polls are not really capable of detecting small changes from one candidate to another of one or two percentage points, they simply are not that precise,” Sturgis said. “Yet the horse race story seems to demand that small changes like this drive the news agenda.”
Asia-based pollsters said the failure underscored long-running problems in the industry, including low response rates from key demographics. “There is a question of whether the people who are responding to the polls are the same people who turn out to vote,” said Ibrahim Suffian of Merdeka Centre, a Malaysia-based polling company.
“People who are less educated or earning lower incomes have no time to entertain the pollsters and researchers or to discuss politics, but yet they have strong opinions on the political situation and [credibility] of candidates. This will be reflected in the ballot box but not in the surveys.”
David Black, managing director of Singapore-based Blackbox Research, said state-of-the-art technology was not improving pollsters’ success. Contemporary techniques still failed to account for the “psychological state of people” when they were surveyed, Black said.
Sturgis, the British expert, said the US polling industry was likely to “undertake a thorough investigation of the kind carried out after the [British] general election”.