Explaining why so many forecasts fail

by Nate Silver

The Penguin Press


Why did economists fail to predict the 2008-09 financial crisis? Why didn't seismologists see last year's Japanese earthquake coming? Why did US intelligence miss both September 11 and Pearl Harbor?

This clever book, written by an expert in political and sports forecasting, examines the techniques behind predictions and explains why they so often fail. It's a neatly written work, better researched and argued than its chatty, off-the-cuff style suggests.

The book's premise centres on "data-driven prediction", an approach made possible by the sheer quantity of information available in this information age: crunch enough of it - "big data" - and reliable forecasts of the future should follow.

But data-driven predictions rarely work, says Silver. Much of the data is irrelevant or incorrect, and forecasts built on it inherit those flaws. The skill is to find the signal in the noise - the relevant data amid the slew of garbage. That is easier in some fields than others, but it remains a near-impossible task.

To make his point, Silver examines the techniques of economists, weather forecasters, seismologists, political analysts, and a host of others who make a living from trying to predict the future.

Each field has its own methods and failings, but some general patterns emerge. Complexity theory explains some of the failures: systems such as earthquakes involve so many interacting variables that it is probably impossible to amass enough data to make a correct prediction. Others, such as the weather, are chaotic: minute fluctuations in initial conditions produce outsized effects elsewhere, and forecasters' computers cannot keep up with them - if they detect them in the first place. A toy demonstration of that sensitivity appears below.
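The logistic map is a standard classroom illustration of this kind of chaos - it is not an example from the book - showing how two starting values that differ by one part in a billion soon bear no resemblance to each other:

```python
# Logistic map with r = 4, a textbook chaotic system: tiny differences
# in initial conditions grow until the two trajectories are unrelated.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.200000000, 0.200000001  # differ by one part in a billion
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

Within a few dozen steps the gap between the two runs is as large as the values themselves: no amount of computing power rescues a forecast whose starting measurement was off by a billionth.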

Silver begins his investigation by assessing why economists failed to predict the recent financial crisis. The major reason, he concludes, was that investors bought collateralised debt obligations (CDOs) - the financial packages that ultimately brought down the system when holders of sub-prime mortgages in the US began to default - because the packages carried high ratings from trusted agencies such as S&P.

But the ratings, which relied on predictions of what was likely to happen in the real world, were not grounded in any relevant data, because CDOs were a relatively new and untested product. The ratings were approximations built on conjecture, and they were horribly wrong: they implied a default rate of 0.12 per cent, but the actual rate was 28 per cent - more than 200 times higher.

Forecasting gives those who use the results some leeway in how they apply them, Silver says, and in the book's final chapters he attempts to shore up the prediction trade with Bayes' theorem, a mathematical formula that emphasises probability over certainty.
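For readers curious about the formula itself, Bayes' theorem updates a prior degree of belief in a hypothesis as new evidence arrives. The worked numbers below are illustrative only, not an example taken from the book:

```latex
% Bayes' theorem: probability of hypothesis H after seeing evidence E.
\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                   {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% Illustrative numbers: a 5 per cent prior, evidence that is 90 per cent
% likely if H is true and 10 per cent likely if it is not.
\[
P(H \mid E) = \frac{0.9 \times 0.05}{0.9 \times 0.05 + 0.1 \times 0.95}
            = \frac{0.045}{0.14} \approx 0.32
\]
```

The evidence lifts a five per cent hunch to roughly a 32 per cent probability: a stronger belief, but nowhere near certainty, which is exactly the posture Silver argues forecasters should adopt.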

By then, however, he has thoroughly demolished the credibility of the pursuit.
