The mathematics does not lie

20 May 2019

In the wake of Saturday's shock election result, Professor Brian Schmidt examines why polling got the outcome so wrong.

Everyone in my office grew sick last week of my continual complaints about the state of the political polls. Not because of any insights into the results they were predicting, but because they were all saying the same thing with a collective similarity that violates the fundamentals of mathematics.

Since the election was called, there were 16 polls that published two-party preferred results ahead of Saturday's vote. Every single one of them predicted the LNP winning 48% or 49% of the two-party preferred vote, with Labor winning 51% or 52%.

These polls were central to the public's perception of this election, with everyone, including the media, ignoring the polls' underlying uncertainties. Those uncertainties were typically far larger than the differences on which most conclusions from the poll results were based.

In 2019 it's hard to get a poll right. No longer is there an easy way to phone a random sample of people at home using the White Pages. Those people who are contacted are less likely to agree to be surveyed than in decades past. This means that getting a random sample that really represents Australia is harder than ever. 

But the one thing that is almost impossible to avoid is what is called sampling error. This uncertainty in a poll is caused by talking to a subset of people, rather than everyone. And no matter what you do, short of polling more and more people (which is very expensive), you are stuck with it.
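
As a rough illustration of the size of that sampling error (a minimal sketch, not from the original article, assuming a poll of about 1,000 respondents and a result near 50%):

```python
import math

def sampling_error(p: float, n: int) -> float:
    """Standard error of a polled proportion p from a sample of n people."""
    return math.sqrt(p * (1 - p) / n)

# Assumed figures: a two-party-preferred split near 50% and a sample of
# roughly 1,000 respondents, typical of published Australian polls.
p, n = 0.5, 1000
se = sampling_error(p, n)
margin_95 = 1.96 * se  # half-width of a 95% confidence interval

print(f"standard error: {se:.1%}")                  # about 1.6%
print(f"95% margin of error: +/- {margin_95:.1%}")  # about +/- 3.1%
```

In other words, even a perfectly conducted poll of that size should wander by a few percentage points from sample to sample.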

You can think of the uncertainties in the polls much like what happens when you flip a coin 10 times. You can expect to get the 'right' answer of five heads quite frequently, but not every time. It turns out mathematics tells us that you'll only get five heads 24.6% of the time.
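
That figure is a straightforward binomial calculation and is easy to check:

```python
from math import comb

# Probability of exactly 5 heads in 10 fair coin flips: C(10, 5) / 2**10
p_five_heads = comb(10, 5) / 2**10
print(f"{p_five_heads:.1%}")  # 24.6%
```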

If you do a similar calculation for the 16 polls conducted during the election, based on the number of people interviewed, the odds against those 16 polls coming in with the same small spread of answers are greater than 100,000 to 1. In other words, the polls have been manipulated, probably unintentionally, to give the same answers as each other. The mathematics does not lie.
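
To see the kind of calculation involved, here is a rough sketch under assumed figures (not the author's own working): 16 independent polls of about 1,000 respondents each, a true Labor two-party-preferred vote of 51.5%, and results rounded to whole percentage points.

```python
import math

def normal_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Assumed, illustrative figures only.
true_labor = 0.515      # hypothetical true two-party-preferred vote for Labor
n_respondents = 1000    # assumed sample size per poll
n_polls = 16

# Sampling error of a single poll under these assumptions (~1.6 points).
se = math.sqrt(true_labor * (1 - true_labor) / n_respondents)

# Probability one poll's raw estimate rounds to 51 or 52 for Labor,
# i.e. falls between 50.5% and 52.5%.
p_one_poll = (normal_cdf((0.525 - true_labor) / se)
              - normal_cdf((0.505 - true_labor) / se))

# Probability that all 16 independent polls land in that narrow band.
p_all = p_one_poll ** n_polls

print(f"one poll in the 51-52% band: {p_one_poll:.2f}")  # roughly 0.47
print(f"all {n_polls} polls in the band: about 1 in {1 / p_all:,.0f}")
```

Under these assumed numbers the odds come out well beyond 100,000 to 1 against; changing the assumptions shifts the exact figure, but not the conclusion that such a tight cluster is wildly unlikely from sampling error alone.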

I say unintentionally because humans are biased towards getting the same answer as everyone else. We often make subtle choices, even in quantitative analyses, to get the answer we expect. In science this is commonly called confirmation bias, and to avoid it many of the large experiments in physics and astronomy hide the results of an analysis from researchers until it is completely finished.

I don't know why the polls so badly missed the election's actual result. But whatever led the five polling companies to converge illegitimately on the same answer must be a significant contributor. All five need a thorough and independent investigation into their methodologies, and all should agree to better reflect uncertainties in their future narratives.

The last five years have demonstrated to me the fragility of democracy when the electorate is given bad information. Polls will continue to be central to the narrative of any election. But if they begin to emerge as yet another form of unreliable information, they too will be opened up to outright manipulation, and by extrapolation, manipulation of the electorate. This is a downward spiral our democracy can ill afford.

Professor Brian Schmidt is a Nobel Laureate in physics and the Vice-Chancellor of The Australian National University.

This article was first published in The Guardian.