In the aftermath of the 2008 election, several news stories (here and here and here) have already announced that the polls were mostly accurate in their final predictions of the presidential contest. These encomiums to the polls, however, overlook the fact that during the campaign – even in the last couple of weeks – many polls provided contradictory estimates and trends. Ultimately, most polls converged to a point reasonably close to the outcome, but that raises intriguing questions about why such a convergence occurs (as it has in other presidential elections), and what it means for the “accuracy” of polls during the campaign.
Of course, there is no objective way to assess whether the polls are “accurate” during the campaign, but we can say that not all of the polls were right – because they often contradicted each other.
Thus, in mid-October, the Pew poll showed Barack Obama up by 14 percentage points over John McCain, while the AP/GfK poll found Obama leading by just 1 percentage point – a statistically significant difference of 13 points. A week later, Pew reported a 15-point lead, compared with the 3-point leads reported by IBD/TIPP and GWU – again, a statistically significant difference of 12 points. And polls completed on Sept. 7 by Gallup showed McCain leading by 10 points, while IBD/TIPP showed Obama up by 5 points – a statistically significant difference of 15 points.
These are cherry-picked results, of course, but a systematic analysis shows that they illustrate a broader pattern: poll results varied widely throughout the campaign, right up until the final pre-election polls, when there was a substantial convergence of results.
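Behind the “statistically significant” labels above is a straightforward calculation: the gap between two polls’ reported leads is compared with the sampling error of that gap. Here is a minimal sketch of that test in Python; the vote shares and sample sizes are illustrative assumptions, since the article reports only the leads.

```python
import math

def lead_se(p_a, p_b, n):
    """Standard error of the (candidate A minus candidate B) lead within one poll,
    using the multinomial variance of a difference of two proportions:
    Var(pA - pB) = [pA + pB - (pA - pB)**2] / n
    """
    return math.sqrt((p_a + p_b - (p_a - p_b) ** 2) / n)

def leads_differ_significantly(poll_1, poll_2, z=1.96):
    """Two-sided z-test: do two independent polls report leads that differ by more
    than sampling error can plausibly explain (at roughly the 95% level)?"""
    lead_1 = poll_1["obama"] - poll_1["mccain"]
    lead_2 = poll_2["obama"] - poll_2["mccain"]
    se_diff = math.sqrt(
        lead_se(poll_1["obama"], poll_1["mccain"], poll_1["n"]) ** 2
        + lead_se(poll_2["obama"], poll_2["mccain"], poll_2["n"]) ** 2
    )
    return abs(lead_1 - lead_2) > z * se_diff

# Illustrative shares and sample sizes (assumed, not from the article) for the
# mid-October comparison: Pew had Obama +14, AP/GfK had Obama +1.
pew = {"obama": 0.53, "mccain": 0.39, "n": 1500}
ap_gfk = {"obama": 0.48, "mccain": 0.47, "n": 800}
print(leads_differ_significantly(pew, ap_gfk))  # True: a 13-point gap far exceeds the ~8-point sampling error
```

With samples of roughly these sizes, two simultaneous polls whose leads differ by more than about eight points cannot both be within sampling error of the same underlying value, which is the sense in which the examples above contradict one another.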
Shown below are the variances in Obama’s lead over McCain as reported by the polls conducted during the dates indicated. The final week of the campaign is broken into two segments: the final days (Nov. 1–3) and the preceding four days (Oct. 28–31).
Note that Obama’s average lead varies only slightly from week to week over the month of October, with a range of less than two percentage points. Still, the variability among the polls within each week is substantial compared with the variability in the final three days, when we see the results of the final pre-election polls.
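Since the table itself is not reproduced here, a small sketch of the computation behind it may be useful: group each poll’s reported Obama-minus-McCain lead by field period, then take the mean and variance within each group. The leads below are illustrative placeholders, not the article’s data.

```python
from statistics import mean, pvariance

# Illustrative Obama-minus-McCain leads (in percentage points), grouped by field
# period; these are placeholder numbers, not the article's table.
leads_by_period = {
    "Oct. 5-11":  [3, 11, 6, 8, 4, 10],
    "Oct. 12-18": [14, 1, 6, 9, 3, 8],
    "Oct. 19-25": [5, 12, 7, 10, 3, 8],
    "Oct. 28-31": [9, 5, 11, 6, 8, 4],
    "Nov. 1-3":   [7, 8, 7, 6, 8, 7],
}

for period, leads in leads_by_period.items():
    print(f"{period:12s} mean lead = {mean(leads):5.2f}   variance = {pvariance(leads):5.2f}")
```

The pattern the article describes would appear as roughly stable mean leads across periods, but a much smaller variance in the Nov. 1–3 group than in the October groups.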
The major question raised by these results is this: Why do different polls show such variability over the month of October, and then suddenly converge in the last week of the campaign? Of course, it’s true that opinions “crystallize” in the final weeks, but why should that make polls so inconsistent with one another earlier in the campaign? Shouldn’t polls conducted at the same time produce the same results, even if many people are still mulling over their decisions? Shouldn’t different polls find the same proportion of undecided people?
If it turns out that polls cannot produce “reliable” (consistent) results when many people are still thinking about an issue, what does that say about polls during non-election periods, when polls ask people to express their views on specific policy matters? For the most part, these issues are not the subject of months-long campaigning the way a presidential election is, and there must be many people whose views are not fully crystallized. Does that prevent pollsters from reliably measuring public opinion on these matters?
Next month, the Survey Practice editors will ask many of the media pollsters who polled throughout the election campaign how they explain the convergence phenomenon and what they think it implies for the reliability of polls more generally. In the meantime, we welcome any contributions that readers think would help address this convergence mystery. You can either send your comments in a message to survprac@indiana.edu or post them in the comments below.