Responses to my article in the August 2011 issue of Survey Practice, “Contemporary Issues with Public Policy Polls,” suggest that there may be three general schools of thought with respect to measuring opinion. These are somewhat arbitrary classifications, and I don’t hold to them tenaciously, but I think they may be useful as a heuristic device to stimulate discussion about the various views that scholars and practitioners have about “public opinion.”
Literalism
At one end of the spectrum is Literalism, what Howard Schuman (in the article in this issue) refers to as “survey fundamentalism” – “the belief that some polls tell us the literal truth about public opinion.”
I believe that most public policy polls reported by the major news media outlets fall into this category. Associated with this school is the implicit, though prevalent, view that public opinion is what the pollsters say it is – regardless of whether pollsters measure intensity of opinion, non-opinion, or hypothetical opinion (see my Survey Practice article for an elaboration of these points).
Oftentimes, pollsters asking about the same issues will produce conflicting results, but rather than assessing which results might be more valid, the effort is to harmonize them – to explain that differences occur because of legitimate differences in question wording, questionnaire context, and timing, but that essentially all are valid measures of a complex issue. Rarely is the judgment made that perhaps one approach is more valid than another. Long live diversity. The more polls, the more nuances we see in what the public is thinking. Pollster.com on the Huffington Post is a good source for this type of no-fault analysis. I wrote many such articles when I was employed by the Gallup Organization.
Nihilism
At the other end of the spectrum is what I would call Nihilism. It’s the notion, articulated by Schuman and Scott (1987), that public opinion measures are so tenuous that, no matter how carefully worded, they cannot provide a valid measure of public opinion. The solution to this problem, the authors write, “requires giving up the hope that a question, or even a set of questions, can be used to assess preferences in an absolute sense.” In his article in this issue, Schuman writes that “Study of change over time or of the differences between educational levels, can provide a plausible basis for a judgment about public opinion, but the marginals in any simple sense should almost never be taken literally, no matter the wording.”
The implication here is similar to that of the Literalism school – that almost any question wording approach is no better than any other, at least in the sense of providing an accurate picture of public opinion. But rather than saying they all provide valid measures of public opinion, this school holds that none of them does, because public opinion itself is too nebulous a concept to measure in any absolute sense.
That’s Schuman’s criticism of the bike lane expansion example I present in the August 2011 Survey Practice article. “Were New Yorkers faced with voting in a referendum on the bicycle lane issue, it’s hard to know which of the questions [presented in a split sample experiment] would be more predictive, if we take predictive validity to be important.”
(Just a reminder: one question showed a substantial majority in support of expanding bike lanes, with a paltry 4% expressing no opinion – though the same poll showed 40% of respondents paying little to no attention to the issue, and only 28% paying a lot of attention. The other question showed a little over a fifth of the public in favor, about the same share opposed, and just over half with no meaningful opinion.)
That Schuman suggests it’s impossible to make a judgment as to which of these two wildly different results provides a more realistic assessment of public opinion is consistent with what I term the nihilistic school of thought.
Explicit in this school of thought is an indeterminate definition of public opinion – essentially the argument that public opinion is too vague a concept for any poll to measure what it actually is.
Realism
In between the two ends of the spectrum is what I would call the Realism school of thought. It holds that polls can give a meaningful measure of public opinion, even in an absolute sense, if they are conducted correctly. It takes into account both non-opinion and opinion intensity, and attempts to differentiate – in the words of Daniel Yankelovich (1991) – between the public’s “top-of-the-mind, offhand views (mass opinion) and their thoughtful considered judgments (public judgment)” – which Yankelovich criticizes most media polls for failing to do.
In the bike lane expansion issue mentioned earlier, the Realist school would argue that a realistic picture of the public, taking into consideration both admitted non-opinion and intensity, suggests that the opinionated public is about evenly divided over the issue, with a little more than half of the residents so unengaged in the issue that they have formulated no meaningful opinion. The exact percentages are less important than the overall picture.
This interpretation clashes with the Literalist school’s view, which accepts the technique of pressuring respondents to make a choice, resulting in 96% appearing to have an opinion – when initially not even a third were following the issue closely.
Implicit in the Realist school of thought is that opinions, as opposed to non-opinions, are views that respondents feel strongly enough about that they want their elected representatives to take such views into account. That was George Gallup’s explicit view (in Gallup and Rae, 1968), when he said polls could provide elected leaders with an ongoing picture of what the public was thinking, so public opinion could be incorporated into leaders’ decisions.
Survey Practice Articles
I would classify all the articles in this issue, except for Schuman’s, in the Realism camp. Initially it appeared that the article by George Bishop and Stephen Mockabee embraced Nihilism. Their critique suggests that measurements over time, even using the same question, do not necessarily provide a realistic picture of trends in public opinion – because the meaning of the questions (even if identical at all time periods) could change from one period to the next. But when I suggested to them that their critique implied no meaningful measures over time could ever be taken, they added a section that recommends using various experimental methods – among them the random probe (originally described by Schuman, 1966) – to clarify how respondents interpret the questions.
The other articles in this issue of Survey Practice all clearly imply that polls can meaningfully measure public opinion (the “will of the public”) on an absolute basis, but only if the polls are conducted properly.
Mike Traugott’s concern, for example, is that this year’s pre-election polls are producing a large variance in their estimates (i.e., significantly different results from each other), which suggests the potential for a polling disaster this presidential election season, similar to what happened four years ago in the Democratic nomination contests, when “the pre-election polls systematically underestimated the winner’s share of the vote by an amount that was typically greater than sampling error would admit.” But it’s not clear why the poll findings are so divergent this year, because poll methodologies are not fully available. Traugott would like all the polling organizations “to be more forthcoming about their methods now rather than trying to recover such information after the fact.”
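To put “greater than sampling error would admit” in rough perspective (my own back-of-the-envelope illustration, with assumed figures, not numbers drawn from Traugott’s article): a simple random sample of 1,000 respondents carries a 95% margin of error of about three percentage points on an evenly split question:

MOE = 1.96 × √( p(1 − p) / n ) = 1.96 × √( 0.5 × 0.5 / 1000 ) ≈ 0.031, or roughly ±3.1 percentage points.

Two such polls could thus differ by a few points through sampling error alone; divergence well beyond that range points to differences in methodology rather than to chance.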
Traugott also objects to the use of national polls of Republicans to characterize how primary voters feel, because the results don’t necessarily reflect the views of early caucus and primary voters in Iowa and New Hampshire. “National polls that include Democrats and Republicans in their samples do not provide any guidance about what might happen in the caucus and primary in these two states and in fact they may be confusing some journalists who are covering the first two events.” Yet national polls are widely used to talk about primary voters (mostly because it’s easier to poll nationally than it is to poll in individual states, even if the national polls are of the wrong electorate).
Seth Rosenthal gives two extended examples of how the wording of the questions and the actual results were not consistent with the widespread interpretation of those results. Often pollsters have to speculate on the meanings of their results, given the ambiguity of some questions, and here Rosenthal writes that “we should consistently be willing to recognize the point at which our data ends (or becomes difficult to disentangle), and our own interpretation begins.”
In an extensive research article, Mark Nance and Michael Cobb examine the consequences of not measuring non-opinion in the area of trade. Their conclusions are worth noting: “First, non-attitudes appear rampant. Secondly, they alter the aggregate distribution of trade preferences, in many cases changing whether a majority supports or opposes it. Overall, our early findings suggest that the variables of most interest to researchers in this field may be affected by non-attitudes and, as such, researchers should be careful to account for the impact of non-attitudes in their analyses.”
Sid Groeneman concurs that in public policy polling, “non-opinions are frequently overlooked or too casually glossed over.” He later notes: “I might also add that findings of low intensity views, like extensive non-opinions, can run counter to the agenda of poll sponsors, who may resist disclosure of such results.”
In the other research paper in this issue, Patrick Murray reports on an example of hypothetical public opinion. The issue: Whether New Jersey Senator Frank Lautenberg, at age 84, was too old to run for re-election in 2008. When respondents were given his age, a majority said he was too old; when respondents were not told his age, a majority said he was not too old. Murray concludes: “Informing the sample of Lautenberg’s actual age skewed the results in a way that no longer reflected what the population of voters actually felt about Lautenberg’s age, but rather how they may have felt if everyone was aware of his age. In reality, most voters did not consider his age to be an issue, either because they underestimated his age or simply did not know what it was and the issue was not salient.”
By not distinguishing between hypothetical opinion and actual extant opinion, the Quinnipiac Poll, which consistently informed its respondents of the Senator’s age, led to misleading media stories about the damaging age factor in the campaign.
Murray’s caveat about how to deal with hypothetical opinion is important: “The bottom line is if you are measuring the potential salience of factual information on opinion formation then be forthright about what you are doing. If, on the other hand, you wish to tap extant opinion representative of a larger population, make sure your question does just that. How pollsters present their findings has as much, if not more, of an impact on the public debate as the questions and results themselves.”
Groeneman agrees that asking questions that produce hypothetical opinion is a useful way to speculate about what public opinion might be, but in such cases pollsters need to carefully qualify their presentation of the results to avoid giving the impression the results represent what the public is already thinking.
Finally, in his commentary in this issue, AAPOR’s current vice president and president-elect, Paul Lavrakas, makes three suggestions “for improving the value that public policy polling has for our nation’s decision-makers and the public at large”: conduct more question wording experiments, pay more attention to the quality of public opinion as described by Daniel Yankelovich, and improve the data analysis.
Lavrakas is particularly concerned about the quality of opinion – differentiating between “offhand views” and “considered judgment,” or what I earlier characterized as the whim vs. the will of the public. He writes: “Here is an arena that I believe AAPOR can and should make a much more muscular effort to raise the quality of public opinion polling by providing more education to public policy pollsters and editors/journalists about how to better measure and interpret the public’s opinion on matters that matter.”
However, Lavrakas is skeptical about improvement in polls: “I am not sanguine that any of these suggestions will be implemented soon or that a meaningful change will result in the quality by which public policy pollsters measure public opinion.” Still, he writes, it’s “another area that AAPOR can (and I believe should) take more aggressive action in the coming decade.”