David Moore’s August 2011 article highlights three enduring challenges facing public opinion researchers. First, non-opinions are frequently overlooked or too casually glossed over. My approach to minimizing this problem is to judge whether all or nearly all of those to be surveyed can reasonably be expected to hold a pre-existing view about the policy, program, issue, individual, or institution of interest. If the answer is yes, then the response options presented may exclude “don’t know,” “not sure,” or “no opinion.” If the answer is no (probably the more common situation), then one such option should be included. In cases when “don’t know,” “not sure,” or “no opinion” is not offered, it should be accepted from respondents who volunteer it. Whether or not such an option is presented, it is generally good practice to make it easy for respondents to opt out by noting that “some people have not heard of or thought much about” [topic]. Admittedly, judging the extent to which all are likely to hold an opinion is not always easy; when uncertain, it is better to assume they do not. One further point: it is often important to distinguish respondents who have given some thought to the issue and come down in between (neither for nor against, but holding some qualified position) from those who have not thought about it at all (true non-opinions).
Second, holding an opinion with very low intensity approaches being a non-opinion. Opinion objects with low salience, those the individual does not care much about one way or the other, should be identified and their prevalence estimated in a comprehensive analysis. Moore nicely illustrates how failing to account for intensity can produce misleading conclusions. Thorough reporting will differentiate between opinions held with varying levels of intensity. Researchers should recognize, however, that being attentive to opinion intensity has two downsides: (1) it requires either a follow-up question for each opinion being measured or response options in the form of a multi-point scale, which can be cumbersome in telephone surveys; and (2) it interferes with crispness (or required brevity) in reporting poll results. I would also add that finding low-intensity views, like finding extensive non-opinions, can run counter to the agenda of poll sponsors, who may resist disclosure.
Third, it is not uncommon for pollsters to try to “educate” respondents about issues that are new, complex, or otherwise thought to be not well understood. This is often done by presenting both “pro” and “con” positions, sometimes resulting in a lengthy or confusing question. Moore describes the results from such questions as “hypothetical opinion” because the responses are contingent on the additional information presented, which some respondents did not have prior to being interviewed. In addition to the difficulty of presenting a balanced assessment in this manner, Moore points out that the practice has been criticized for misrepresenting potential opinion as existing opinion. Eminent pollster Warren Mitofsky was a strong opponent of this practice. I admit to having drafted opinion questions that attempt to educate respondents, and I am no doubt guilty of not qualifying the reporting of those results appropriately. Nevertheless, a blanket prohibition of this approach to question writing would go too far, as it would remove from consideration many issues we would like to address. When choosing to administer such a question, however, it is essential to characterize the results as public opinion given the arguments or positions presented in that particular version of the question.
We would do well in our work to remain cognizant of the potential for non-opinions, low intensity opinions, and manufactured opinions.
Groeneman, Sid. 2011. “Some Reflections on ‘Contemporary Issues with Public Policy Polls.’” Survey Practice, December. www.surveypractice.org