As survey researchers, often looking to gain the biggest headline or snag the biggest client, we sometimes lose sight of the difference between the answers our data can provide and those that it can’t. Ideally, we would treat our data as the key that helps unlock the latent constructs (i.e., feelings, beliefs, and/or behaviors) that underlie the responses we receive. But unfortunately, we, and the people who report on our polls, often neglect our data’s limitations. We speculate about the underlying reasons for particular results without making it clear to our audience where our data ends and our conjecture begins. Our musings can be taken very seriously, often more seriously than the actual toplines and crosstabs we report. Because the interpretations of our polls are what are most likely to shape public discourse and opinion, I believe we have a responsibility to police ourselves and our colleagues when our interpretations stray too far from our data.
Within the past few months, at least two problematic poll reports were shared on the AAPOR listserv. My intention is not to single out these two polls as unusually problematic. They are simply a “convenience” sample that helps underscore a number of pervasive problems. In one of the reports, the public was told that:
In a case of serious buyer’s remorse, one-third of Americans say the United States would be better off if Secretary of State Hillary Clinton were president…[1]
In the other, we learned that:
The binary “pro-choice”/“pro-life” labels do not reflect the complexity of Americans’ views on abortion. Seven-in-ten Americans say the term “pro-choice” describes them somewhat or very well, and nearly two-thirds simultaneously say the term “pro-life” describes them somewhat or very well.[2]
Unfortunately, in both cases, these startling conclusions may not be well supported by the data. In the former, the inferences simply appear to fall outside the basic parameters described by the data. The latter embodies a more subtle problem: the interpretation takes the item responses at face value. I would suggest that interpreting these particular items also requires accounting for other potential influences on people’s answers, such as differences in knowledge and the poll’s social and political context.
The first of the two reports is the more blatantly problematic. It is difficult to square the reported data with the “serious” implication that Americans wish they had chosen Hillary Clinton over Barack Obama.[3] First, and most glaringly, if one-third of respondents said that the country would be better off with Hillary Clinton as president, then presumably two-thirds must not have said that. If anything, this suggests a surprising lack of buyer’s remorse, given that President Obama’s concurrent popularity ratings were only in the low- to mid-40 percent range. Second, the term “buyer’s remorse” assumes that the individuals who were remorseful are the same ones who were buyers. But the report doesn’t discuss the percent of individuals who supported Obama in a 2008 Democratic primary or caucus (i.e., “buyers”), and then changed their preference to Clinton (i.e., expressed remorse at their original choice). In 2008, the two candidates received nearly identical national vote totals among Democratic primary voters.[4] Accordingly, in a survey of Democrats, it should not even be particularly surprising to find that nearly half think the country would be better off under a Clinton presidency. But in fact, 57 percent of Democrats thought things in the country would be about the same whichever of the two Democrats were president. This simply does not seem to reflect a headline-worthy level of regret about making the choice of Obama over Clinton.
Perhaps just as astounding is that only 29 percent of respondents thought the country would be better off with John McCain as president, compared with the 35 percent who said the country would be in worse shape. But this story line was largely buried. I can only surmise that this was an editorial choice: perhaps “America’s electorate longing for Clinton” is a more compelling story line than “America’s electorate hasn’t changed its mind much regarding the 2008 General Election.” Yet, given Obama’s aforementioned low approval ratings, the second story line strikes me as fairly compelling, and it has the advantage of seemingly being the more accurate of the two. If anything, the appropriate headline for this poll may be the country’s “seller’s relief” at not electing McCain rather than its “buyer’s remorse” over electing Obama.
The second issue is more subtle than the first. In fact, the only reason it came to my attention is that I was called as a respondent in the poll, so I experienced the script live and in its entirety. The trouble involves often overlooked issues of context, including respondent knowledge and confidence, and a number of sources of bias particularly familiar to my psychology colleagues—social desirability and demand characteristics. In the poll, respondents were asked, in separate questions, to rate how well the terms “pro-choice” and “pro-life” describe them. At least 38 percent of respondents[5] indicated that both terms described them, at least “somewhat.”
Unfortunately, there are a number of unexplored potential explanations (other than respondent ambivalence) for the overlap in pro-choice and pro-life self-identification. The first involves participant knowledge and confidence. We often assume that our respondents have at least a basic knowledge of the contemporary political discourse with which we, as pollsters, are so familiar. But my experience, particularly in polls where I have included a response option such as “I don’t know enough about the issue to form an opinion,” indicates that a notable proportion of the American public is not highly conversant with our inside-the-beltway political tropes. In these two poll questions, the key terms were not placed in a clear political context. It is very likely that, had they been framed differently (e.g., “in the context of the debate about abortion, how well does the term pro-choice/pro-life describe you?”), the positive response rates for each of those two terms, as well as the overlap between them, would have been lower.
The desire to provide answers that will be viewed favorably (i.e., social desirability) is a second, related potential interpretive confound. Suppose some portion of the sample did not know exactly what the questions referred to, or knew but was not confident in that knowledge; what then? Speaking to another person on the phone, how likely would these respondents be to implicate themselves as not supporting people’s right to make choices or, even worse, to “admit” that they are pro-death? From that perspective, the overlap in support of these two emotionally laden terms may point to their power as political rhetoric as much as, if not more than, to people’s ambivalence about the policies and underlying political beliefs the terms purport to represent.
But what of the people who were confident about the meaning of those two questions? Even assuming the vast majority of respondents were in this camp, one cannot rule out the potential confound of demand characteristics. These questions were asked about halfway through the poll. They followed a number of other questions about abortion, as well as other hot-button social, moral, and religious issues.[6] In my own experience as a respondent, by the time I got to the pro-choice/pro-life questions, I had already surmised that the poll had been commissioned by a socially conservative advocacy organization. A more typical poll respondent may not have come to that particular conclusion, but could easily have assumed the pollster was interested in assessing their moral standing. This would have been particularly problematic when asking these respondents whether they were pro-life. In that case, some percentage of the positive responses would reflect individuals who truly hold a pro-life political/moral position. But other positive responses may have come from individuals who were simply not willing to admit to a potentially morally critical interviewer that they are not pro-life.
In short, attempting to interpret these results reminds us that our polls aren’t simply collections of independent questions, but rather are conversations. Respondents’ answers to each particular question are influenced by every question that came before, and even by anticipation of the questions that will follow. So, for instance, it is likely that the endorsement rates for both the pro-choice and pro-life questions would have been different if they had been preceded by a long series of questions about non-social political issues (e.g., wind power, tax rates). Overall, because of the contextual elements of the poll, and of the pro-choice and pro-life questions in particular, it is not possible to take at face value the conclusion that Americans are ambivalent about being pro-choice and/or pro-life. That conclusion may be true. But the conservative interpretation would be that, without replication under different contextual conditions, we simply can’t be sure.
Like most of us, I have certainly asked my share of difficult-to-interpret survey questions. For instance, a number of frustrated journalists have asked me to comment on an annual study in which the public rates its confidence in “educational leaders” and “military leaders,” among others.[7] The reporters rightly want to know what the reference points are for these questions: do respondents view educational leaders as school principals, or college presidents? Are military leaders top Pentagon brass, or the boots-on-the-ground officers who lead our forces into battle? My usual response, that we don’t know for sure and would have to ask follow-up questions in order to find out, doesn’t tend to go over particularly well. In fact, I have been told by a number of reporters that I can be frustratingly unquotable.
My hyper-cautious tendencies may be a by-product of my experiences with peer-reviewed academic publishing. But I am not ultimately suggesting that our field would benefit if we were all compulsively guarded to the point of being cryptic. If that were to happen, it’s a good bet that the media and public would simply stop paying attention to us altogether. In fact, I would argue that the media and public rely on us to speculate about the meaning of our data. If we don’t interpret our results, the job will be left in the hands of journalists and political pundits. In that scenario, we would have no influence left to promote the idea that our toplines should dictate the headlines, rather than the other way around. What I do suggest, however, is professional vigilance on our part. We should consistently be willing to recognize the point at which our data ends (or becomes difficult to disentangle) and our own interpretation begins. And even more importantly, we should unfailingly acknowledge to the consumers of our work every time we cross that line. That may leave us a bit less likely to turn public perception on its head. But it may also broaden the public’s trust in us, so that when we do find something groundbreaking, it will be more likely to promote constructive civic discourse than sensationalistic headlines.
Acknowledgment
Thank you to Rebecca Tuhus-Dubrow for providing a journalist’s perspective.
[1] http://www.politico.com/news/stories/0911/63669.html#ixzz1e0NhNj6w
[3] David C. Wilson examined this news story critically using supporting data from other polls (see http://www.huffingtonpost.com/david-c-wilson/run-hillary-run-and-the-s_b_1065022.html). I suggest further that the story is also not supported by the data from within the source poll on which it is based.
[4] http://www.realclearpolitics.com/epolls/2008/president/democratic_vote_count.html
[5] The figure of 38 percent is calculated as the minimum possible overlap of positive responses to two items with positive endorsement rates of 70 percent and 67 percent, respectively.
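As a brief sketch of the calculation behind this footnote (the notation is my own, not the poll report’s), the minimum overlap follows from the standard inclusion-exclusion bound: if proportions $p_1$ and $p_2$ of the same sample endorse two items, then the proportion endorsing both satisfies

$$p_{\text{both}} \ge \max(0,\, p_1 + p_2 - 1),$$

so endorsement rates of roughly 70 percent and 67 percent imply an overlap of at least about 37 to 38 percent of respondents, depending on how the underlying figures were rounded.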
[7] http://www.centerforpublicleadership.org/index.php?option=com_content&view=article&id=355&Itemid=87