Introduction
Counterarguments are not a new question format in survey research, but they remain little examined. In this format, a respondent is first asked his or her position on an issue (the "prior question"). Depending upon that answer, the respondent is then asked, either immediately or several questions later, one or more questions containing seemingly conflicting information and whether he or she would still give the earlier answer or would now change it. Together the prior question and the later counterargument comprise a "cluster" of questions making up the counterargument format. For example, in a March 2017 survey sponsored by the Kaiser Family Foundation, respondents were initially asked if they supported or opposed a requirement that "all private health plans must include coverage for maternity care." Those who initially supported the requirement were then asked, "(W)hat if you heard that the requirement … means some people have to pay for benefits they do not use?" Those who initially opposed the requirement were asked, "(W)hat if you heard that without a requirement … policies that do include maternity care would become very expensive and unaffordable for some people who need maternity service?" In this instance, 28% of those initially in support switched to opposition, don't know, or refused, while 43% of those initially in opposition switched to support, don't know, or refused.
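To make the unit of analysis concrete, the sketch below shows one way such a cluster might be represented as data; the question texts and percentages are taken from the Kaiser example just described, while the structure and field names are purely illustrative and not drawn from iPOLL itself.

```python
# A minimal, illustrative representation of one counterargument "cluster";
# question texts and percentages come from the Kaiser example above, and the
# field names are hypothetical.
kaiser_cluster = {
    "prior_question": ("Do you support or oppose requiring that all private "
                       "health plans include coverage for maternity care?"),
    "counterarguments": [
        {"asked_of": "initial supporters",
         "text": ("What if you heard that the requirement … means some people "
                  "have to pay for benefits they do not use?"),
         "pct_switching": 28},   # switched to oppose, don't know, or refused
        {"asked_of": "initial opponents",
         "text": ("What if you heard that without a requirement … policies that "
                  "do include maternity care would become very expensive and "
                  "unaffordable for some people who need maternity service?"),
         "pct_switching": 43},   # switched to support, don't know, or refused
    ],
}
```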
The iPOLL archive's earliest counterarguments long predate scholarly research on the use of counterarguments in studying political tolerance and racial issues, described below. The earliest counterargument is from a September 1939 Roper survey for Fortune magazine. Gallup's first counterargument appears in September 1950. However, few examples appear until the 1970s, when computer-assisted telephone interviewing made asking counterarguments easier. By the mid-1970s, if not earlier, political campaign polls and market research were incorporating counterarguments to identify persuasive messages (Marshall 2016, 87–95).
Beginning in the 1990s, academics incorporated counterarguments, particularly in studies of political tolerance and racial attitudes. Results from the United States, Russia, South Africa, Denmark, and Canada point to three conclusions. First, presented with counterarguments, many respondents will change their position within the same survey, with switching rates ranging widely from one-tenth to three-fifths of respondents and averaging one-third to one-half of respondents (Fletcher and Chalmers 1991; Gibson 1998; Peffley, Knigge, and Hurwitz 2001; Petersen et al. 2011; Sniderman et al. 1996; Sniderman and Piazza 1993; Tate 2003). Second, switching is especially common among those who were initially tolerant toward unpopular or controversial groups rather than among those who were initially intolerant (Gibson 1998; Gibson and Gouws 2002; Peffley, Knigge, and Hurwitz 2001; see also Marcus et al. 1995; Sullivan et al. 1993). Third, switching is most common among respondents whose initial attitudes were less strongly held or more conflicted, among respondents who were less informed, or when the target group queried was perceived as violent or unconventional (Peffley, Knigge, and Hurwitz 2001; Petersen et al. 2011).
Methods
This article is the first to examine switching rates across a wide variety of survey topics and situations. Data are drawn from the major American online database iPOLL, available at https://ropercenter.cornell.edu/, which archives surveys from many media organizations, research foundations, and other public-release polls. iPOLL was searched for all identifiable counterarguments using phrases and words commonly used in this format, such as "what if you heard," "what if you knew," "what if this meant that," "after hearing this," "still," "suppose," or "now." A total of 138 clusters, comprising 287 counterargument questions, were identified. These clusters date from 1973 through March 2018 and vary widely by issue topic. The most common topics are health care (54% of all clusters), military conflicts (12%), retirement and social security (7%), the federal budget and spending (7%), and (nonmilitary) world affairs (7%). Kaiser's various pollsters, Gallup, CBS/New York Times, and Harris Poll provide the largest numbers of clusters. About half of these 138 clusters include two counterarguments, one apiece for the initial "favor" and "oppose" positions; the remaining clusters include either multiple counterarguments or only a single counterargument for the prior question.
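As a rough illustration of this kind of phrase-based screening (not the actual search procedure used with iPOLL), the Python sketch below flags question wordings containing the listed phrases; the example questions and names are hypothetical, and flagged questions would still require manual review to confirm they belong to the counterargument format.

```python
# Illustrative phrase-based filter for locating candidate counterargument
# questions in a collection of archived question wordings.
COUNTERARGUMENT_PHRASES = [
    "what if you heard", "what if you knew", "what if this meant that",
    "after hearing this", "still", "suppose", "now",
]

def looks_like_counterargument(question_text: str) -> bool:
    """Flag question wordings containing a phrase typical of the format."""
    text = question_text.lower()
    return any(phrase in text for phrase in COUNTERARGUMENT_PHRASES)

# Hypothetical question texts; only the second would be flagged for review.
questions = [
    "Do you favor or oppose the proposed requirement?",
    "What if you heard that the requirement would raise costs? Would you still favor it?",
]
candidates = [q for q in questions if looks_like_counterargument(q)]
print(candidates)
```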
Analysis and Results
This meta-analysis tests six possible predictors of switching from a previously given answer to an opposing position, don't know, or refused: the topic involved, respondents' initial attitudes and interest in the prior question, the counterargument question's format and content, survey artifacts, house effects, and mode effects. Because the iPOLL database includes a wide variety of questions across many surveys, many different predictors of switching can be tested, including some never before examined. Some of the hypotheses described below were identified from past studies; for others, no past literature exists, and the hypothesis is offered only as plausible.
First, switching is common across all major topics and occurs at rates similar to those reported for tolerance and racial attitudes. For all these 287 counterarguments, the switching rates range widely from a low of 2% to a high of 86%, and average 38% (standard deviation of 17%). For the most commonly-asked topics, switching rates averaged 38% for health care; 37% for military conflicts; 35% for retirement and social security; 50% for the federal budget and spending; 37% for world affairs; and 35% for miscellaneous other topics (standard deviations of 15%, 19%, 12%, 22%, 21%, and 17%, respectively). Lower switching rates are predicted on issues typically of greater public interest, such as wars, or of more personal familiarity to the public, such as retirement and social security. These topics and other variables are also reconsidered in a multivariate model below.
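The topic-level summaries above are simple descriptive statistics; a minimal sketch of that computation is shown below, using made-up switching rates rather than the study's data.

```python
# Compute mean, standard deviation, and range of switching rates by topic.
# The rates below are placeholders, not the study's actual data.
from statistics import mean, stdev

switch_rates_by_topic = {
    "health care": [28, 43, 55, 31],
    "military conflicts": [21, 40, 35],
    "retirement and social security": [30, 44],
}

for topic, rates in switch_rates_by_topic.items():
    print(f"{topic}: mean={mean(rates):.0f}%, sd={stdev(rates):.0f}%, "
          f"range={min(rates)}-{max(rates)}%")
```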
The second predictor tested to explain switching rates comprises four indicators of respondents' initial attitudes and interest, all widely reported in previous studies. A respondent's initial ideological position is measured as whether the respondent gave a politically liberal versus conservative answer (coded 1 and 0, respectively); previous studies point to liberal answers as more subject to switching (Feldman and Zaller 1992; cf. Gainous 2008; Jacoby 2002). Respondents who initially favored the prior question, predicted to show higher switching rates (Schuman and Presser 1981), are coded 1; those who initially opposed it are coded 0. Strength of attitude is measured by whether the respondent strongly or only somewhat favored (or opposed) the prior question (coded 1 and 0, respectively); strength of attitude was asked for 111 of these clusters, and more switching is predicted for less strongly held attitudes. For the fourth indicator, respondents were almost never asked about their level of interest in the topics examined here, so the most recent Gallup Poll reading of the public's "most important problem" concerns was used as an indirect (percentage-level) measure of respondent interest; more switching is predicted on issues of little public interest.
The third predictor for switching is the counterargument question's format and content. Three types of arguments were measured (each coded 1 or 0): whether the counterargument asked about a political party or a highly visible political figure, typically a president; about health, death, illness, or disease; or about a financial cost or expense. If the counterargument question explicitly reminded a respondent of his or her prior answer, the counterargument is coded 1, otherwise 0. The number of counterarguments, if any, previously asked of a respondent within a cluster is a ratio-level variable. No past literature exists here, and the hypotheses are offered only as plausible: higher switching rates are predicted if a respondent was previously asked other counterarguments and, for the other measures, if the variable was coded 1 rather than 0.
The fourth predictor for switching involves three survey artifacts. If the prior question provided an explicit "don't know" option, the counterargument is coded 1, otherwise 0. As a caveat, modern polling seldom offers an explicit "don't know" option, and only five prior questions did so. If the counterargument does not immediately follow the prior question, the counterargument is coded 1, otherwise 0. If the prior question was near the beginning of the survey (within the first five questions), the counterargument is coded 1, otherwise 0. Explicit "don't know" options, lags between the prior question and the counterargument, and prior questions placed earlier in the survey are all predicted to lead to higher switching rates.
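A hypothetical coding sheet for a single counterargument, illustrating the dummy coding and other measures described in the second through fourth predictors above, might look like the following; the variable names and values are illustrative and are not the authors' codebook.

```python
# Hypothetical coding of one counterargument question (illustrative only).
example_counterargument = {
    # respondent initial attitudes and interest
    "liberal_initial_answer": 1,     # 1 = politically liberal answer to the prior question
    "favored_prior_question": 1,     # 1 = initially favored, 0 = initially opposed
    "strong_initial_attitude": 0,    # 1 = strongly held, 0 = only somewhat (where asked)
    "pct_most_important_problem": 4, # Gallup "most important problem" reading (percent)
    # counterargument format and content
    "mentions_party_or_figure": 0,   # 1 = names a party or visible political figure
    "mentions_health_or_death": 1,   # 1 = health, death, illness, or disease
    "mentions_cost": 0,              # 1 = financial cost or expense
    "reminds_of_prior_answer": 1,    # 1 = explicitly reminds respondent of prior answer
    "n_prior_counterarguments": 2,   # counterarguments already asked in this cluster
    # survey artifacts
    "explicit_dont_know_option": 0,  # 1 = prior question offered "don't know"
    "lag_after_prior_question": 1,   # 1 = counterargument does not immediately follow
    "prior_question_early": 0,       # 1 = prior question within the first five questions
}
```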
House effects and mode effects, the fifth and sixth predictors tested, have apparently never been examined for counterarguments. However, since these counterarguments were asked by many different pollsters and in three different modes (in-person, telephone, and online), both effects can be tested. Although house effects are sometimes identified in surveys (see, e.g., Flores 2015, 587–590; Hillygus 2011; Jackman 2005), the reasons for them are often difficult to identify and, when identified, result from practical choices made in administering surveys, including question wording, question format, and sampling design. House effects are reported below for CBS/New York Times, Gallup, and Harris; the remaining surveys comprise the control. No predictions are offered for house effects.
For mode effects, the sixth predictor, a common but not universal finding is that, compared to face-to-face or live telephone interviews, online surveys produce lower-quality responses (i.e., more "satisficing") but are also cheaper and less affected by social desirability (Atkeson, Adams, and Alvarez 2014; Bowyer and Rogowski 2015; Ha and Soulakova 2018; Lind et al. 2013; Kreuter, Presser, and Tourangeau 2008). Assuming that switching answers is a form of socially undesirable behavior, higher switching rates are predicted for online surveys and lower switching rates for face-to-face interviews; live telephone interviews are the control group. The multivariate ordinary least-squares regression models, just below, include dummy variables for common pollsters and for face-to-face and online surveys.
Table 1 includes a multivariate ordinary least-squares regression model reporting the findings. The first data column includes the full model; the second data column reports a reduced model. Unstandardized (B) coefficients, standard errors, and significance levels are reported for each model.
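For readers who want to see the shape of such a model, the sketch below fits an ordinary least-squares regression of switching rates on a few of the dummy variables described above; the column names are hypothetical stand-ins and the data are randomly generated, so it reproduces the modeling approach rather than Table 1's estimates.

```python
# Illustrative OLS model of switching rates (not the authors' code or data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 287  # one row per counterargument question
df = pd.DataFrame({
    "switch_rate": rng.uniform(2, 86, n),        # percent switching
    "topic_military": rng.integers(0, 2, n),     # 1 = military conflict or war
    "topic_retirement": rng.integers(0, 2, n),   # 1 = retirement/social security
    "favored_prior": rng.integers(0, 2, n),      # 1 = initially favored the prior question
    "n_prior_counterargs": rng.integers(0, 4, n),# counterarguments already asked
    "early_in_survey": rng.integers(0, 2, n),    # 1 = prior question in first five questions
    "mode_online": rng.integers(0, 2, n),        # 1 = online survey
    "mode_facetoface": rng.integers(0, 2, n),    # 1 = face-to-face survey
})

model = smf.ols(
    "switch_rate ~ topic_military + topic_retirement + favored_prior"
    " + n_prior_counterargs + early_in_survey + mode_online + mode_facetoface",
    data=df,
).fit()
print(model.summary())  # unstandardized B coefficients, standard errors, significance
```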
As Table 1 reports, several different types of variables significantly predict switching rates. As expected, significantly lower switching rates occur for questions about higher-profile or more familiar topics such as military conflicts and war or retirement and social security. Also, as expected, higher switching rates occur if a respondent initially favored the prior question, if the respondent had previously been asked other counterarguments, or for an online mode. Contrary to expectations, lower switching rates occur if the prior question was asked early during the survey, possibly because respondents are more engaged and willing to switch as a survey progresses. The results are robust. All the statistically-significant predictors in the full regression model remain in a more parsimonious model that explains nearly as much variance with fewer predictors.
As Table 1’s reduced model suggests, counterargument switching rates are quite variable. As an example, if the topic is not widely familiar, if the respondent favored a prior question that was asked late in an online survey, and if three counterarguments were previously asked, the predicted switching rate is 60%. If the topic was a military conflict, if the respondent previously opposed a prior question that was asked early in a non-online survey, and if no counterargument were previously asked, the predicted switching rate is only 21%.
Results are also robust if respondents' strength of attitude is considered for the subset (111 of 287) of counterarguments in which respondents were asked whether they "strongly" or "somewhat" gave their answer to the prior question. Here all the significant predictors remain so except the question on military conflicts and war and the CBS/New York Times house effect; as expected, initially strongly held attitudes had significantly lower switching rates (at the .1 level). This suggests that the CBS/New York Times house effect is partly an artifact of question wording. For this reduced model the predicted switching rate (with standard errors and significance levels) is: 36.98 (5.68***) constant - 9.22 (4.51*) if a question on retirement and social security + 10.22 (2.60***) if the respondent favored the prior question + 2.68 (1.06**) for each prior counterargument asked + 14.17 (4.93**) if an online survey (R² = .34; adjusted R² = .30; standard error of estimate = 13.05).
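Applying the quoted reduced-model equation to a hypothetical respondent profile illustrates how a predicted switching rate is computed; the coefficients below are those reported above, while the example profile is invented.

```python
# Worked example using the reduced-model coefficients quoted in the text.
def predicted_switch_rate(retirement, favored_prior, n_prior_counterargs, online):
    """Predicted percentage switching, per the reduced model reported above."""
    return (36.98
            - 9.22 * retirement           # question on retirement/social security
            + 10.22 * favored_prior       # respondent initially favored the prior question
            + 2.68 * n_prior_counterargs  # counterarguments already asked
            + 14.17 * online)             # online survey mode

# A hypothetical respondent who favored a non-retirement prior question in an
# online survey, after two earlier counterarguments:
print(round(predicted_switch_rate(0, 1, 2, 1), 1))  # -> 66.7
```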
Conclusion
This meta-analysis offers two important findings. First, counterarguments frequently lead survey respondents to switch their answers, and they do so across many different survey situations. Past findings for political tolerance and racial attitudes are not atypical. Across a wide range of issues, an average of 38% of respondents switched their initial positions. This figure provides a useful baseline by which to assess whether switching rates are unusually high, only average, or low, and an empirically based standard by which to judge which counterargument questions are "strong" versus "weak."
Second, although switching is frequent, the highest switching rates occur only under a few conditions, some of them previously unexamined. Switching answers to the opposing position, don’t know, or refused is more common when the prior question is placed later in the survey; if the respondent favored the prior question; or if other counterarguments were already asked of the respondent. Familiar topics beget less switching. House and mode effects also appear. Many seemingly plausible explanations for switching, however, do not significantly affect switching rates, arguably because they are not substantively important, because their effect is collinear with more powerful predictors, or because the number of clusters and counterarguments examined is relatively small. These findings usefully extend past studies that examined respondents’ demographic or partisan traits, but not switching rates across different issues, different persuasive arguments, survey artifacts, or mode or house effects. Pollsters, their clients, scholars, and analysts alike may want to consider these findings when evaluating the power of counterarguments by the magnitude of switching rates.
Finally, this study has some limitations. All these counterarguments are from an American setting, and all tap attitudes on public policies. None tap switching rates for the personal qualities or policy positions of candidates or ballot issues during election campaigns, for commercial products or services, or for specific political leaders or institutions. Since survey data files were often unavailable, limited, or differently coded, it was not possible to compare the results here to those for respondents' demographics or to test for still other variables that may be linked to switching rates, such as respondents' happy/sad mood or attentiveness, or the communicator's prestige or likeability. Further experimental research can usefully examine these issues.
Acknowledgment
An earlier version of this research was presented at the Western Political Science Association 2018 annual conference where the panel chair, Andrew Flores, provided helpful comments.