The 2013 Rutgers-Abt SRBI New Jersey Hurricane Impact Survey collected data from 1,751 telephone respondents (1,138 landline and 613 cell phone) via a random-digit-dial probability sample of non-institutionalized adults living in New Jersey. The survey was fielded from February 15 to March 27, 2013, about three-and-a-half months after Superstorm Sandy hit New Jersey, and was administered in English and Spanish (the latter accounting for 3.5 percent of all interviews). The sampling design was a modified split ballot: all respondents went through a screening and preliminary set of questions, followed by a randomized sample split for the bulk of the questionnaire content; near the conclusion of the survey, the sample was rejoined for the demography section.
We observed a clear inadvertent priming effect in the distribution of party identification based on which version of the questionnaire the respondent received. This was true whether we assessed major party identification only or – since New Jersey permits registration as “Unaffiliated” – in various configurations including self-reported “Independent” on its own as well as in combination with the voluntary answer “no preference.” The practical lesson is that priming-effect analyses should be conducted under any split-sample design. The remainder of this note demonstrates a simple and effective approach to conducting that analysis, as well as to treating the priming effect as an inadvertent survey experiment.
Priming effects are generated when prior items in a questionnaire affect the answer value distribution of subsequent items (Van de Walle and Van Ryzin 2011; citing Tourangeau, Rips, and Rasinski 2000). More broadly, something about questionnaire design “makes a concept more likely to come to mind and therefore increases the impact of that concept on other cognitive processes” (Cassino and Erisen 2010, 375; for a more extensive discussion of priming generally, see Weiner, MacKinnon, and Greenberg 2013, 192–193).
Our questionnaire was split such that one-half of the sample received an “experiential” version collecting data focusing on the respondent’s personal and household experiences during and immediately following the storm. The other half received an “economic” version focusing on the budgetary and regulatory aspects of New Jersey’s recovery from the Superstorm’s extensive damage to residences, commercial buildings, and public infrastructure.
The “experiential” version activated recollections of the storm (and, if applicable, evacuation) experience, including concerns for the safety of life and property. The “economic” version activated current assessments and future projections of taxation, regulation, and limits on development, specifically invoking government as a stakeholder and actor. We did not, however, anticipate that these primes would affect respondents’ self-reports of their party identification.
The party identification question we used was the most basic National Election Study format: “Generally speaking, do you usually think of yourself as a Republican, a Democrat, an Independent, or what?” (American National Election Studies 2010). Precodes included Republican, Democrat, and Independent, while voluntary responses of “Tea Party,” “Green Party,” “other/specify,” “no preference,” “don’t know,” and “refused” were permitted. In and around 2013, the distribution of party identification in New Jersey was approximately 34 percent Republican and 46 percent Democrat, including leaners (Gallup 2014).
From the full sample of 1,751, we first analyzed the 939 respondents who reported identification with one of the two major parties. Since the split sampling was randomly assigned, the distribution of Republican and Democratic Party identification across the by-version subsamples should have been statistically equivalent and, generally speaking, should have aligned with the Gallup estimates. Table 1, however, tells a different story. Fifty-five percent of respondents exposed to the economic version of the questionnaire identified as Republicans, 21 percentage points higher than the Gallup statewide estimate and nearly 10 percentage points (or 22 percent) greater than those exposed to the experiential questionnaire.
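The percentage-point versus percent comparisons above can be checked with simple arithmetic. A minimal sketch (the 55 and 34 percent figures come from the text; the roughly 45 percent experiential-version figure is implied by the reported gaps, and the variable names are ours):

```python
# Figures as reported in the text, expressed as proportions.
econ_gop = 0.55    # GOP identification under the economic version
gallup_gop = 0.34  # Gallup statewide estimate
exper_gop = 0.45   # approximate GOP share under the experiential version

# "21 percentage points higher than the Gallup statewide estimate"
point_gap_vs_gallup = econ_gop - gallup_gop  # 0.21

# "nearly 10 percentage points (or 22 percent) greater" than the
# experiential-version figure: percentage points subtract shares,
# percent divides the gap by the comparison share.
point_gap_vs_exper = econ_gop - exper_gop               # 0.10
relative_gap_vs_exper = point_gap_vs_exper / exper_gop  # ~0.22
```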
The Democrats’ row in Table 1 is even more interesting: Respondents exposed to the economic issues version of the questionnaire self-reported Democratic Party identification at nearly spot-on the Gallup statewide estimate of 46 percent. The inference, of course, is that the Gallup measure is likely a solid indicator of Democratic Party identification, i.e., that it is of sufficient permanence to not respond to the priming effect of the economic issues version of the survey. It seems, in New Jersey at least, that self-reported Republican Party identification is either more malleable generally, or more subject to activation upon priming.
However, when exposed to the adverse-weather experiential version of the questionnaire, respondents self-reported as Democrats at a rate nearly 9 percentage points higher than they did after exposure to the economic questionnaire, or about 17 percent greater than the Gallup estimate. On the face of it, then, it seems that the degree of self-reported Republican Party identification in New Jersey can be artificially inflated via priming through exposure to economic and regulatory concerns, which invoke the power of the state. The companion self-reported Democratic Party identification is less subject to being diluted or strengthened through such priming, and is less related to the presence or absence of the state as an actor in the context of the Superstorm Sandy experience. Democratic Party identification is, however, apparently subject to empathetic priming, here through probes that explore the suffering of those personally and directly affected by Superstorm Sandy.
We then ran the same analysis, this time including all three political identification precodes, i.e., “Independent” in addition to “Republican” and “Democrat.” New Jersey is unusual in permitting registration as “Unaffiliated,” which we take as the functional equivalent of political party independence. This opportunity for the rejection of partisanship, while remaining in the partisan game, generates a statewide political culture under which the non-aligned (i.e., non-identifiers) are treated as identifiers of a sort when assessing electoral outcomes.
With the functional equivalent of registered Independents included, the number of cases under analysis increased by 576, to 1,515, while we still operated under the theoretical expectation that the distribution of party identification and registered-independent status should be random across all three groups. Table 2 shows the added results for Independents, for whom the questionnaire versions differ by approximately 6 percentage points (i.e., 13 percent).
We assessed one last crosstabulation, here including as a substantive category 119 respondents who reported “no preference” in response to the party identification probe. Adding this group in brought the number of cases under analysis up to 1,634; these expanded results are shown in Table 3.
In light of this now-full picture, apparently natural groupings emerge: (1) those with “no preference” seem to respond to the experiential priming much as Democrats do, while (2) Independents seem to run in closer parallel to Republicans. This seemed, to us, to be something worth investigating.
An Inadvertent, If Not Natural, Survey Experiment
We were interested in learning whether respondents primed by a questionnaire exploring economic and regulatory issues were more likely to self-report as Republicans and, if so, which respondents were most susceptible to the prime and in which direction. On the face of it, it seems they are drawn from the pool that would otherwise have self-reported as “Independent.” The same pattern holds for Democrats, except that there the source of the inflation is those who, it seems, would otherwise have responded “no preference.” To better explore this effect of economic-issue priming on self-reported partisan affiliation, we constructed a single multinomial logit model. The predictors are (1) the version of the questionnaire to which the respondent was exposed; (2) education; and (3) age; the last two entered as interval-level variables.
The model was run twice, first with the three categories Republican, Democrat, and “no preference” compared against the base category “Independent,” then with Republican, Democrat, and Independent compared against the base category “no preference.” By examining these outcomes – where only the comparison base outcome varied between the model runs – we gain insight into the direct effect of the sole essential independent variable, i.e., economic-issue priming, on self-reported party identification.
In particular, we show how such economic-issue priming enhances the likelihood that an otherwise self-reported Independent will report Republican Party identification and, in turn, inflates the reported proportion of Republican Party identifiers in the sample. We report the substantive outcomes in terms of relative risk ratios, which give the risk of occurrence of a characteristic in the group of interest relative to the base group.
As shown in Table 4, the multinomial logit model performs as expected: When compared to Independents, Democrats are about 28 percent less likely to be affected by economic priming (RRR=0.72, p=0.005). Under the same comparison, Republicans are unaffected (RRR=1.07, p=0.631), indicating a similarity in response to Independents.
When compared to those with no preference, Republicans are about 47 percent more likely to be affected by economic priming (RRR=1.47, p=0.077), while Democrats are unaffected (RRR=0.99, p=0.953), indicating a similarity in response to the group with no preference.
Moreover, we conducted a postestimation likelihood ratio test comparing the full model, with the variable controlling for questionnaire version included, against the nested model omitting that variable. The outcome of that test [LR χ2(3)=12.32; p=0.0064] rejects the null hypothesis that questionnaire version (here, with economic issues) does not affect the self-reporting of partisan identification.
Finally, we note two limitations of this research. First, our measure of party identification is limited. Given the circumstances in which we found ourselves, we suspect we could have learned more with a five- or seven-point scale to capture party identification. Second, our multinomial model is by no means a full exploration of the determinants of party identification. Rather, it is intended solely to confirm the suspected priming effect and, for that reason, only the most basic additional predictors – education and age – were included.
The implications for the interpretation of split-sample surveys, especially those with an economic questionnaire version, are clear: Because the demographic component of a survey typically comes at the end, survey researchers should be cautious about the impact the preceding content will have on self-reports of party identification. In interpreting such findings, researchers should also recognize that inferences about the impact of priming on self-reported party identification must be considered in light of state election law. New Jersey’s unique allowance that “independents” can register as “Unaffiliated” voters may, in the survey setting, generate inconsistent de jure and de facto understandings of the word “independent.”
Grateful acknowledgement for direct and indirect funding goes to James W. Hughes, Dean of the Edward J. Bloustein School of Planning and Public Policy and Robert M. Goodman, Dean of the School of Biological and Environmental Sciences, both of Rutgers, The State University of New Jersey; the Rockefeller Brothers Fund; the Mineta National Transit Research Consortium, administered by Robert B. Noland of the Alan M. Voorhees Transportation Center at the Edward J. Bloustein School of Planning and Public Policy at Rutgers University; and Abt SRBI, administered by Mark Schulman and Chintan Turakhia. M. Patrick Simon and K. T. G. Weiner-Simon provided ongoing support and encouragement.
Calculated under the American Association for Public Opinion Research (AAPOR) RR3 approach, response rates were 18.6 percent for the landline sample and 17.0 percent for the cell sample, which yields a combined overall response rate of 17.8 percent. Cooperation rates, using the AAPOR COOP3 approach, were calculated at 35.9 percent for the landline sample and 33.2 percent for the cell phone sample, yielding a combined overall cooperation rate of 34.6 percent (American Association for Public Opinion Research 2015, 53–54).
Relative risk ratios are more intuitive than odds ratios. Under an odds ratio, an analyst has to comprehend and explain what it means for one group to be more or less likely than another group to show a characteristic. Under relative risk ratios, one group is fixed as the comparison point. For an extensive discussion of the proper interpretation of these metrics, see Weiner et al. 2012, 189, footnotes 10 and 11.
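The distinction can be made concrete with hypothetical counts (ours, not from the survey): where 30 of 100 primed respondents and 20 of 100 unprimed respondents show a characteristic, the two measures diverge noticeably.

```python
# Hypothetical 2x2 table: rows are primed/unprimed, columns yes/no.
primed_yes, primed_no = 30, 70
unprimed_yes, unprimed_no = 20, 80

# Relative risk compares probabilities directly: "primed respondents are
# 1.5 times as likely to show the characteristic."
risk_primed = primed_yes / (primed_yes + primed_no)          # 0.30
risk_unprimed = unprimed_yes / (unprimed_yes + unprimed_no)  # 0.20
relative_risk = risk_primed / risk_unprimed                  # 1.5

# The odds ratio compares odds, p / (1 - p), which is harder to narrate.
odds_ratio = (primed_yes / primed_no) / (unprimed_yes / unprimed_no)  # ~1.71
```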