Introduction
Survey research has long been concerned with order and framing effects in placing questions on a survey instrument. Randomization is often used to ensure data are unbiased in the aggregate by spreading any bias that stems from question order or framing evenly across a set of questions. This is particularly the case in batteries of questions on a similar topic that might influence each other. Although most researchers focus on the aggregate results, we use a study of LGBTQ rights issues to show in this article that individual randomization patterns reveal important information about how prior questions frame the questions that immediately follow them. An identical question was accidentally placed on the questionnaire twice, in two separate question batteries, and we were surprised to see an approximately 10 percentage point difference in responses to the item between the two batteries.
Specifically, in one battery of questions, all about LGBTQ-specific rights, 38% of respondents said they favored (and 56% opposed) allowing small business owners to refuse service to gay and lesbian people due to the business owner’s religious beliefs. Later in the survey, in a battery of questions asking about refusing service to different groups of people, 30% of respondents said yes, businesses should be able to refuse service due to their religious beliefs (and 67% said no). Since both batteries were randomized, we were able to use the order in which questions were asked of respondents to determine that framing effects operated differently in the two batteries. We believe the discrepancy is primarily due to the framing effect in the second battery, which appears to have prompted respondents to think about the question in terms of discrimination against other marginalized groups such as African Americans, Jews, and Muslims. While the two batteries used different response options, one a four-point favor/oppose scale and the other a binary yes/no choice, these structural differences did not significantly impact the results. We conclude that analyzing randomization patterns for framing effects can provide important insights into substantive issues in survey research.
Movement in Public Opinion on LGBTQ Rights and Service Refusals
Opinion on whether businesses should be able to refuse service to lesbian and gay people on the basis of religious beliefs has shifted over the last few years as the issue has received considerable attention in court cases and in the media. Over the last three years, the Public Religion Research Institute’s (PRRI’s) American Values Atlas has identified a significant decline in opposition to religiously based refusals to serve gay and lesbian people. Opposition rose slightly between 2015 (59%) and 2016 (61%) but has since dipped slightly each year (60% in 2017, 57% in 2018, and 56% in 2019).[1]
The case that most directly relates to the question of religious service refusals (Masterpiece Cakeshop v. Colorado Civil Rights Commission) had a complex trajectory over the period of these surveys and received considerable publicity throughout 2015–2018. In 2013, an administrative law judge ruled against religiously based service refusals in the original case (Craig v. Masterpiece Cakeshop). On appeal, the Colorado appeals court also ruled against religiously based service refusals in August 2015, during the first year PRRI had the question on the American Values Atlas. The Supreme Court granted certiorari in mid-2017. Interestingly, 2016 saw a shift in public opinion in the same direction as the appeals court decision and was the only year in which opposition to religiously based service refusals increased (from 59% in 2015 to 61% in 2016). In 2017 opinion was mostly stable, decreasing one percentage point to 60%.
On June 4, 2018, the Supreme Court handed down its decision, a narrow ruling that sided with Masterpiece Cakeshop on the grounds that the Colorado Civil Rights Commission had failed to act with religious neutrality. Although the Court did not rule on whether such refusals are permissible, that nuance was often lost in coverage emphasizing that the decision favored Masterpiece Cakeshop. In 2018, the PRRI data showed that opposition to religiously based service refusals slid to 57%, and in 2019 it fell a bit further to 56%. It is unknown how much impact the court decisions had on public opinion, or whether opinion change reflected people aligning themselves with the decisions or the decreased salience of the issue after it had been decided, but it is notable that opinion tracked slightly in the direction of the decisions.
However, none of that explains the 8–10 percentage point discrepancy that is evident within the single survey from April 2019.[2] In the American Values Atlas, the service refusals question is asked in a brief three-question favor/oppose battery that also includes questions about same-sex marriage and anti-discrimination laws to protect LGBTQ people. The first battery in which the service refusals question was asked in the April survey consisted of these same three questions, plus four more favor/oppose questions: transgender service in the military; “bathroom bills” requiring transgender people to use the bathroom of their sex assigned at birth; allowing licensed professionals such as doctors, lawyers, teachers, and social workers to refuse to provide services to particular groups of people if doing so violates their religious beliefs; and allowing adoption agencies to refuse to place children with gay and lesbian parents. The results for the small business service refusals question aligned with results for all other weeks the question was asked in the 2019 American Values Atlas; there were no significant differences in responses from week to week throughout the year. Thus, the question remains why responses to the small business service refusals question varied significantly in the second battery of the April 2019 survey, and we look to the survey design and framing effects for the answer.
Framing Effects in Survey Research
How a respondent answers a question is known to vary based on survey design features. Though all respondents provide answers based on their own perspectives and attitudes, the wording of a question and the questions that precede it can play a major role in shaping responses by “framing” how respondents think about the issue (Chong and Druckman 2007; Zaller 1992; Zaller and Feldman 1992). Framing refers to “the process by which people develop a particular conceptualization of an issue or reorient their thinking about an issue” (Chong and Druckman 2007, p. 104). The assumption is that individuals weigh numerous considerations about a specific topic before expressing an opinion. For example, individuals may place high importance on freedom of speech or on public safety, and depending on how a question is asked, one of those considerations will become particularly salient (Chong and Druckman 2007). On this topic, Sniderman and Theriault (2004) found that small changes to the presentation of a question about allowing a hate group to hold a political rally influenced opinions. When the question began with “Given the importance of free speech,” 85% of respondents were in favor of allowing a hate group to hold a political rally, compared to 45% when the question began with “Given the risk of violence.” In short, under framing effects, answers to later questions are influenced by considerations brought to mind by previous questions (Zaller and Feldman 1992).
Public opinion research also shows that framing can shape responses to questions about racial and other types of discrimination. Because of social desirability bias, respondents tend to offer less racist and discriminatory opinions in surveys than they might hold personally (Sudman and Bradburn 1974). Further research shows that presenting condemnations of anti-Black racism before asking respondents their views on such discrimination increases the level of opposition to this form of prejudice (Blanchard et al. 1994; Monteith, Deneen, and Tooman 1996). One experimental study found similar evidence of social desirability in reports of anti-Semitic attitudes: When respondents were led to believe that anti-Semitic attitudes were a group norm, the level of anti-Semitism they expressed increased (Beyer and Krumpal 2010). In short, framing may exacerbate the problems created by social desirability bias (Beyer and Liebe 2015).
Framing Effects in LGBTQ Service Refusal Questions
This study uses a survey with a random sample of 1,100 adults living in the United States, including all 50 states plus the District of Columbia, with an oversample of those living in Texas. The additional Texas interviews are weighted so that the full dataset appropriately represents the U.S. population. Interviews were conducted in both Spanish and English April 9–20, 2019, by telephone to landlines (40%) and cell phones (60%) using professional interviewers.[3]
As mentioned previously, the survey asked the same question twice. It first asked, “Do you strongly favor, favor, oppose or strongly oppose allowing a small business owner in your state to refuse to provide products or services to gay or lesbian people, if doing so violates their religious beliefs?” This question was part of a randomized battery of different questions, all on the topic of LGBTQ issues, and 38% of respondents reported favoring allowing such service refusals. Respondents’ views on religiously based service refusals of gay and lesbian people do not vary much with question order in this battery. Around four in ten Americans favor religiously based service refusals of gay and lesbian customers regardless of whether the item itself is asked first (42%) or the battery opens with the question on same-sex marriage (40% favor refusals), nondiscrimination protections (40%), transgender service in the military (42%), service refusals of gay and lesbian people by licensed professionals (37%), or allowing adoption agencies to refuse to consider gay and lesbian parents (40%). Only one question has a significant effect: When the question about bathroom bills is asked first, about three in ten (27%) say they favor religiously based service refusals.[4]
Later in the survey, respondents were asked the exact same question about religiously based service refusals, this time with response options of yes or no.[5] This battery asked the same question about different groups, including African Americans, atheists, Jewish people, Muslims, and transgender people, in random order. This time, 30% of respondents said yes to the question regarding gay or lesbian people, significantly lower than the 38% who indicated favoring such refusals in the prior battery. However, when the question about refusals of gay and lesbian people was asked first, four in ten (40%) respondents said yes, businesses should be able to refuse service on religious grounds, which is statistically indistinguishable from the 42% who favor religiously based service refusals when it is asked first in the first battery. The topline difference is a result of support for refusing service to gay and lesbian people dropping substantially when respondents are first asked about refusing service to African Americans (31%), transgender people (29%), Muslims (24%), Jews (28%), or atheists (25%).[6] Given the discrepancy in the responses, clearly some respondents are answering the same question differently depending on its placement in the battery. The next section analyzes what explains those different views: Is it the question order in itself, or respondent characteristics?
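To make the tabulation behind these order-conditional figures concrete, the following is a minimal sketch in Python, not the authors’ code; the column names (“first_item” for which question the randomization placed first, “favor_refusal” for the dichotomized response) and the toy data are hypothetical.

```python
# Hedged sketch: tabulating support for service refusals by which item the
# randomization placed first in a battery. Names and data are illustrative only.
import pandas as pd

def support_by_first_item(df: pd.DataFrame) -> pd.Series:
    """Percent favoring refusals, grouped by the battery's first-asked item."""
    return df.groupby("first_item")["favor_refusal"].mean().mul(100).round(1)

# Toy data for illustration (not the survey responses).
toy = pd.DataFrame({
    "first_item": ["refusals", "marriage", "bathroom", "refusals",
                   "bathroom", "marriage", "military", "military"],
    "favor_refusal": [1, 0, 0, 1, 0, 1, 1, 0],
})
print(support_by_first_item(toy))
```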
Why Do Respondents Answer Differently?
When comparing responses to the two questions, we find that the majority of Americans respond consistently. Most Americans (70%) offer consistent responses: 21% consistently favor allowing businesses to refuse services to gay and lesbian people, while 49% consistently oppose this policy. However, nearly three in ten (29%) responded inconsistently to the two variations of the same question. We posit that these inconsistent views are also a result of the order in which the questions were asked and the theme as framed by previous questions.
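This consistency classification can be sketched as a simple cross-tabulation of the two dichotomized items. The column names (“favor_b1”, “yes_b2”) and the toy data below are hypothetical stand-ins for the survey variables.

```python
# Hedged sketch of the consistency coding: 1 = inconsistent across the two
# batteries, 0 = consistent. Column names and data are illustrative only.
import pandas as pd

toy = pd.DataFrame({
    "favor_b1": [1, 1, 0, 0, 1, 0],  # 1 = favor/strongly favor in battery 1
    "yes_b2":   [1, 0, 0, 1, 1, 0],  # 1 = yes in battery 2
})
toy["inconsistent"] = (toy["favor_b1"] != toy["yes_b2"]).astype(int)

# Shares of consistent favor (1,1), consistent oppose (0,0), and the
# two inconsistent cells (1,0) and (0,1).
print(pd.crosstab(toy["favor_b1"], toy["yes_b2"], normalize=True))
```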
To test our hypotheses, we estimate a logistic regression model (Table 3 in the Appendix) in which the dependent variable captures inconsistent responses to the first and second questions. We convert the first question into a dichotomous variable, where respondents who strongly favor or favor service refusals of gay and lesbian people are coded as 1 and those who strongly oppose or oppose are coded as 0. We then compare these dichotomized responses with the “yes” and “no” answers to the second question: Respondents whose answers differ between the two questions are coded as 1 (“inconsistent views”) and those whose answers match are coded as 0 (“consistent views”). In addition, two key independent variables are included to measure framing effects. The first captures the placement of the lesbian and gay religiously based service refusal question in the first battery: 1 indicating “question heard first,” 2 “question heard second,” 3 “question heard third,” and so forth. We follow the same procedure for the lesbian and gay religiously based service refusal question in the second battery to create the second independent variable of interest. We control for personal demographic and socioeconomic characteristics such as gender, age, race, education, region of residence, and religious tradition, as well as political party affiliation and ideology. Importantly, none of these characteristics is a significant predictor of inconsistent views, suggesting that inconsistent views are more related to the ordering of the questions than to respondents’ individual characteristics.
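For concreteness, the following sketch shows how such a specification might look in statsmodels, fit on simulated rather than survey data; all variable names are hypothetical, and the control set is abbreviated relative to the paper’s full specification.

```python
# Hedged sketch of the logistic regression described above, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1100
toy = pd.DataFrame({
    "inconsistent": rng.integers(0, 2, n),  # DV from the coding sketched above
    "place_b1": rng.integers(1, 8, n),      # item position 1-7 in first battery
    "place_b2": rng.integers(1, 7, n),      # item position 1-6 in second battery
    "age": rng.integers(18, 90, n),
    "female": rng.integers(0, 2, n),        # stand-in for the demographic controls
})

# C() treats placement as categorical, with position 1 (asked first) as reference.
model = smf.logit("inconsistent ~ C(place_b1) + C(place_b2) + age + female",
                  data=toy).fit()
print(model.summary())
```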
The first rows of the model in Table 3 in the Appendix show that the ordering and placement of questions influence inconsistent responses. To gain a better understanding of these results, Figure 1 shows the mean predicted probability of holding inconsistent views. That probability decreased by .16 when the question moved from being placed first (.41) in the first battery to being placed last (.25), suggesting that respondents may be unfamiliar with the types of questions being asked at the beginning of a battery and become more consistent in their responses as they grow more familiar. The results are statistically significant at p < 0.05.
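Continuing the simulated sketch above (and reusing its `model` and `toy` objects), mean predicted probabilities of this kind can be computed by setting every respondent’s first-battery placement to each position in turn and averaging the model’s predictions.

```python
# Hedged sketch: average predicted probability of an inconsistent response at
# each first-battery placement, holding the other covariates at observed values.
import pandas as pd

def mean_predicted_by_placement(model, df, positions=range(1, 8)):
    return pd.Series(
        {p: model.predict(df.assign(place_b1=p)).mean() for p in positions}
    ).round(2)

print(mean_predicted_by_placement(model, toy))
```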
Interestingly, when we evaluate our second independent variable, related to the second set of questions, the results in Table 3 in the Appendix show only one statistically significant effect: When the question about refusing service to lesbian and gay people is placed second, respondents are less likely to exhibit inconsistent views than when the item is first. The other placements of the question in the second battery did not yield statistically significant results. Because we had hypothesized that the second battery associated refusals to serve lesbian and gay people with civil rights issues, due to the inclusion of other marginalized groups such as African Americans and Jews, this lack of significance was somewhat surprising. However, the dependent variable includes inconsistent views in both directions (moving from support to oppose and from oppose to support between the two batteries). To determine whether there is a directional shift, we analyze the direction of inconsistent views in Table 1.[7] Again, none of the respondent characteristic variables matters. We do, however, find that in almost all cases, if the lesbian and gay item appears anywhere except first, respondents are significantly less likely to move from oppose to support than when the item appears first. That is consistent with the idea that a framing effect is occurring in the second battery.
The results for moving from support to oppose on refusing service to lesbian and gay people in the second battery are more mixed, but all point in the expected directions. If the service refusals item appears anywhere except first in the first battery, respondents are less likely to move from support to oppose. If the lesbian and gay group is fourth in the second battery, respondents are more likely to move from support to oppose. Although the other order positions in the second battery are not statistically significant, if the placement variable is treated as a continuous ordering from 1 to 6 (rather than as categorical, as it is here), there is a statistically significant pattern of becoming less likely to move from support to oppose as the lesbian and gay group is asked later in the battery.
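The continuous-versus-categorical distinction here amounts to two codings of the same placement variable. A self-contained sketch on simulated data follows; the directional dependent variable `support_to_oppose` and all other names are hypothetical.

```python
# Hedged sketch comparing the two codings of placement in the second battery.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1100
toy = pd.DataFrame({
    "support_to_oppose": rng.integers(0, 2, n),  # toy directional DV
    "place_b2": rng.integers(1, 7, n),           # item position 1-6
})

# Continuous coding: a single slope tests for a monotone trend across positions.
trend = smf.logit("support_to_oppose ~ place_b2", data=toy).fit()
# Categorical coding: a separate coefficient for each position after the first.
categorical = smf.logit("support_to_oppose ~ C(place_b2)", data=toy).fit()

print(trend.params["place_b2"])
print(categorical.params.filter(like="place_b2"))
```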
Conclusion
The evidence presented here shows that changes in how respondents answered these two nearly identical questions were likely the result of the question order and not differences in response options or respondent demographics. Religiously based service refusals of gay and lesbian people may be more likely to be evaluated within the frame of LGBTQ rights or religious liberty, but when refusals of gay and lesbian people are equated with refusing service to other minority groups, respondents might be more likely to view religiously based service refusals of gay and lesbian people as discriminatory in nature.
The question, then, becomes: What is the most realistic estimate of public opinion on this topic? Or, more specifically, do we think 38% or 30% is the better estimate of the proportion of Americans who support religiously based refusals to serve gay and lesbian people? Clearly, this is a question for which context matters, at least for a portion of Americans. Whether the issue is one of civil rights, business rights, or religious liberty is part of the contested political terrain. We suggest that the nearly identical estimates obtained when the question is asked first, before any framing by other questions, might be the best unbiased estimate, since we have demonstrated that order changes the results. However, there is a powerful messaging finding here: Framing religiously based service refusals alongside other marginalized groups, rather than isolating them as an LGBTQ issue, is likely to generate more opposition to such appeals based on religious liberty claims.
More broadly, these findings provide an example of the type of experimental evidence buried within every randomization that survey researchers build into their instruments. Randomization ensures that any biases are averaged out in the topline, but the effects of the surrounding questions are still present in the individual responses. Digging into these patterns can teach us valuable lessons about survey design, messaging, and how respondents answer questions.
Corresponding author
Natalie Jackson, PRRI, njackson@prri.org, 202-238-9424
[1] These differences are statistically significant at the p < 0.05 level due to large sample sizes. The PRRI American Values Atlas for 2015–2019 consists of at least 40,000 cases per year. For more information, see http://ava.prri.org/
[2] The April 2019 survey is one week of the American Values Atlas but contained many more questions than the overall 40-week project.
[3] For a more detailed description of the sampling strategy and survey items, please see Appendices B and C.
[4] This conclusion is confirmed in a logistic regression model (Table 1 in the Appendix).
[5] The two question batteries we use in this paper also differed in the number of options in the response scale. The first used a four-point favor/oppose scale measuring intensity, while the second used a simple yes/no dichotomy. However, there is little reason to think this alone caused the discrepancy. Various studies offer different recommendations on the optimal number of response options, with some criticizing the use of a two-point agree/disagree approach as failing to capture intensity (DeVellis 2003; Garner 1960). Some studies recommend a five- or seven-point scale as optimal (Dawes 2008; Dolnicar and Leisch 2012; Revilla, Saris, and Krosnick 2013). Despite these disagreements over the optimal number of options, studies mostly agree that providing more intensity levels does not substantively alter the direction of answers to a question.
[6] Again, these results are confirmed in a logistic regression model (Table 2 in the Appendix), except for the effect of African Americans, which is not statistically significant.
[7] The full model is presented in Appendix Table 4.