Introduction
Response order effects are changes in answers to closed-ended questions that arise from varying the order of the response options (Krosnick and Alwin 1987). Two types of response order effects have been documented in previous studies: primacy effects and recency effects (Bishop and Smith 2001; Holbrook et al. 2007; Knauper 1999). Primacy effects, in which response options presented earlier in a list are selected more often than those presented later, have typically been observed in paper-and-pencil self-administered surveys (Krosnick and Alwin 1987). Recency effects, in which response options presented later in a list are selected more often, occur primarily in surveys where questions are presented orally (Krosnick and Alwin 1987). Response order effects have been documented with adult respondents, but few studies have examined these effects with children or adolescents (Fuchs 2005). To our knowledge, no studies of response order effects have been conducted with a tobacco-specific survey.
Questionnaires often include non-substantive response options, such as “no opinion” or “don’t know,” so that respondents who hold no true opinion can select them. Respondents with lower education are more likely to be attracted to the “no opinion” option (Krosnick et al. 2002). As a result, researchers are advised to minimize the use of non-substantive response categories in children’s surveys (Bell 2007). However, when non-substantive response options cannot be avoided, no research has shown whether the order of these options affects responses.
We assess whether primacy and/or recency effects occur in a self-administered tobacco-related questionnaire among youth and determine the effects of reversing the order of response options and, specifically, of changing the position of a non-applicable response category.
This study is one of very few to examine response order effects on children’s responses, and it is the first to investigate the impact of moving a non-applicable response category from the first to the last position. To the best of our knowledge, it is also the first to document the presence of response order effects in a tobacco survey and to differentiate the effects for tobacco users versus nonusers.
Methods
Study population and sampling methods
The Youth Tobacco Survey (YTS) is conducted as a collaboration between the 50 states and the Centers for Disease Control and Prevention’s Office on Smoking and Health. The YTS is administered to students in grades 6 through 12, provides insight into the effectiveness of state tobacco control programs, and measures the influence of pro-tobacco marketing and advertising on young people. The analysis combined YTS split-ballot surveys in Virginia and Mississippi. The Virginia survey was conducted between October 2007 and March 2008, and the Mississippi survey between January and August 2008. Regular public school students in 6th through 12th grade were eligible in Mississippi; regular, alternative, or charter public school students were eligible in Virginia.[1]
Two-stage sample selection was used. In the first stage, schools were selected with probability proportional to enrollment. In Virginia, 50 high schools and 50 middle schools were sampled, and in Mississippi, 60 high schools and 60 middle schools were sampled. In the second stage, up to five classes in each school were selected, and the selected classes were randomly assigned to receive either the standard or test questionnaire.
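As a rough illustration of this two-stage design, the sketch below (in Python) selects schools with probability proportional to enrollment and then randomly assigns a handful of classes in each selected school to the standard or test questionnaire. The enrollment counts, class counts, and the use of numpy’s weighted sampling are illustrative assumptions, not the actual YTS selection procedure.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical enrollment counts for a small frame of schools (illustrative only).
enrollments = np.array([450, 820, 1200, 310, 975, 660, 540, 1100])
n_schools = 3  # number of schools to draw in the first stage

# First stage: draw schools with selection probability proportional to enrollment.
# Weighted sampling without replacement is a simplified stand-in for a formal
# PPS-without-replacement design.
probs = enrollments / enrollments.sum()
selected = rng.choice(len(enrollments), size=n_schools, replace=False, p=probs)

# Second stage: within each selected school, sample up to five classes and
# randomly assign each selected class to the standard or test questionnaire.
for school in selected:
    n_classes = int(rng.integers(3, 9))  # hypothetical number of classes in the school
    classes = rng.choice(n_classes, size=min(5, n_classes), replace=False)
    versions = rng.choice(["standard", "test"], size=len(classes))
    print(f"School {school}: classes {classes.tolist()} -> {versions.tolist()}")
```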
All students in each selected class were eligible. From both states, 15,008 students were sampled, and 11,521 students participated. Forty-nine percent of the students were female, and 51 percent were male. The percentages of students aged 11 years or younger, 12, 13, 14, 15, 16, 17, and 18 years or older were 7.4 percent, 13.6 percent, 14.4 percent, 15.9 percent, 15.2 percent, 13.7 percent, 14.8 percent, and 4.8 percent, respectively. Forty-three percent of students were in middle school. White and black students made up 55.4 percent and 33.3 percent of the population, respectively; 5.7 percent were Hispanic, and 5.6 percent were from other racial or ethnic groups.
School response rates were calculated by dividing the number of participating schools by the number of selected schools. Student response rates were calculated by dividing the number of participating students by the number of eligible students. The overall response rate is the product of these two rates. Overall response rates for the standard version were higher than for the test version in both high schools and middle schools in Virginia and Mississippi. The overall response rates for the standard vs. test versions were 42.9 percent vs. 36.7 percent in Virginia high schools, 70.8 percent vs. 68.2 percent in Virginia middle schools, 59.0 percent vs. 57.6 percent in Mississippi high schools, and 65.3 percent vs. 61.8 percent in Mississippi middle schools.
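To make the arithmetic concrete, the snippet below multiplies a school response rate by a student response rate; the participation counts are hypothetical and are chosen only so that the product matches the 42.9 percent figure reported for Virginia high schools (standard version).

```python
# Hypothetical counts; only the product formula reflects the method described above.
participating_schools, selected_schools = 33, 50
participating_students, eligible_students = 1_300, 2_000

school_rate = participating_schools / selected_schools       # 0.66
student_rate = participating_students / eligible_students    # 0.65
overall_rate = school_rate * student_rate                    # 0.429

print(f"Overall response rate: {overall_rate:.1%}")  # -> 42.9%
```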
Types of tests
Within each state, the middle and high school questionnaires were identical. The standard questionnaire contained 81 questions and the test questionnaire 82 questions. The additional question in the test version resulted from splitting a question about race into two questions, one of which asked specifically about Hispanic ethnicity. Thirty-one questions involved primacy and recency tests.[2] We grouped these 31 questions into two categories based on the type of change to the response options:
- Order-only tests:
In nine questions, we reversed the order of the response options in the test version. Two questions had three response options, six had four, and one had five. Because we were testing for primacy or recency effects in these questions, we report changes in the distribution of responses that resulted from moving a response option from the first to the last position.
- Non-applicable “NA” response option order tests:
The YTS questionnaire does not allow skip patterns. Non-tobacco users are required to answer questions about tobacco use, so a non-applicable (NA) response category was used to identify non-tobacco users. It is also used to identify other respondents for whom a question is non-applicable (such as young people who have not used the Internet but who must answer a question about Internet use). In 18 questions, we compared a standard version in which the NA category was listed first with a test version in which it was listed last. In the remaining four questions, the NA category was listed last in the standard version and first in the test version.
Statistical methods
We calculated response estimates for each question, adjusting for sample design effects. Raw percentage differences were calculated by subtracting standard-version percentages from test-version percentages for each response category, and we used these percentage differences to determine primacy or recency effects. For example, in questions where a response option was first in the standard version and last in the test version, a negative percentage difference indicated a primacy effect, while a positive difference indicated a recency effect. Rao-Scott and Wald chi-square tests were used to test statistical significance (_p_ ≤ 0.05). To account for multiple comparisons, we adjusted alpha levels using Bonferroni criteria (Sedgwick 2012).
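The sketch below illustrates the significance-testing logic on hypothetical counts for a single question: a plain Pearson chi-square from scipy stands in for the design-adjusted Rao-Scott and Wald tests (which additionally account for the two-stage cluster sample), and the Bonferroni criterion is applied by comparing the p-value against alpha divided by the number of comparisons.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for one question: rows are questionnaire versions,
# columns are response categories (option A listed first in the standard
# version and last in the test version).
table = [
    [320, 210, 150, 90],   # standard version
    [265, 225, 160, 120],  # test version
]

chi2, p_value, dof, expected = chi2_contingency(table)

# Bonferroni adjustment: with 31 comparisons, each test is judged against
# alpha / 31 rather than the nominal 0.05 level.
n_tests = 31
alpha = 0.05
significant = p_value <= alpha / n_tests
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, significant after Bonferroni: {significant}")
```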
For each question, we used generalized multinomial or logistic regression models to determine whether primacy or recency effects existed after controlling for covariates. The dependent variable in each model was the students’ responses to the question, and the exposure was the questionnaire version (standard or test). We controlled for age; school level (middle, high school); state (Mississippi, Virginia); and whether the student had been taught about tobacco in school, because some of the questions we analyzed asked about students’ beliefs and attitudes toward tobacco. In four questions, changes were made between the standard and test versions only in Mississippi, and in one other question, changes were made only in Virginia; for these five questions, the models included data only from the state where the changes were made. Predicted marginals were calculated for the questionnaire version variable (predicted marginals estimate the percentage of respondents for a selected group if everyone in the sample had been in that group). We calculated percentage differences of the predicted marginals by subtracting the standard-version predicted marginals from the test-version predicted marginals.
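As a sketch of the predicted-marginals idea, the example below fits a multinomial logit with statsmodels on simulated data, predicts each student’s category probabilities as if everyone had received the standard version and again as if everyone had received the test version, and takes the difference of the averaged probabilities. The variable names and the simulated data are assumptions, and the sketch ignores the sample design adjustments used in the actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated analysis dataset: one row per student (column names are illustrative).
rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "response": rng.integers(0, 4, size=n),       # 4 response categories
    "test_version": rng.integers(0, 2, size=n),   # 0 = standard, 1 = test
    "age": rng.integers(11, 19, size=n),
    "high_school": rng.integers(0, 2, size=n),
    "virginia": rng.integers(0, 2, size=n),
    "taught_about_tobacco": rng.integers(0, 2, size=n),
})

covariates = ["test_version", "age", "high_school", "virginia", "taught_about_tobacco"]
X = sm.add_constant(df[covariates])
model = sm.MNLogit(df["response"], X).fit(disp=False)

# Predicted marginals: average predicted category probabilities with the whole
# sample set to a given questionnaire version.
marginals = {}
for version in (0, 1):
    X_v = X.copy()
    X_v["test_version"] = version
    marginals[version] = np.asarray(model.predict(X_v)).mean(axis=0)

# Percentage-point differences (test minus standard) for each response category.
print(((marginals[1] - marginals[0]) * 100).round(2))
```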
Most of the questions containing NA tests had small sample sizes in many of the response categories. In those questions, the response options were collapsed, and the NA category was compared with all the other response options combined. In 12 NA questions about tobacco use, separate analyses were done for tobacco users and non-tobacco users. For questions with a not-applicable category of “I did not smoke cigarettes during the past 30 days,” we analyzed those who had previously answered that they had smoked cigarettes during the past 30 days (tobacco users) separately from those who answered that they had not, and we did the same for the questions on cigar and smokeless tobacco use. We did not perform adjusted tests in the analyses of tobacco and non-tobacco users’ responses because of the insufficient sample sizes that resulted from stratifying by tobacco use.
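A minimal sketch of this collapsing-and-stratifying step, on hypothetical data with made-up column names, might look like the following: the NA category is compared against all substantive options combined, and tobacco users and non-tobacco users are tabulated separately.

```python
import pandas as pd

# Hypothetical rows for one NA-test question (values and column names are illustrative).
df = pd.DataFrame({
    "smoked_past_30_days": ["yes", "no", "no", "yes", "no", "no"],
    "brand_question": ["Brand A", "NA", "NA", "Brand B", "Brand C", "NA"],
})

# Collapse substantive options: compare the NA category against everything else.
df["collapsed"] = df["brand_question"].where(df["brand_question"] == "NA", "substantive")

# Stratify: tobacco users are analyzed separately from non-tobacco users.
for group, sub in df.groupby("smoked_past_30_days"):
    label = "tobacco users" if group == "yes" else "non-tobacco users"
    print(label, sub["collapsed"].value_counts(normalize=True).round(2).to_dict())
```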
Results
In the order-only tests, all nine of the unadjusted tests exhibited primacy effects, meaning that in all nine questions respondents were more likely to choose a response option when it was first in the list. However, only three of the nine questions (33 percent) had statistically significant differences in the distribution of responses. The percentage differences for the first response category ranged from 0.8 to 6.9 percentage points; that is, the share of students choosing a response category was up to 6.9 percentage points higher when it was first in the standard version than when it was last in the test version. In the adjusted multinomial models, eight of the nine questions exhibited primacy effects, but only two had statistically significant differences between the standard and test versions. Predicted marginal percentage differences ranged from 1.2 to 7.7 percentage points (Table 1).
In both the unadjusted and adjusted non-applicable response option order tests, 95 percent of the questions (21 of 22) showed primacy effects, and one question showed a recency effect. Three questions had statistically significant primacy effects in both the unadjusted tests and the adjusted models. In the unadjusted tests, shifts in the distribution of responses ranged from 0.4 to 6.3 percentage points, and in the adjusted tests, from 0.9 to 6.2 percentage points (Table 1).
When we analyzed tobacco users’ responses to the 12 “NA” response option order tests, recency effects were present in nine of the questions and primacy effects in three. Six of the 12 questions had significant differences: three of the nine with recency effects and all three with primacy effects. Shifts in the distribution of responses ranged from 0.3 to 12.0 percentage points. When we analyzed non-tobacco users’ responses to these same questions, primacy effects were present in all but one question, which had a recency effect, and 10 of the questions (83 percent) had significant differences (nine primacy effects and one recency effect).
Discussion
Our results support the hypothesis that children are prone to primacy effects in self-administered questionnaires. Questions in which a non-applicable response option is listed first are vulnerable to primacy effects. This finding is analogous to findings among adult respondents, who tend to favor non-substantive options over substantive ones (Krosnick et al. 2002). In our study, students were more likely to select the non-applicable response option when it was first in a list of options rather than last. Although we did not measure the effect of each question’s complexity, complexity may be important; for example, children may be more prone to primacy effects in questions with complex response options. Many of the questions with significant effects had several (up to seven) response options (Table 2), and previous studies have demonstrated that seven or more response options decrease scale reliability, with four response options being the “optimal” number for children and adolescents (Borgers, Hox, and Sikkel 2004). It is therefore possible that the significant differences in these questions reflect the number of response options.
When we analyzed tobacco users’ responses to the NA questions about tobacco use, significant primacy effects were observed for three questions and significant recency effects for three. We were interested in how tobacco users answered these particular questions because a tobacco user choosing the non-applicable category indicates inconsistency in his or her responses across the survey. We also analyzed non-tobacco users’ responses to the same questions and found that non-tobacco users were more prone to primacy effects. The findings for non-tobacco users therefore did not differ much from the overall findings, as expected, since non-tobacco users make up the majority of respondents selecting the NA category. Also, non-tobacco users need to select the NA category for many questions; after doing so repeatedly, some may assume that the survey does not apply to them, making them more likely to satisfice and select the NA option every time, especially when it is conveniently listed first.
Our study had several limitations. Our results were limited to students in just two states, and overall high school response rates in Virginia were low (36.7 percent for the test version). Another limitation is that the standard and test versions were completed by different students rather than by the same students serving as their own comparison group. Also, we made multiple comparisons, and even though we used Bonferroni corrections, it is possible that some significant differences were due to chance.
On the basis of these findings, we believe more research is needed to replicate our results, especially regarding the NA category, where we found a discrepancy between tobacco users and non-tobacco users.
Conclusion
In our YTS split-ballot experiment, we observed primarily primacy effects and a few recency effects (mostly among tobacco users), even after controlling for age, school level, state, and whether the student had been taught about tobacco in school. We also conclude that the position of a non-applicable response option affects responses, especially among non-tobacco users: respondents are more likely to select this option when it is listed first rather than last.
Consistent with previous research, the findings from our study suggest that rotating the order of response options, so that different respondents receive different orders, may be beneficial for tobacco-related youth surveys. This approach is viable for electronic data collection as well as for a paper-and-pencil questionnaire such as the YTS, where researchers could provide two versions of the questionnaire in each school. To date, the majority of survey organizations do not rotate the order of response options (Holbrook et al. 2007).
Also consistent with general survey design principles, it is important to consider using skip patterns instead of a “not applicable” response option (Fisher 2000). Where skip patterns cannot be used, “not applicable” is best positioned as the last response option rather than the first, to ensure that a greater percentage of respondents select substantive options.
Author Note
The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Acknowledgements
Jennifer Bombard, Peter Mariolis, Valerie J. Rock, Heather Ryan, Stacy Thorne
Supplemental Material
[1] The requirements for determining eligible schools, classes, and students are stated in The Youth Tobacco Survey Handbook. From the beginning, it has been standard practice to treat all partial interviews as completes. Denial of parental permission, absence from school, and refusal to participate constitute student nonresponse. Refusal is the only reason for school nonresponse. Schools that had closed since the beginning of the survey were considered ineligible.
[2] Not all 31 questions were asked in both states.