Survey mode has been associated with differing reports of smoking behavior among youth: household telephone surveys generally yield lower estimates of youth smoking rates than school-based surveys. Researchers assume the lower telephone estimates reflect underreporting driven by youths’ concerns about parents or others overhearing their responses. School surveys lessen concerns about parents overhearing youths’ responses but exclude youth who have dropped out of school and underrepresent those who attend infrequently. As a result, the best methods for accurately measuring youth smoking behavior continue to be investigated (Fowler and Stringfellow 2001; Gfroerer, Wright, and Kopstein 1997).
For household telephone surveys, using interactive voice response (IVR) to allow youth to self-report has been shown to increase youth reports of smoking compared to interviewer administration (Currivan et al. 2004; Moskowitz 2004). Nevertheless, this research shows that a significant gap remains between youth smoking estimates from IVR household surveys and school surveys for the same population (Currivan et al. 2004).
The observed differences in estimates of youth smoking between household telephone surveys and school surveys raise the question of whether disclosure risk influences how youth answer smoking questions. If youth respondents are concerned about disclosure risk, can we manipulate the household telephone survey protocol through IVR to influence how youth think about the potential audience for their responses to smoking questions? The standard adult female voice used in many IVR applications may encourage youth to think about the risk of disclosure to adults and therefore discourage reporting smoking behavior or intentions. A youth voice, however, may encourage respondents to think about disclosure to an audience of their peers and perhaps lead to increased reporting of smoking behavior or intentions. If youth respondents are not sufficiently concerned about disclosure, or are not influenced by voice type, no differences in reporting might be observed. In this paper, we present the results of an IVR experiment in which youth respondents were randomly assigned to an adult or a youth female voice to assess whether their reports of smoking behavior varied by voice type.
Background and Research Questions
Only a few research studies have assessed whether different IVR voices influence respondents. This research has focused on adults and asked a variety of sensitive questions, though none involved tobacco use. Overall, the literature gives little reason to suspect that voice type has a large or consistent effect on adults’ responses to sensitive questions (Couper, Singer, and Tourangeau 2004; Evans and Kortum 2010; Tourangeau, Couper, and Steiger 2003).
We did not find any published studies directly assessing how IVR voices influence youths’ survey responses in telephone interviews, or how audio computer-assisted self-interviewing (ACASI) voices influence responses in in-person interviews. Youth could plausibly respond to alternative IVR voices to a greater degree than adults if their perceived risk of revealing sensitive information is higher.
In our experiment, we assumed that youth respondents were thinking about disclosure risk in terms of potential audience. Thus, we expected youth receiving the adult female IVR voice to be thinking about the risk of parents or guardians learning their smoking behavior. Likewise, we expected youth receiving the youth female IVR voice to be thinking about the risk of siblings or friends learning their smoking behavior. These expectations are consistent with the “computers as social actors” viewpoint whereby youth would have to imagine the person behind the voice and think about how that person would react to their responses to questions on smoking behavior and intentions (Reeves and Nass 1997). The contrasting view, as discussed in Couper et al. (2004), would lead one to expect youth participants to respond to the two voices as similar computer applications that did not vary significantly in disclosure risk.
We were also interested in whether results varied by demographic subgroup, as they did for Currivan et al. (2004). Although that study examined the effect of IVR versus computer-assisted telephone interviewing (CATI) with a live interviewer, rather than differences between IVR voices, it focused on youth responses to sensitive smoking items and found that some demographic subgroups responded more strongly than others to the experimental design differences.
Data and Methods
The data used for this study come from the Florida Youth Cohort Tobacco Study (FL YCS), sponsored by the Florida Department of Health. The FL YCS is a longitudinal telephone survey designed to track tobacco-related beliefs, attitudes, and experiences of Florida youth aged 12–16. Baseline interviews were conducted in 2009 in English and are the basis for this research.
Florida households were sampled using a list-assisted landline random digit dial (RDD) frame, supplemented with directory-listed numbers to increase efficiency in reaching households with at least one eligible youth. Interviews were completed with 1,546 youth, though we present results for the 1,444 youth who provided sufficient data for the final analyses. Survey data were weighted to be representative of Florida youth aged 12–16 living in households covered by the sampling frame.
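The weighting step can be illustrated with a minimal post-stratification sketch. This is a hypothetical simplification for illustration only: the actual FL YCS weights would also reflect selection probabilities and nonresponse adjustments, and the cells and figures below are invented, not the study's.

```python
def poststratification_weights(sample_counts, population_shares):
    """Simple post-stratification: each respondent in a cell receives
    weight = population share / sample share for that cell.

    `sample_counts` maps cell label -> number of respondents;
    `population_shares` maps cell label -> known population proportion.
    Illustrative sketch only, not the FL YCS weighting scheme.
    """
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (count / n)
            for cell, count in sample_counts.items()}
```

For example, if 12–13-year-olds were 40 percent of the sample but 50 percent of the population, each would receive a weight of 1.25.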
Gaining cooperation to conduct interviews with eligible youth involved obtaining both parental consent and youth assent. Consent from the parent/guardian was acquired before speaking to the eligible youth and obtaining assent. Youth were selected using the most recent birthday method when more than one eligible youth was identified in a household.
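The most recent birthday method selects, among eligible household members, the one whose birthday occurred most recently before the interview date. A minimal sketch of that selection rule (the tuple format and names here are hypothetical, not the actual FL YCS instrument):

```python
from datetime import date

def most_recent_birthday(youths, today):
    """Select the eligible youth whose birthday fell most recently
    before `today`. `youths` is a list of (name, birth_month, birth_day)
    tuples; illustrative sketch only."""
    def days_since_birthday(month, day):
        this_year = date(today.year, month, day)
        most_recent = (this_year if this_year <= today
                       else date(today.year - 1, month, day))
        return (today - most_recent).days

    return min(youths, key=lambda y: days_since_birthday(y[1], y[2]))
```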
Youth were asked a series of demographic questions before being told they would be asked questions about their experiences with tobacco products through an automated phone system. Instructions were given on how to use the IVR system before youth were switched to it. Within the system, youth were randomly assigned to hear pre-recorded questions from either the adult or youth female voice. Respondents entered answers using the telephone keypad. Upon completion of the IVR module, youth were reconnected with a live interviewer to finish the study.
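The random assignment step amounts to a simple two-arm randomization. A minimal sketch, assuming unstratified assignment seeded per respondent so the allocation is reproducible; the FL YCS paper does not describe the IVR system's internal mechanism, so this is an illustration rather than the actual implementation:

```python
import random

def assign_voice(respondent_id):
    """Randomly assign a respondent to one of the two IVR voice
    conditions. Seeding by respondent ID makes the assignment
    deterministic and reproducible for a given respondent."""
    rng = random.Random(respondent_id)
    return rng.choice(["adult_female", "youth_female"])
```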
The overall response rate was 15.4 percent using American Association for Public Opinion Research (AAPOR) RR4. This rate was negatively impacted by our screening procedures, which required affirmative parental consent.
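AAPOR Response Rate 4 counts partial interviews as respondents and treats only an estimated fraction e of unknown-eligibility cases as eligible. A sketch of the formula follows; the disposition counts in the usage example are hypothetical, since we do not report the actual FL YCS case counts here:

```python
def aapor_rr4(I, P, R, NC, O, UH, UO, e):
    """AAPOR Response Rate 4:
        RR4 = (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))
    where I = complete interviews, P = partial interviews, R = refusals,
    NC = non-contacts, O = other eligible non-interviews, UH/UO = cases
    of unknown eligibility, and e = estimated eligibility rate among
    unknown cases."""
    return (I + P) / ((I + P) + (R + NC + O) + e * (UH + UO))
```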
Results
Table 1 presents a voice type comparison for lifetime and recent tobacco use behaviors among respondents in the IVR mode. Youth receiving the adult female voice were slightly more likely than those receiving the youth female voice to report that they had ever tried cigarette smoking. Conversely, youth receiving the youth female voice were slightly more likely to report that they had smoked on one or more days in the past 30 days, the most sensitive question in the instrument. Neither of these differences was statistically significant at the conventional p<0.05 level, although both p-values equaled 0.10, suggesting marginal significance. While these might represent meaningful differences between the two voices, we cannot be confident that true differences exist.
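Comparisons like these are commonly evaluated with a pooled two-proportion z-test. The paper does not state which test was used, so the sketch below, with invented proportions and sample sizes, is illustrative only:

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided, pooled two-proportion z-test: compares the proportion
    reporting a behavior under one voice condition (p1, n1) against the
    other (p2, n2). Returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```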
Table 2 shows outcomes for intentions to smoke cigarettes by voice type. Youth respondents receiving the adult female voice were found to be significantly more likely than those receiving the youth female voice to report that they would probably smoke anytime during the next year and if a best friend offered a cigarette (p<0.05).
We were also interested in whether factors such as age or gender were associated with the observed differences in smoking behavior and intentions by voice type. We found that younger youth aged 12–13 (8.9 percent) and female respondents (15.4 percent) were significantly more likely to report that they had ever tried cigarette smoking when asked by the adult female voice than by the youth female voice (p<0.05).
We found similar results when examining reports of intentions to smoke anytime during the next year. Younger youth aged 12–13 (7.1 percent) and female respondents (8.8 percent) again reported significantly more affirmative intentions to the adult female voice (p<0.05). Likewise, younger youth and females reported significantly more affirmative intentions to the adult female voice (7.6 percent and 7.1 percent, respectively; p<0.05) when reporting intentions to smoke if a best friend offered a cigarette.
Discussion
Following the “computers as social actors” paradigm (Reeves and Nass 1997), we assumed the perceived “audience” for youth responses could be salient and result in greater reporting of smoking behavior and intentions with the youth female voice. Instead, all differences in youth reports of smoking behavior or intentions that were statistically significant at the conventional p<0.05 level involved higher reports with the adult female voice. Voice type did matter for youth smoking reports, but not in the direction expected. The adult female voice – not the youth voice – elicited significantly higher reports of smoking intentions for the sample as a whole, and elicited higher reports of both intentions and past use for youths aged 12–13 and females.
Because voice type produced some differences in youths’ responses in this study, we recommend that survey practitioners pretest and experiment with different voices when budget and time allow. In our study, youths who were younger or female were the most likely to report differently based on voice type. Currivan et al. (2004) found that female respondents were more likely than males to report smoking behavior in IVR mode compared to CATI mode, particularly girls who believed their parents would strongly disapprove of their smoking. Combined with our current study, these findings indicate that some youth subpopulations may be more sensitive to protocol differences when survey questions cover sensitive topics. For youth surveys on sensitive topics such as illicit or illegal substance use, evaluating IVR voice types before primary data collection may be useful for avoiding this kind of bias.
If survey-specific experimentation with voice types is not feasible, we suggest practitioners continue to use the “standard” adult female voice typical of most IVR applications. Although our study could not determine whether this standard voice increases or decreases reporting bias, any measurement bias it introduces would be shared by most existing surveys using this kind of IVR voice, and could therefore be ignored as a source of differences when comparing estimates across such surveys.