Introduction
Incomplete data in surveys is a major concern for researchers. Data incompleteness can arise from undercoverage, unit nonresponse, and/or item nonresponse (Groves et al. 2009). This paper focuses specifically on item nonresponse to demographic items, including income, race, gender, and age. Item nonresponse occurs when respondents in the selected sample provide some, but not all, of the requested information (Groves et al. 2009). Because item nonresponse can lead to bias and measurement error, it is essential to reduce it.
Factors that Could Explain Item Nonresponse
Researchers have reported predictors both of nonresponse in general and of income item nonresponse in particular (Ferber 1966; Grønhaug, Gilly, and Enis 1988; Ralph 1984). For instance, Ferber (1966) and others (Ralph 1984; Riphahn and Serfling 2005) reported higher item nonresponse among females than males and among people with low educational attainment than those with high attainment. Higher item nonresponse rates have also been reported among people who are self-employed or homeowners (Pleis and Dahlhamer 2004; Sousa-Poza and Henneberger 2000), in a lower social position, female, or older (e.g., age 65 and over; Ralph 1984; Ross and Reynolds 1996; Schräpler 2003), and among African Americans (Ross and Reynolds 1996).
Researchers have suggested that cognitive and motivational factors contribute to item nonresponse, including nonresponse to income items (Loosveldt et al. 1999a, b; Tourangeau, Rips, and Rasinski 2000). Beatty, Herrmann, et al. (2002) proposed three antecedents that influence item nonresponse in surveys: (1) amount of knowledge about the topic (cognitive), (2) perceptions about the accuracy required to answer the question (cognitive), and (3) willingness to respond to the question (motivational). Juster and Smith (1997) and Moore, Stinson, and Welniak, Jr. (1999) proposed similar models that include cognitive and motivational factors; Moore, Stinson, and Welniak, Jr. (1999), however, also added the interpretation and comprehension of questions to their model.
On the other hand, Singer, Hippler, and Schwarz (1992) suggested that item nonresponse may stem from concerns about confidentiality. Income questions in particular have been reported to be difficult to answer or to be perceived as an invasion of privacy (Singer, Mathiowetz, and Couper 1993; Tourangeau and Yan 2007). Other researchers have suggested that item nonresponse can result from survey design (e.g., response format, poor question wording; Groves et al. 2009).
Strategies to Reduce Item Nonresponse
Efforts have been made to decrease item nonresponse by addressing cognitive, motivational, and design issues. For example, researchers have used a series of bracketed income questions, known as the ‘unfolding bracket’ technique (Heeringa, Hill, and Howell 1993; Juster and Smith 1997). The technique is used after a respondent has refused to answer, or said “don’t know” to, an open-ended question asking for an exact income amount: the interviewer then asks a series of closed-ended questions with specific income ranges and asks the respondent to place his or her income within one of those ranges. Unfolding brackets have been noted to reduce missing financial data by 50% or more (Heeringa, Hill, and Howell 1993; Juster and Smith 1997) and income nonresponse by 50% (Battaglia et al. 2002; McGrath 2005). However, evidence suggests that the technique addresses only the cognitive causes of income nonresponse, not the motivational ones: it reduces cognitive burden by providing a range of income brackets, but it does not attempt to persuade respondents to report income (Yan, Curtin, and Jans 2010).
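To make the branching logic concrete, the following is a minimal sketch of one possible unfolding-bracket flow; the dollar thresholds, prompts, and return values are hypothetical illustrations, not drawn from any of the cited instruments.

def unfolding_bracket_income(ask):
    # 'ask' is a stand-in for interviewer I/O: it takes a prompt and
    # returns the respondent's answer as a lowercase string.
    exact = ask("What was your total household income last year?")
    if exact not in ("refused", "don't know"):
        return ("exact", exact)
    # Initial nonresponse: fall back to closed-ended bracketed questions.
    low, high = 0, None
    for threshold in (25_000, 50_000, 100_000):  # hypothetical split points
        answer = ask(f"Was it more than ${threshold:,}? (yes/no/refused)")
        if answer == "yes":
            low = threshold        # income is above this threshold; keep narrowing
        elif answer == "no":
            high = threshold       # income falls below this threshold; stop
            break
        else:
            return ("nonresponse", None)  # refusal at the bracket stage as well
    return ("bracket", (low, high))  # e.g. (25_000, 50_000), or (100_000, None)

# Example: run interactively with the built-in input()
# result = unfolding_bracket_income(input)

Even when the open-ended question fails, this flow recovers a usable income range unless the respondent also refuses the brackets, which is the mechanism behind the reported reductions in missing financial data.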
Moore (2006) developed three procedures to elicit responses about income. The first used a flexible approach in which respondents could choose how to report their incomes. The second expanded the use of closed-ended range questions as follow-up probes after an initial nonresponse. The third used dependent interviewing as a follow-up after a nonresponse event: the interviewer says, for example, “I have recorded from last time that you received [dollars] from [income source] last February. Does that still sound about right?” (Moore 2006, 3). Moore (2006) found reductions in item nonresponse with the second and third approaches.
Other solutions relate to the placement of demographic questions within questionnaires. Some researchers contend that the best placement is at the end of the questionnaire (Dillman 2007; Fink, Bourque, and Fielder 2003; Jackson 2009; Whitley 1996). Stoutenborough (2008) argued that placing demographic items at the end allows the primary survey questions to be answered while participants are most alert and invested, reserving the less engaging demographic questions for last. In contrast, other researchers argue for placing demographic items at the beginning of a survey interview to build rapport (Babbie 2008). The evidence for both placements is mixed: Teclaw, Price, and Osatuke (2012) reported that placing demographic questions at the beginning of a questionnaire lowered item nonresponse rates, whereas others maintain that the end of the questionnaire is preferable (Dillman 2007; Fink, Bourque, and Fielder 2003; Jackson 2009; Whitley 1996).
Many of the efforts to reduce item nonresponse have focused on survey design and cognition; fewer studies have focused on increasing respondents’ motivation to report demographic items. In addition, reported efforts have not considered respondents’ contexts, including literacy and culture. We therefore tested whether motivation to complete demographic survey items would increase among low-income, less-educated racial and ethnic minorities after they were provided a meaningful verbal explanation for asking demographic questions. The explanation described the importance of providing demographic information and explained how that information could be used to improve care for people from communities like theirs, or for populations with similar characteristics. We hypothesized that an explanatory message crafted by people from backgrounds similar to the participants’ would increase participants’ motivation to report the desired demographic items, thereby reducing item nonresponse.
Methods
This study was approved by the University of Wisconsin-Madison Institutional Review Board.
Overview of Study Design
We recently described in detail our evaluation of the impact of an established lay group, the Community Advisors on Research Design and Strategies (CARDS), on revising research materials and consent forms to improve recruitment and responsiveness (Bowers, Jacobson, and Krupp 2017; Jacobson, Bowers, and Krupp 2016). In brief, the CARDS is a community group of individuals who are generally low income, experience high unemployment, and are primarily from racial and ethnic minority groups (Kaiser, Thomas, and Bowers 2017). Using materials from a previously completed study of heart failure, participants were asked to assess their likelihood of responding positively to recruiting materials, consenting to participate in the study, and responding accurately to demographic items if they were actually enrolled in the study. This paper addresses participant responses to the demographic items included in that study.
Design and Sample
A cross-sectional, randomized comparison study was designed to determine whether providing a meaningful explanation would increase participants’ motivation to complete demographic survey items, thereby reducing item nonresponse for demographic items. Prior to collecting data, the researchers met with the CARDS to draft a brief explanation of why responding to demographic items is important. This explanation differs from how researchers more often approach demographic items: providing no explanation, or only a general one (e.g., that the research will improve care for a particular condition, or that the information is required by the sponsor), without connecting the items to how people like the respondents might benefit from answering them.
Recruitment was done by a long-standing employee of the organization who lived in the local community. A convenience sample of 99 adult participants was recruited from food banks and other programs (e.g., parenting and childcare programs, women’s support groups, and senior meal programs) at two community centers in low-income, racially and ethnically diverse areas of the city. People were eligible to participate if they were eligible for food bank support or for other community center services targeted to low-income individuals.
Data Collection
Three white researchers from a local university collected data. The researchers met face-to-face with eligible participants individually, on several different days, at the two community centers. To minimize contamination from social conversation about the study, researchers randomized interview assignments by day: all participants seen on the same day at the same center were assigned to the same group. Groups either heard the CARDS explanation of why participation in research is important (see Table 1) or heard no explanation. The researchers read the explanation aloud to all participants in group 2 so that it would be delivered in the same way (standardized) and so that literacy would not be a barrier.
Participants in each group were shown four demographic questions and then were read the multiple-choice responses (see Table 2): “If this question appeared on a survey you had agreed to complete, would you: answer the question correctly, answer the question incorrectly, or leave the item blank?” Participants were asked only how they would respond to each question; they were not asked to actually answer the items.
Analysis of Data
Participants’ answers to the questions about whether they anticipated responding to the demographic items were compared between the two groups. Analysis was done in two steps. First, we examined between-group differences in anticipated completion for each demographic item: (a) age, (b) gender, (c) race, and (d) income. We used a difference-in-proportions test and calculated 95% confidence intervals for the between-group differences in the proportion of participants anticipating willingness to respond.
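As a minimal sketch of this first step, the code below computes a difference in proportions with a Wald-type 95% confidence interval. This is a generic illustration under the assumption of a standard Wald interval; the paper does not report the exact interval method or per-group counts, so the counts shown are purely hypothetical.

import math

def prop_diff_ci(x1, n1, x2, n2, z=1.96):
    # Difference in proportions (p1 - p2) with a Wald-type 95% CI.
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Purely illustrative counts: 5 of 50 in group 1 vs. 2 of 49 in group 2
# anticipated leaving an item blank.
diff, (lo, hi) = prop_diff_ci(5, 50, 2, 49)
print(f"difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")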
Second, because of the small sample size, we calculated effect sizes for the proportional differences to gauge the magnitude of the nonresponse differences between group 1 (no explanation) and group 2 (CARDS explanation). We used Cohen’s h (arcsine transformation) effect size (Cohen 1988); Cohen’s h statistics are made positive for effects where the explanation outperforms no explanation.
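For reference, Cohen’s h compares two proportions p_1 and p_2 on the arcsine-transformed scale (Cohen 1988):

h = 2\arcsin\sqrt{p_1} - 2\arcsin\sqrt{p_2}

with the sign convention described above (h made positive where the explanation outperforms no explanation).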
Results
Because the purpose of this study was to examine the impact of receiving the tailored explanation vs. no explanation on reporting demographic items, we did not screen for or collect demographic data on age, gender, race, or income prior to the interview. However, because we recruited respondents from specific neighborhoods, the respondents were predominantly African American and lower income, and were about evenly divided between female and male.
In general, the proportion of intended non-reporting was higher on all demographic items in the group not receiving the explanation (group 1). For example, as shown in Table 3, 5.6% of participants in group 1 indicated they would not report age, compared with 2.2% of participants in group 2 (those receiving the explanation), a difference of 3.3 percentage points. This difference was not statistically significant [95% confidence interval (CI): −0.135, 0.0667].
Although none of the between-group proportional differences in item nonresponse was statistically significant, we calculated Cohen’s h effect sizes to gauge the magnitude of the effects. As shown in Table 3, providing respondents with an explanation had small (Cohen’s h=0.184) to moderate (Cohen’s h=0.342) effects in reducing demographic item nonresponse. Specifically, adding an explanation produced a small improvement in the reporting of gender and age. Effects for all of the demographic items (age, gender, race, and income) were in the same direction, indicating some improvement when the explanation was provided. The largest improvement was for the income item: nonresponse to the income question was 35.19% in group 1 but was 15.2 percentage points lower (about 20%) when the explanation was provided, a moderate effect size (Cohen’s h=0.34) in the desired direction. These calculations focused on anticipated blank responses rather than on the accuracy of responses.
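As a check on the reported figures, substituting the income nonresponse proportions (35.19% in group 1 and, after the 15.2 percentage-point reduction, about 20.0% in group 2) into the formula above reproduces the reported effect size:

h = 2\arcsin\sqrt{0.3519} - 2\arcsin\sqrt{0.1999} \approx 1.270 - 0.927 \approx 0.34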
Discussion
These findings suggest that providing a meaningful verbal explanation may improve respondents’ motivation to respond to demographic items. The increased motivation may be due to the specific explanation created by the CARDS, an explanation tailored to the respondents’ context. This population has been found to mistrust researchers and research and to worry about what researchers will do with study results (Yancey, Ortega, and Kumanyika 2006). The explanation focused on how respondents, or people like them, might benefit from participation; it may therefore have motivated participants by replacing fear of how the results could be used with an anticipated benefit from their use. Furthermore, because the verbal explanation emphasized benefits to people like themselves, participants may have judged that the benefits outweighed any anticipated costs, or they may have seen benefits for their community rather than just personal gain. This interpretation is consistent with Dillman’s (1978) social exchange approach to increasing response in mail surveys, which holds that respondents are more likely to respond if they believe that, in the long run, the anticipated benefits of responding outweigh the anticipated costs.
It is perhaps not surprising that participants in the group that did not receive a tailored explanation had higher anticipated item nonresponse. Without the explanation, participants may have been less motivated to complete the items for reasons described in the existing literature (Loosveldt et al. 1999a, 1999b). In the absence of an explanation tailored to their context, the task of completing the demographic items may have seemed less attractive (Cialdini 1984).
While we did not directly measure interviewer effects on respondents or their responses, we were surprised to observe that respondents were forthcoming in disclosing their unwillingness to respond to certain items, as well as their reasons, to researchers who were not of the same racial and ethnic background. For instance, some respondents shared that they would not provide demographic information on a survey because they did not trust how the results would be used or because they feared that people hearing the results would make assumptions about them. This observation is inconsistent with existing research on racial and ethnic interviewer effects (Davis and Silver 2003; Samples et al. 2014): researchers have reported that African Americans respond differently to surveys when interviewed by African American interviewers than by European American interviewers (Davis and Silver 2003; Lowe, Lustig, and Marrow 2011; Samples et al. 2014). It is possible that the tailored explanation created a climate of trust for the respondents in this study. However, more research is needed to better understand whether and how a tailored explanation can affect interviewer effects and item nonresponse rates.
Although our findings appear promising for reducing item nonresponse through a tailored, meaningful verbal explanation, we did not test how the verbal explanation compared with having respondents read a written explanation on their own. Future studies could therefore examine whether a verbal or a written explanation is more effective in reducing item nonresponse. Because we did not have African American interviewers deliver the tailored explanation, we do not know whether there was an interviewer effect on response; future studies could investigate how the racial/ethnic background of interviewers affects item response rates when a tailored explanation is used. It is also possible that social desirability bias occurred after we provided the explanation, but we did not evaluate this; future research could examine the effect of a tailored explanation on social desirability. Finally, we did not control for interviewer effects: the three researchers who read the questions may have affected responses differently.
Acknowledgement
This project was partially supported by the Clinical and Translational Science Award (CTSA) program, through the NIH National Center for Advancing Translational Sciences (NCATS), grant UL1TR000427. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.