Introduction to Special Issue of Survey Practice on Item Nonresponse

Don A. Dillman Washington State University

Considerable interest exists in the joint use of Web and mail questionnaires to collect sample survey data. This mixed-mode interest stems from two important considerations. First, nearly one-third of all U.S. households either do not have Internet access or use it infrequently (less than once a week), making it unlikely that Internet surveys will be completed by representative samples of all households (Pew Research Center 2011). Second, address-based sampling (ABS), which appears to be our most adequate household sample frame (Iannacchione 2011), makes it possible to use mail contacts to request Web survey responses from those who are able and willing to respond in that way. For those who cannot or will not respond over the Internet, mail questionnaires provide an alternative means of responding that is likely to improve the demographic representativeness of respondents (Messer and Dillman 2011).

Previous research has suggested that one shortcoming of mail questionnaires is that they produce higher item nonresponse rates than either telephone or face-to-face interviewing (de Leeuw 1992; de Leeuw et al. 2003). Research on item nonresponse differences between Web and mail surveys has produced mixed results: some studies have reported lower rates for Web surveys (Kiesler and Sproull 1986; Boyer et al. 2002; Kwak and Radler 2002; Denscombe 2006; Bech and Kristensen 2009), one found similar rates (Wolfe et al. 2009), and two others found higher rates for Web surveys (Manfreda and Vehovar 2002; Brečko and Carstens 2006). This variation suggests a need for additional research to clarify past findings. If mail surveys consistently produce substantially higher item nonresponse rates than Web surveys, that difference could pose a problem for pairing Web and mail modes in a mixed-mode design.

Reasons exist for expecting that modern Internet survey methods, which use faster Web connections and more advanced construction capabilities, will achieve lower item nonresponse than mail surveys. These design features include individual page construction, automatic branching from screen questions, and better control of the navigational path through the questionnaire (Kwak and Radler 2002). In theory, item nonresponse to Web questionnaires can be eliminated entirely by requiring an answer to every item. However, that procedure may not be acceptable, both because Institutional Review Board (IRB) requirements stipulate that all individual answers to survey questions be “voluntary” and because requiring answers to every item may lower overall unit response through early terminations.

The four papers assembled for this special issue of Survey Practice were all presented in a thematic session at the 2011 AAPOR Conference. Each of these papers addresses the question of whether the quality of questionnaire responses differs across modes, and how combining mail and Web modes in data collection affects item nonresponse. All of the papers included here provide explicit comparisons of item nonresponse for mail and Web questionnaires using Web programming that did not require a response to each question, except when branching was required to determine the next appropriate question.

The first analysis by Messer, Edwards and Dillman examines item nonresponse for results from three surveys of state and regional address-based samples of households. The large number of respondents to each survey mode within three experiments makes it possible to examine the effects of demographic and questionnaire characteristics by mode.

The second analysis by Lesser, Newton and Yang also reports item nonresponse differences between Web and mail questionnaire respondents in general public surveys. The authors use an annual survey on quite similar topics over three years, including a telephone mode in two of those years. This allows comparisons of telephone, mail-only, and Web+mail designs, which were being considered as data collection alternatives.

The third analysis by Israel and Lamm examines a quasi-general public survey of clients of the Florida Cooperative Extension Service, which provides nonformal education to all interested persons. The authors test item nonresponse for groups that provided e-mail contact information, which was then used to obtain higher proportions of Web vs. paper responses. They also provide insight into how item nonresponse varies for different question structures across multiple years.

The fourth paper by Millar and Dillman provides a Web and mail comparison of item nonresponse for university undergraduate students. Because of the availability of both postal and e-mail addresses, it was possible to assign students randomly to either Web or mail treatment groups. This eliminated choice of response mode as a contributor to Web vs. mail item nonresponse rates.

Results of these analyses are strikingly consistent. Overall, paper questionnaires sent to the general public generate slightly higher item nonresponse than do Web surveys. Differences by question type vary considerably, but questions eliciting higher item nonresponse in one mode tend to do so in the other modes as well. In contrast, the student survey exhibited no significant overall differences in item nonresponse across modes, but, as in the general public surveys, there were variations by question type.

Together these studies suggest that while the differences in item nonresponse between Web and mail should not be ignored in the design of mixed-mode surveys, these differences are sufficiently small that they do not constitute a major barrier to attempting to combine mail and Web data collection in the same mixed-mode survey.

References

Bech, M. and M.B. Kristensen. 2009. Differential response rates in postal and Web-based surveys among older respondents. Survey Research Methods 3(1): 1–6.
Boyer, K.K., J.R. Olson, R.J. Calantone and E.C. Jackson. 2002. Print versus electronic surveys: a comparison of two data collection methodologies. Journal of Operations Management 20: 357–373.
Brečko, B.N. and R. Carstens. 2006. Online data collection in SITES 2006: paper survey versus Web survey – do they provide comparable results? Proceedings of the IEA International Research Conference (IRC 2006), Washington, DC. pp. 261–269.
de Leeuw, E.D. 1992. Data quality in mail, telephone, and face-to-face surveys. TT-Publicaties, Amsterdam.
de Leeuw, E.D., J. Hox and M. Huisman. 2003. Prevention and treatment of item nonresponse. Journal of Official Statistics 19(2): 153–176.
Denscombe, M. 2006. Web-based questionnaires and the mode effect: an evaluation based on completion rates and data contents of near-identical questionnaires delivered in different modes. Social Science Computer Review 24: 246–254.
Iannacchione, V.G. 2011. The changing role of address-based sampling in survey research. Public Opinion Quarterly 75(3): 556–575.
Kiesler, S. and L.S. Sproull. 1986. Response effects in the electronic survey. Public Opinion Quarterly 50(3): 402–413.
Kwak, N. and B. Radler. 2002. A comparison between mail and Web surveys: response pattern, respondent profile, and data quality. Journal of Official Statistics 18(2): 257–273.
Manfreda, K.L. and V. Vehovar. 2002. Do Web and mail surveys provide the same results? Developments in Social Science Methodology 18: 149–169.
Messer, B.L. and D.A. Dillman. 2011. Surveying the general public over the internet using address-based sampling and mail contact procedures. Public Opinion Quarterly 75(3): 429–457.
Pew Research Center. 2011. Data tabulations, social side of the internet. 28 November 2011. Available at: http://pewInternet.org/~/media//Files/Questionnaire/2011/PIAL%20Social%20Side%20of%20Internet_FINAL%20Topline.pdf.
Wolfe, E.W., P.D. Converse, O. Airen and N. Bodenhorn. 2009. Unit and item nonresponses and ancillary information in web- and paper-based questionnaires administered to school counselors. Measurement and Evaluation in Counseling and Development 21(2): 92–103.
