Vol. 5, Issue 2, 2012 · March 31, 2012 EDT

Introduction to Special Issue of Survey Practice on Item Nonresponse

Don Dillman
Keywords: survey research, nonresponse
https://doi.org/10.29115/SP-2012-0013
Survey Practice
Dillman, Don. 2012. “Introduction to Special Issue of Survey Practice on Item Nonresponse.” Survey Practice 5 (2). https://doi.org/10.29115/SP-2012-0013.


Considerable interest exists in the joint use of Web and mail questionnaires to collect sample survey data. This mixed-mode interest stems from two important considerations. First, nearly one-third of all U.S. households either do not have Internet access or use it infrequently (less than once a week), making it unlikely that Internet surveys will be completed by representative samples of all households (Pew Research Center 2011). Second, address-based sampling (ABS), which appears to be our most adequate household sample frame (Iannacchione 2011), makes it possible to use mail contacts to request Web survey responses from those who are able and willing to respond in that way. For those who cannot or will not respond over the Internet, mail questionnaires provide an alternative means of responding that is likely to improve the demographic representativeness of respondents (Messer and Dillman 2011).

Previous research has suggested that one of the shortcomings of mail questionnaires is that they produce higher item nonresponse rates than either telephone or face-to-face interviewing (de Leeuw 1992; de Leeuw, Hox, and Huisman 2003). Research on item nonresponse rate differences between Web and mail surveys has produced mixed results: some studies have reported lower rates for Web surveys (Kiesler and Sproull 1986; Boyer et al. 2002; Kwak and Radler 2002; Denscombe 2006; Bech and Kristensen 2009), while one article found similar rates (Wolfe et al. 2009), and two others found higher rates for Web surveys (Manfreda and Vehovar 2002; Brečko and Carstens 2006). The variation in results suggests a need for additional research to clarify past findings. If mail surveys consistently produce substantially higher item nonresponse rates than Web surveys, that difference could pose a problem for pairing Web and mail modes in a mixed-mode design.

Reasons exist for expecting that modern Internet survey methods, with faster Web connections and more advanced construction capabilities, will achieve lower item nonresponse than mail surveys. These design procedures include the use of individual page construction, automatic branching from screen questions, and better control of the navigational path through the questionnaire (Kwak and Radler 2002). In theory, item nonresponse to Web questionnaires can be eliminated entirely by requiring an answer to every item. However, that procedure may not be acceptable, both because of Institutional Review Board (IRB) requirements that all individual answers to survey questions be “voluntary” and because requiring answers to every item may lower overall unit response through early terminations.
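
To make these design features concrete, the following minimal sketch (written in Python purely for illustration; the question identifiers and function names are hypothetical and are not drawn from any of the studies in this issue) shows how a Web survey engine might implement automatic branching from a screen question, along with an optional required-answer rule that, when switched on, prevents item nonresponse for that item at the cost of forcing a response.

    from typing import Optional

    # Hypothetical branching map: the answer to a screen question determines
    # which item the respondent sees next.
    BRANCHING = {
        ("owns_computer", "yes"): "hours_online_per_week",
        ("owns_computer", "no"): "reasons_for_no_computer",
    }

    def next_question(current_id: str, answer: Optional[str]) -> Optional[str]:
        """Automatic branching: the software, not the respondent, chooses the
        navigational path based on the answer to the screen question."""
        return BRANCHING.get((current_id, answer))  # None signals end of the section

    def accept_submission(answer: Optional[str], required: bool) -> bool:
        """If required is True, the page cannot be submitted blank, which in
        principle eliminates item nonresponse for that item; if False, a blank
        answer is accepted and later counted as item nonresponse."""
        return (answer is not None and answer != "") or not required

    # Voluntary-answer design: a blank response is accepted (and becomes item nonresponse).
    assert accept_submission(None, required=False) is True
    # Forced-answer design: the same blank response is rejected.
    assert accept_submission(None, required=True) is False
    # Branching: a "yes" on the screen question routes to the follow-up item.
    assert next_question("owns_computer", "yes") == "hours_online_per_week"

As the paragraph above notes, whether to set the required-answer rule is less a technical question than a matter of IRB requirements and the risk of early terminations.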

The four papers assembled for this special issue of Survey Practice were all presented in a thematic session at the 2011 AAPOR Conference. Each of these papers addresses the question of whether the quality of questionnaire responses differs across modes, and how combining mail and Web modes in data collection affects item nonresponse. All of the papers included here provide explicit comparisons of item nonresponse for mail and Web questionnaires using Web programming that did not require a response to each question, except when branching was required to determine the next appropriate question.

The first analysis by Messer, Edwards and Dillman examines item nonresponse for results from three surveys of state and regional address-based samples of households. The large number of respondents to each survey mode within three experiments makes it possible to examine the effects of demographic and questionnaire characteristics by mode.

The second analysis by Lesser, Newton and Yang also reports item nonresponse differences for Web and mail questionnaire respondents in general public surveys. The authors use an annual survey on similar topics conducted over three years, and include a telephone mode for two of those years. This allows telephone to be compared with the mail-only and Web+mail designs that were being considered as data collection alternatives.

The third analysis by Israel and Lamm is a quasi-general public survey of clients of the Florida Cooperative Extension Service, which provides nonformal education to all interested persons. They test item nonresponse for groups that provided e-mail contact information, which was then used to obtain higher proportions of Web vs. paper responses. They also provide insight into how item nonresponse varies for different question structures across multiple years.

The fourth paper by Millar and Dillman provides a Web and mail comparison of item nonresponse for university undergraduate students. Because of the availability of both postal and e-mail addresses, it was possible to assign students randomly to either Web or mail treatment groups. This eliminated choice of response mode as a contributor to Web vs. mail item nonresponse rates.

Results of these analyses are strikingly consistent. Overall, paper questionnaires sent to the general public generate slightly higher item nonresponse than do the Web surveys. Differences by question type vary considerably, but questions eliciting higher item nonresponse in one mode tend to do so in the other modes as well. In contrast, the student survey exhibited no significant overall differences in item nonresponse across modes, but, as in the general public surveys, there were variations by question type.

Together these studies suggest that while the differences in item nonresponse between Web and mail should not be ignored in the design of mixed-mode surveys, these differences are sufficiently small that they do not constitute a major barrier to attempting to combine mail and Web data collection in the same mixed-mode survey.

References

Bech, M., and M.B. Kristensen. 2009. “Differential Response Rates in Postal and Web-Based Surveys among Older Respondents.” Survey Research Methods 3 (1): 1–6.
Boyer, K.K., J.R. Olson, R.J. Calantone, and E.C. Jackson. 2002. “Print versus Electronic Surveys: A Comparison of Two Data Collection Methodologies.” Journal of Operations Management 20:357–73.
Brečko, B. Neza, and R. Carstens. 2006. “Online Data Collection in SITES 2006: Paper Survey versus Web Survey - Do They Provide Comparable Results?” In Proceedings of the IEA International Research Conference (IRC 2006), 261–69. Washington, DC.
de Leeuw, E.D., J. Hox, and M. Huisman. 2003. “Prevention and Treatment of Item Nonresponse.” Journal of Official Statistics 19 (2): 153–76.
Denscombe, M. 2006. “Web-Based Questionnaires and the Mode Effect: An Evaluation Based on Completion Rates and Data Contents of near Identical Questionnaires Delivered in Different Modes.” Social Science Computer Review 24:246–54.
Iannacchione, V.G. 2011. “The Changing Role of Address-Based Sampling in Survey Research.” Public Opinion Quarterly 75 (3): 556–75.
Kiesler, S., and L.S. Sproull. 1986. “Response Effects in the Electronic Survey.” Public Opinion Quarterly 50 (3): 402–13.
Kwak, N., and B. Radler. 2002. “A Comparison between Mail and Web Surveys: Response Pattern, Respondent Profile, and Data Quality.” Journal of Official Statistics 18 (2): 257–73.
de Leeuw, E.D., ed. 1992. Data Quality in Mail, Telephone, and Face-to-Face Surveys. Amsterdam: TT-Publicaties.
Manfreda, K.L., and V. Vehovar. 2002. “Do Web and Mail Surveys Provide the Same Results?” Development in Social Science Methodology 18:149–69.
Messer, B.L., and D.A. Dillman. 2011. “Surveying the General Public over the Internet Using Addressed-Based Sampling and Mail Contact Procedures.” Public Opinion Quarterly 75 (3): 429–57.
Pew Research Center. 2011. “Data Tabulations, Social Side of the Internet.” November 28, 2011. https://www.pewinternet.org/wp-content/uploads/sites/9/media/Files/Questionnaire/2011/PIAL-Social-Side-of-Internet_FINAL-Topline.pdf.
Wolfe, E.W., P.D. Converse, O. Airen, and N. Bodenhorn. 2009. “Unit and Item Nonresponses and Ancillary Information in Web- and Paper-Based Questionnaires Administered to School Counselors.” Measurement and Evaluation in Counseling and Development 21 (2): 92–103.
