Considerable interest exists in the joint use of Web and mail questionnaires to collect sample survey data. This mixed-mode interest stems from two important considerations. First, nearly one-third of all U.S. households either do not have Internet access or use it infrequently (less than once a week), making it unlikely that Internet surveys alone will be completed by representative samples of all households (Pew Research Center 2011). Second, address-based sampling (ABS), which appears to be the most adequate household sample frame now available (Iannacchione 2011), makes it possible to use mail contacts to request Web survey responses from those who are able and willing to respond in that way. For those who cannot or will not respond over the Internet, mail questionnaires provide an alternative means of responding that is likely to improve the demographic representativeness of respondents (Messer and Dillman 2011).
Previous research has suggested that one shortcoming of mail questionnaires is that they produce higher item nonresponse rates than either telephone or face-to-face interviewing (de Leeuw 1992; de Leeuw, Hox, and Huisman 2003). Research on item nonresponse differences between Web and mail surveys has produced mixed results: some studies have reported lower rates for Web surveys (Kiesler and Sproull 1986; Boyer et al. 2002; Kwak and Radler 2002; Denscombe 2006; Bech and Kristensen 2009), one article found similar rates (Wolfe et al. 2009), and two others found higher rates for Web surveys (Manfreda and Vehovar 2002; Brečko and Carstens 2006). This variation suggests a need for additional research to clarify past findings. If mail surveys consistently produce substantially higher item nonresponse rates than Web surveys, that difference could pose a problem for pairing Web and mail modes in a mixed-mode design.
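Because every paper in this issue turns on comparisons of item nonresponse rates, a brief illustration of the metric may be useful. The sketch below is a hypothetical example rather than code from any of the studies cited; the record layout, the field names, and the use of None to mark a skipped item are all assumptions made for illustration.

```python
# A minimal, hypothetical sketch of computing item nonresponse rates
# by survey mode: the percent of respondents in each mode who left
# each item blank. Field names ("mode", "q1", ...) and the convention
# that None marks a skipped item are illustrative assumptions.
from collections import defaultdict

def item_nonresponse_rates(records, items):
    """Percent of respondents leaving each item blank, by mode."""
    missing = defaultdict(lambda: defaultdict(int))  # mode -> item -> count
    totals = defaultdict(int)                        # mode -> respondents
    for r in records:
        totals[r["mode"]] += 1
        for item in items:
            if r.get(item) is None:                  # item left blank
                missing[r["mode"]][item] += 1
    return {mode: {item: 100 * missing[mode][item] / totals[mode]
                   for item in items}
            for mode in totals}

respondents = [
    {"mode": "web",  "q1": 4,    "q2": "yes"},
    {"mode": "mail", "q1": None, "q2": "no"},   # skipped q1
    {"mode": "mail", "q1": 2,    "q2": None},   # skipped q2
]
print(item_nonresponse_rates(respondents, ["q1", "q2"]))
# {'web': {'q1': 0.0, 'q2': 0.0}, 'mail': {'q1': 50.0, 'q2': 50.0}}
```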
There are reasons to expect that modern Internet survey methods, which benefit from faster Web connections and more advanced construction capabilities, will achieve lower item nonresponse than mail surveys. These design procedures include individual page construction, automatic branching from screen questions, and better control of the navigational path through the questionnaire (Kwak and Radler 2002). In theory, item nonresponse to Web questionnaires can be eliminated entirely by requiring an answer to every item. However, that procedure may not be acceptable, both because Institutional Review Board (IRB) requirements stipulate that all individual answers to survey questions be “voluntary” and because requiring answers to every item may lower overall unit response by prompting early terminations.
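To make these design features concrete, the sketch below shows one possible way a Web instrument might implement page-by-page delivery, automatic branching from a screen question, and an optional require-every-answer rule. It is not drawn from any actual survey platform; the question IDs, the branching table, and the require_all flag are illustrative assumptions.

```python
# A hypothetical sketch of the Web design features described above:
# page-by-page delivery, automatic branching from screen questions,
# and an optional rule requiring an answer to every item.
QUESTIONS = {
    "q1": {"text": "Do you use the Internet?", "next": {"yes": "q2", "no": "q3"}},
    "q2": {"text": "Hours online per week?",   "next": "q3"},
    "q3": {"text": "Your age?",                "next": None},
}

def administer(answers, require_all=False):
    """Walk the branching path, collecting answers page by page."""
    qid, collected = "q1", {}
    while qid is not None:
        answer = answers.get(qid)          # stand-in for live user input
        nxt = QUESTIONS[qid]["next"]
        is_screen = isinstance(nxt, dict)  # its answer decides the path
        # Screen questions must be answered so branching can proceed;
        # other items may be skipped unless every answer is required.
        if answer is None and (require_all or is_screen):
            raise ValueError(f"{qid}: an answer is required")
        collected[qid] = answer
        qid = nxt[answer] if is_screen else nxt
    return collected

# Branching routes a "no" on q1 past q2; q3 may be left blank.
print(administer({"q1": "no", "q3": None}))
# {'q1': 'no', 'q3': None}
```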
The four papers assembled for this special issue of Survey Practice were all presented in a thematic session at the 2011 AAPOR Conference. Each addresses whether the quality of questionnaire responses differs across modes and how combining mail and Web modes in data collection affects item nonresponse. All of the papers provide explicit comparisons of item nonresponse for mail and Web questionnaires, using Web programming that did not require a response to each question except when an answer was needed to determine the next appropriate question.
The first analysis, by Messer, Edwards, and Dillman, examines item nonresponse in three surveys of state and regional address-based samples of households. The large number of respondents to each survey mode within the three experiments makes it possible to examine the effects of demographic and questionnaire characteristics by mode.
The second analysis, by Lesser, Newton, and Yang, also reports item nonresponse differences between Web and mail questionnaire respondents in general public surveys. The authors use an annual survey covering quite similar topics over three years, including a telephone mode in two of those years. This design allows the telephone mode to be compared with the mail-only and Web+mail designs that were being considered as data collection alternatives.
The third analysis, by Israel and Lamm, is a quasi-general public survey of clients of the Florida Cooperative Extension Service, which provides nonformal education to all interested persons. The authors test item nonresponse for groups that provided e-mail contact information, which was then used to obtain higher proportions of Web vs. paper responses. They also provide insight into how item nonresponse varies for different question structures across multiple years.
The fourth paper, by Millar and Dillman, compares Web and mail item nonresponse for university undergraduate students. Because both postal and e-mail addresses were available, students could be assigned randomly to either a Web or a mail treatment group, eliminating choice of response mode as a contributor to Web vs. mail item nonresponse differences.
Results of these analyses are strikingly consistent. Overall, paper questionnaires sent to the general public generate slightly higher item nonresponse than do Web surveys. Differences by question type vary considerably, but questions eliciting higher item nonresponse in one mode tend to do so in the other mode as well. In contrast, the student survey exhibited no significant overall difference in item nonresponse across modes, although, as in the general public surveys, there were variations by question type.
Together these studies suggest that while differences in item nonresponse between Web and mail should not be ignored in the design of mixed-mode surveys, the differences are sufficiently small that they do not constitute a major barrier to combining mail and Web data collection in the same mixed-mode survey.