The Cooperative Extension Service (CES) provides non-formal education programs and information outreach throughout the United States to millions of Americans each year. In Florida, CES provides an array of programs on topics such as landscape maintenance and family financial management, which reach a cross-section of the state’s adult population. Each year a client survey is conducted to assess service quality.
Methodologically, quasi-general public surveys of this nature are of increasing interest to surveyors. Because of the pre-existing client relationship, they provide an opportunity, unavailable in household surveys, to obtain and utilize e-mail addresses from potential respondents. Between 2008 and 2010, attempts were made to collect e-mail contact information from clients and to test ways of using that information to improve survey response rates and data quality by offering a Web response option.
Studies were conducted testing various means of obtaining responses by Web and mail. As shown in Table 1, we fielded a variety of treatment groups: mail-only (all three years); a choice of mail vs. Web (2008); Web preference, in which mail contacts provided a URL and asked recipients to respond over the Web (2008 and 2009); e-mail preference, in which contacts were made by mail and by e-mail to those who had provided an e-mail address (2009 and 2010); and e-mail contact only (2010). Response rates for these treatment groups ranged from 46.0 percent to 64.5 percent (Table 1).
Among the major findings of these experiments is that when e-mail addresses are available, a majority of responses can be obtained over the Web (Israel 2011). These studies also confirm that offering a “choice” of response modes is ineffective in getting people to respond over the Web. In addition, withholding a mail option until the final postal contact can push a majority of respondents to the Web. However, the most effective strategy for obtaining Web responses was to combine postal contacts with quick follow-up e-mails providing electronic links, a strategy known as “e-mail augmentation” (Millar and Dillman 2011).
In this analysis, we compare item nonresponse rates for each of the three years in which these mixed-mode experiments were conducted to assess whether responding by Web versus mail has a differential effect on item nonresponse, and thus on the quality of data obtained in this survey. Because our attempts to find more effective ways of encouraging responses over the Internet resulted in a different mix of treatment groups each year, we pooled all Web questionnaires and all mail questionnaires across treatment groups within each year. We also report item nonresponse by the type of question structure in each questionnaire.
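To make the pooling step concrete, the sketch below shows one way the case-level data might be aggregated. The column names, values, and use of pandas are our illustration only, not the actual analysis files or software used in these studies.

```python
import pandas as pd

# Hypothetical case-level records: one row per returned questionnaire.
# The response mode is recorded for every case regardless of treatment group.
cases = pd.DataFrame({
    "year":          [2008, 2008, 2009, 2010],
    "treatment":     ["choice", "web_preference", "email_preference", "email_only"],
    "mode":          ["mail", "web", "web", "web"],
    "items_missing": [2, 0, 1, 0],
    "items_asked":   [21, 21, 21, 21],
})

# Pool across treatment groups within each year, keeping the mail/Web split.
pooled = cases.groupby(["year", "mode"])[["items_missing", "items_asked"]].sum()
pooled["item_nonresponse_rate"] = pooled["items_missing"] / pooled["items_asked"]
print(pooled)
```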
Methods
We used a unified mode design in constructing the mail and Web instruments (Dillman, Smyth, and Christian 2009). This included using the same questions and question order, as well as minimizing visual design differences. The questionnaire had 21 items, including screening and follow-up questions. The instrument contained four rating items in a grid, five open-ended items, four screened items, five yes/no items, and seven demographic items (these categories overlap; for example, one of the screened items was open-ended).
Determination of item nonresponse was based on whether any type of response, including a non-substantive one, was provided. Items that a respondent was directed to skip because of an answer to a branching question were not counted as nonresponses. Partial completes were retained for the analysis.
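As a concrete rendering of this coding rule, the sketch below computes an item nonresponse rate under these conventions. The data layout, item names, and skip map are hypothetical; only the rule itself (counting non-substantive answers as responses and excluding legitimately skipped items) follows the description above.

```python
# Minimal sketch of the item nonresponse coding rule described above.
# Hypothetical layout: each questionnaire is a dict mapping item names
# to answers; None marks a missing answer. A non-substantive answer
# (e.g., "don't know") still counts as a response.

# Hypothetical skip map: if the branching item received the listed
# answer, the dependent items were legitimately skipped.
SKIP_RULES = {
    ("used_service", "No"): ["service_rating", "service_comments"],
}

def eligible_items(questionnaire, all_items):
    """Return the items this respondent was actually asked, given branching."""
    skipped = set()
    for (branch_item, branch_answer), dependents in SKIP_RULES.items():
        if questionnaire.get(branch_item) == branch_answer:
            skipped.update(dependents)
    return [item for item in all_items if item not in skipped]

def item_nonresponse_rate(questionnaires, all_items):
    """Share of eligible items left unanswered, pooled over all cases.
    Partial completes are included, per the rule described above."""
    asked = answered = 0
    for q in questionnaires:
        for item in eligible_items(q, all_items):
            asked += 1
            if q.get(item) is not None:  # any answer counts, even non-substantive
                answered += 1
    return 1 - answered / asked if asked else 0.0
```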
Results
Item nonresponse rates were slightly higher for questionnaires returned via postal mail than for questionnaires answered on the Web (Table 2). These differences were statistically significant in studies 1 and 3, as indicated by the probability values of the statistical tests.
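The tests themselves are reported in Table 2 and are not spelled out here; a chi-square test on a two-by-two table of answered versus unanswered items by mode is one plausible reconstruction. The counts in the sketch below are invented purely for illustration.

```python
from scipy.stats import chi2_contingency

# Invented pooled counts of answered vs. unanswered items by mode;
# the real counts would come from the pooled mail and Web cases in a study.
#                answered  unanswered
mail_counts = [9200, 800]
web_counts  = [4700, 300]

chi2, p_value, dof, expected = chi2_contingency([mail_counts, web_counts])
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
```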
Given the mode differences in item nonresponse seen in Table 2, we examined whether specific types of questions resulted in higher item nonresponse and whether mail and Web questionnaires differed by question type. Open-ended questions had the highest item nonresponse rate for both mail and Web modes (Table 3). However, the rate for the mail mode was consistently higher in all three studies, ranging from 18.9 percent to 19.8 percent for mail versus 13.2 percent to 15.9 percent for the Web mode.
Screened items, which follow branching questions, also had moderately high item nonresponse, ranging from 9.9 percent to 13.8 percent. Although the rates were significantly higher for mail respondents than for Web respondents in study 1, they were statistically equivalent in studies 2 and 3. One reason for the moderately high item nonresponse was that an open-ended screened item had nonresponse ranging from 19.6 to 31.3 percent; the other three screened items (all close-ended) had rates of 0.6 to 11.8 percent.
Demographic items, which were clustered at the end of the survey, showed lower item nonresponse rates than the open-ended and screened questions. Although item nonresponse for the demographic questions was below five percent in all studies and modes, the rate for mail was significantly higher than for the Web in studies 1 and 3. Study 2 followed the same pattern, but the differences were not statistically significant.
The grid items had item nonresponse rates ranging from two to five percent. In studies 1 and 2, item nonresponse tended to be higher for the Web mode than for the mail mode, and the difference was statistically significant in study 1. Because the grid items were at the beginning of the instrument, it appeared that some Web respondents skipped over the set while completing the remainder of the questionnaire. We speculate that they may have looked ahead to other questions and failed to go back. Whereas looking ahead in the two-page paper version would entail flipping the sheet over and back, looking ahead on the Web would involve clicking through one or more screens and then using the back button to return to the initial screen. The latter offers more opportunities to stop short and might have resulted in the higher item nonresponse rate.
Finally, the item nonresponse rates for yes/no items were similar to those for the demographic and grid items. There was no difference between mail and Web in studies 1 and 2. In study 3, however, the item nonresponse rate was significantly lower for the Web instrument.
Discussion
In each of these annually conducted studies, Web questionnaires obtained lower item nonresponse rates than mail questionnaires. The Web advantage was seen primarily in the open-ended items and, to a lesser extent, in the demographic items. Overall, however, total item nonresponse rates were of approximately the same magnitude for mail and Web across the three studies, with differences of only one to two percentage points between modes.
Given that mail-only treatment groups have generally shown higher response rates than Web options in mixed-mode experiments with address-based samples, survey professionals will need to consider the trade-offs of their design decisions. These trade-offs involve response rates and item nonresponse, as well as other aspects of data quality and total survey cost. However, the evidence seems to support the view that for surveys using open-ended questions (in order to get more detailed information in respondents' own words), designers might want to employ methods effective at obtaining Web responses in mixed-mode surveys.