Survey Practice
Vol. 5, Issue 2 · March 31, 2012 EDT

Item Nonresponse in a Client Survey of the General Public

Glenn Israel and Alexa Lamm
https://doi.org/10.29115/SP-2012-0010


The Cooperative Extension Service (CES) provides non-formal education programs and information outreach to millions of Americans throughout the United States each year. In Florida, CES provides an array of programs on topics such as landscape maintenance and family financial management, which reach a cross-section of the state’s adult population. Each year a client survey is conducted to assess service quality.

Methodologically, quasi-general public surveys of this nature are of increasing interest to surveyors. Because of the pre-existing client relationship, they offer an opportunity to obtain and use e-mail addresses from potential respondents, something that is unavailable in household surveys. Between 2008 and 2010, we attempted to collect e-mail contact information from clients and tested ways of using that information to improve survey response rates and data quality by offering a Web response option.

Studies were conducted testing various means of obtaining responses by Web and mail. As shown in Table 1, we tested a variety of treatment groups: mail-only (all three years); a choice of mail vs. Web (2008); Web preference, in which postal contacts provided a URL and asked sample members to respond over the Web (2008 and 2009); e-mail preference, in which contacts were made by mail and by e-mail to those providing an address (2009 and 2010); and e-mail contact only (2010). Response rates for these treatment groups ranged from 46.0 percent to 64.5 percent, as shown in Table 1.

Table 1  Sample Sizes, Unit Response Rates, and Item Nonresponse Rates by Design Treatment.

| Treatment | Sample sizeᵃ | Responses: Mail | Responses: Web | Responses: Total | Response rateᵇ (%): Mail | Response rateᵇ (%): Web | Response rateᵇ (%): Total |
|---|---|---|---|---|---|---|---|
| **Study 1 (2008)** | | | | | | | |
| 1. Mail only | 437 | 282 | – | 282 | 64.5 | – | 64.5 |
| 2. Mail/Web choice | 436 | 224 | 34 | 258 | 51.4 | 7.8 | 59.2 |
| 3. Web preference | 445 | 104 | 130 | 234 | 23.4 | 29.2 | 52.6 |
| Total | 1318 | 610 | 164 | 774 | 46.3 | 12.4 | 58.7 |
| **Study 2 (2009)** | | | | | | | |
| *Clients providing both postal and e-mail addresses* | | | | | | | |
| 1. Mail only | 151 | 79 | 1ᶜ | 80 | 52.3 | 0.7 | 53.0 |
| 2. Web preference | 137 | 17 | 49 | 66 | 12.4 | 35.8 | 48.2 |
| 3. E-mail preference | 104 | 8 | 58 | 66 | 7.7 | 55.8 | 63.5 |
| *Clients providing only postal address* | | | | | | | |
| 4. Mail only | 538 | 303 | – | 303 | 56.3 | – | 56.3 |
| 5. Web preference | 524 | 150 | 112 | 262 | 28.6 | 21.4 | 50.0 |
| Total | 1454 | 557 | 220 | 777 | 38.3 | 15.1 | 53.4 |
| **Study 3 (2010)** | | | | | | | |
| *Clients providing both postal and e-mail addresses* | | | | | | | |
| 1. Mail only | 344 | 202 | – | 202 | 58.7 | – | 58.7 |
| 2. E-mail preference | 357 | 82 | 130 | 212 | 23.0 | 36.4 | 59.4 |
| 3. E-mail augmentation | 356 | 182 | 41 | 223 | 51.1 | 11.5 | 62.6 |
| 4. E-mail only | 310 | – | 149 | 149 | – | 48.1 | 48.1 |
| *Clients providing only postal or e-mail address* | | | | | | | |
| 5. Mail only | 539 | 318 | – | 318 | 59.0 | – | 59.0 |
| 6. E-mail only | 228 | – | 105 | 105 | – | 46.0 | 46.0 |
| Total | 2126 | 784 | 425 | 1209 | 36.9 | 20.0 | 56.9 |

ᵃ Undeliverable and ineligible cases were subtracted from the initial sample size.

ᵇ Response rates were calculated as (total complete and partial responses / sample size) × 100.

ᶜ One client requested the Web mode.
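
As a concrete check on footnote b, the short sketch below recomputes the Study 1 response rates from the counts in Table 1; the function and dictionary names are ours, not from the original analysis.

```python
# Minimal sketch of the response-rate formula in footnote b of Table 1:
# rate = (total complete and partial responses / eligible sample size) * 100.
# Treatment names and counts are copied from Study 1 (2008) above.

def response_rate(responses: int, sample_size: int) -> float:
    """Percent of the eligible sample returning a complete or partial response."""
    return 100.0 * responses / sample_size

study1 = {
    "Mail only": (282, 437),
    "Mail/Web choice": (258, 436),
    "Web preference": (234, 445),
}

for treatment, (responses, n) in study1.items():
    print(f"{treatment}: {response_rate(responses, n):.1f}%")
# Mail only: 64.5%, Mail/Web choice: 59.2%, Web preference: 52.6%
```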

Among the major findings of these experiments is that when e-mail addresses are available, a majority of responses can be obtained over the Web (Israel 2011). These studies also confirm that offering a “choice” of response modes is ineffective in getting people to respond over the Web. In addition, withholding a mail option until the final postal contact can push a majority of respondents to the Web. However, the most effective strategy for obtaining Web responses was to combine postal contacts with quick follow-up e-mails providing electronic links, a strategy known as “e-mail augmentation” (Millar and Dillman 2011).

In this analysis, we compare item nonresponse rates for each of the three years in which these mixed-mode experiments were conducted to assess whether item nonresponse in Web vs. mail questionnaires has a differential effect on the quality of data obtained in this survey. Because our attempts to find more effective ways of encouraging responses over the Internet resulted in the use of a variety of treatment groups each year, we have pooled all Web questionnaires vs. all mail questionnaires across treatment groups for each year. We also report item nonresponse by the type of question structure in each questionnaire.

Methods

We used a unified mode design in constructing the mail and Web instruments (Dillman, Smyth, and Christian 2009). This included using the same questions and question order, as well as minimizing visual design differences. The questionnaire had 21 items, including screening and follow-up questions. The instrument contained four rating items in a grid, five open-ended items, four screened items, five Yes/No items, and seven demographic items.

Determination of item nonresponse was based on whether any type of response, including a non-substantive one, was provided. Items that a respondent was directed to skip because of his or her answer to a branching question were not counted as nonresponse. Partial completes were retained in the analysis.
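
As an illustration of this coding rule, the sketch below flags item nonresponse for one respondent while excluding legitimately skipped items; the item names and skip map are hypothetical, not taken from the actual CES instrument.

```python
# Hypothetical sketch of the item nonresponse rule described above.
# Item names and the skip map are illustrative placeholders.

# Branching question -> (answer that triggers the skip, items then skipped)
SKIP_MAP = {
    "used_service": ("No", {"service_rating", "service_comments"}),
}

ALL_ITEMS = ["used_service", "service_rating", "service_comments", "age"]

def item_nonresponse_flags(answers: dict) -> dict:
    """Return {item: is_nonresponse} for the items this respondent should answer.

    Any response, even a non-substantive one (e.g., "Don't know"), counts as
    answered. Items the respondent was directed to skip by a branching
    question are excluded rather than counted as nonresponse.
    """
    skipped = set()
    for branch, (skip_answer, items) in SKIP_MAP.items():
        if answers.get(branch) == skip_answer:
            skipped |= items
    return {item: answers.get(item) in (None, "")
            for item in ALL_ITEMS if item not in skipped}

# A respondent who legitimately skipped the branch and left "age" blank:
flags = item_nonresponse_flags({"used_service": "No", "age": ""})
print(flags)  # {'used_service': False, 'age': True}
```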

Results

The item nonresponse rates were slightly higher for questionnaires returned via postal mail than for questionnaires answered on the Web (Table 2). These differences were statistically significant in studies 1 and 3, as shown by the p-values for the statistical tests.

Table 2  Number of Responses and Mean Item Nonresponse Rate by Mode and Study.

| Study | Responses: Mail | Responses: Web | Item nonresponse (%): Mail | Item nonresponse (%): Web | P-value |
|---|---|---|---|---|---|
| Study 1 (2008) | 610 | 164 | 7.3 | 5.1 | .006 |
| Study 2 (2009) | 557 | 220 | 7.1 | 6.3 | .306 |
| Study 3 (2010) | 784 | 425 | 7.3 | 5.5 | .000 |
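
The article does not report which statistical test generated the p-values above; the sketch below assumes a Welch two-sample t-test on per-respondent item nonresponse rates, with simulated placeholder data standing in for the actual responses.

```python
# Sketch of one plausible mode comparison behind Table 2. The test actually
# used is not stated in the article; we assume a Welch two-sample t-test on
# each respondent's share of applicable items left blank. Data are simulated
# placeholders, sized like Study 1 (610 mail returns, 164 Web returns).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mail_rates = rng.beta(1, 12, size=610)  # placeholder per-respondent rates
web_rates = rng.beta(1, 18, size=164)

t_stat, p_value = stats.ttest_ind(mail_rates, web_rates, equal_var=False)
print(f"mail mean = {mail_rates.mean():.3f}, Web mean = {web_rates.mean():.3f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.4f}")
```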

Given the mode differences in item nonresponse seen in Table 2, we examined whether specific types of questions resulted in higher item nonresponse and whether mail and Web questionnaires differed by question type. Open-ended questions had the highest item nonresponse rate for both mail and Web modes (Table 3). However, the rate for the mail mode was consistently higher in all three studies, ranging from 18.9 percent to 19.8 percent for mail versus 13.2 percent to 15.9 percent for the Web mode.

Table 3  Item Nonresponse Rate by Mode, Question Type, and Study.

| Question type | Mail (%) | Web (%) | P-value |
|---|---|---|---|
| **Study 1 (2008)** | | | |
| Open-ended items | 18.9 | 13.2 | .000 |
| Screened items | 13.8 | 9.9 | .009 |
| Demographic items | 4.9 | 2.4 | .000 |
| Grid items | 1.9 | 2.9 | .080 |
| Yes/No items | 3.2 | 3.0 | .773 |
| **Study 2 (2009)** | | | |
| Open-ended items | 19.4 | 14.0 | .000 |
| Screened items | 10.8 | 9.6 | .333 |
| Demographic items | 3.6 | 2.7 | .104 |
| Grid items | 2.0 | 4.6 | .000 |
| Yes/No items | 3.5 | 4.1 | .366 |
| **Study 3 (2010)** | | | |
| Open-ended items | 19.8 | 15.9 | .000 |
| Screened items | 10.1 | 10.4 | .720 |
| Demographic items | 4.1 | 2.6 | .000 |
| Grid items | 2.4 | 2.2 | .660 |
| Yes/No items | 3.2 | 1.4 | .000 |

Likewise, screened items, which follow branching questions, had moderately high item nonresponse, ranging from 9.9 percent to 13.8 percent. Although the rates were significantly higher for mail respondents than for Web respondents in study 1, the item nonresponse rates were statistically equivalent in studies 2 and 3. One reason for the moderately high item nonresponse was that an open-ended screened item had nonresponse ranging from 19.6 to 31.3 percent; the other three items (all closed-ended) had rates of 0.6 to 11.8 percent.

Demographic items, which were clustered at the end of the survey, showed lower item nonresponse rates than open-ended and screened questions. Although the item nonresponse rates for demographic questions were under five percent in all studies and modes, the rate for mail was statistically higher than for Web in studies 1 and 3. Study 2 followed the same pattern, but the differences were not statistically significant.

The grid items had item nonresponse rates ranging from two to five percent. There was a tendency toward higher item nonresponse for the Web mode than for the mail mode in studies 1 and 2, and this was statistically significant in study 2 (Table 3). Because the grid items were at the beginning of the instrument, it appeared that some Web respondents skipped over the set while completing the remainder of the questionnaire. We speculate that they may have looked ahead to other questions and failed to go back. While the two-page paper version of the survey would entail only flipping the page over and back, looking ahead on the Web would involve clicking through one or more screens and then using the back button to return to the initial screen. The latter offers more opportunities to stop short and might have resulted in the higher item nonresponse rate.

Finally, the item nonresponse rates for yes/no items were similar to those for the demographic and grid items. There was no difference between mail and Web in studies 1 and 2. In study 3, however, the item nonresponse rate was statistically lower for the Web instrument.

Discussion

In each of these annually conducted studies, Web questionnaires obtained lower item nonresponse rates. The advantage for Web responses was seen primarily in open-ended items and, to a lesser extent, in the demographic items. However, total item nonresponse rates were of approximately the same magnitude for mail and Web across the three studies, with differences of only one to two points between modes.

Given that mail-only designs have generally shown higher response rates than mixed-mode designs for address-based samples, survey professionals will need to consider the trade-offs of their design decisions. These trade-offs involve response rates and item nonresponse, as well as other aspects of data quality and total survey cost. However, the evidence seems to support the view that for surveys using open-ended questions (in order to get more detailed information in the respondents’ own words), designers might want to employ methods effective at obtaining Web responses in mixed-mode surveys.

References

Dillman, D.A., J.D. Smyth, and L.M. Christian. 2009. Internet, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 3rd ed. Hoboken, NJ: John Wiley & Sons, Inc.

Israel, G.D. 2011. “Strategies for Obtaining Survey Responses from Extension Clients: Exploring the Role of e-Mail Requests.” Journal of Extension 49 (3). https://joe.org/joe/2011june/a7.php.

Millar, M.M., and D.A. Dillman. 2011. “Improving Response to Web and Mixed-Mode Surveys.” Public Opinion Quarterly 75 (2): 249–69.
