Survey Practice · Vol. 13, Issue 1, 2020 · November 18, 2020 EDT

The Effect of Incentives and Mode of Contact on the Recruitment of Teachers into Survey Panels

Michael Robbins, Jennifer Hawes-Dawson
Keywords: survey panels, non-response rates, cost effectiveness, non-response bias, educator surveys
https://doi.org/10.29115/SP-2020-0013
Robbins, Michael, and Jennifer Hawes-Dawson. 2020. “The Effect of Incentives and Mode of Contact on the Recruitment of Teachers into Survey Panels.” Survey Practice 13 (1). https://doi.org/10.29115/SP-2020-0013.

Abstract

This article discusses an experiment run at the beginning of an effort to substantially enlarge the RAND American Teacher Panel, a nationally representative sample of K-12 public school teachers. Ten strategies were evaluated. We considered different modes of contact (FedEx vs. US Postal Service [USPS]), contingency status of incentives (pre- vs. promised incentive), amounts of incentive ($2, $10, $40, or $60), and types of incentive (cash, check, gift card, electronic). Strategies were compared in terms of response rate and cost effectiveness. Our study yielded several findings that should advance the literature: The use of FedEx clearly outperforms USPS, and the use of a moderate preincentive ($10) outperforms much larger promised incentives (up to $60) with respect to both responsiveness and cost-effectiveness. In addition, cash and check seem preferable to gift cards. Finally, we assess the potential for nonresponse bias by comparing enrollees to nonresponders across a variety of demographic characteristics for each strategy.

Introduction

Survey research has evolved dramatically over the past few decades with the onset of the Internet age. Technological advancements have made large populations easily accessible through a variety of means. However, perhaps as a consequence, response rates have plummeted, which has prompted increased scrutiny of the methods that may be employed to improve responsiveness among those surveyed, since low response rates may jeopardize the generalizability of survey findings (Edwards et al. 2002; Singer 2002; Singer and Ye 2013). Nonetheless, high rates of nonresponse by no means guarantee substantial nonresponse bias, and low rates of nonresponse do not imply a lack of bias (R. M. Groves and Peytcheva 2008; Hedlin 2020; Leslie 1972).

All facets of the survey approach may affect response rates. Particular focus within the current literature is given to incentives (e.g., honoraria), which may be noncontingent (i.e., a preincentive) or contingent upon participation in the survey (i.e., a promised incentive). Research has conclusively shown that the use of incentives systematically yields higher rates of participation than the absence of incentives; however, most studies show only a marginal effect of the amount of the incentive (DeCamp and Manierre 2016; Godwin 1979; Hsu et al. 2017; James and Bolstein 1992; Jobber, Saunders, and Mitchell 2004). Furthermore, most researchers observe that noncontingent incentives yield higher response rates than contingent incentives (Church 1993; Edwards et al. 2002; Göritz 2006; Hsu et al. 2017; Martin, Abreu, and Winters 2001; Robbins et al. 2018); however, it is often assumed that noncontingent strategies will be less cost-effective since the incentive is sent to a multitude of nonrespondents. Studies have exhaustively searched for an optimal amount of preincentive (James and Bolstein 1990; Mizes, Fleece, and Roos 1984; Trussell and Lavrakas 2004), although findings can vary markedly across settings. Furthermore, researchers have struggled to illustrate circumstances where preincentives are more cost-effective (Cosgrove 2018; Dykema et al. 2015; Newby, Watson, and Woodliff 2003). Mercer et al. (2015) use regression modeling within a meta-analysis to claim that preincentives have the largest per-dollar impact on responsiveness among a variety of facets (including the use of promised incentives).

In addition, Singer and Ye (2013) outline several studies that failed to show that incentives affect data quality. Much research has shown that the use of preincentives will, if anything, reduce bias from nonresponse (Adua and Sharp 2010; Felderer et al. 2018; Robert M. Groves et al. 2006; Petrolia and Bhattacharjee 2009); however, Parsons and Manierre (2014) note a circumstance where preincentives exacerbate nonresponse bias among a random sample of college students.

Few studies have compared small preincentives to larger promised incentives to determine whether an optimal, cost-effective strategy can be obtained by increasing the amount of the promised incentive. Researchers have considered combining small preincentives with larger promised incentives, but the findings are mixed (Dykema et al. 2011). Furthermore, the type of incentive (e.g., cash, personal check, gift card, lottery prize) may have a marked effect on response rates. Studies have shown that cash outperforms gift cards (Birnholtz et al. 2004; Brown et al. 2016), whereas lotteries are shown to perform relatively poorly (Warriner et al. 1996).

A wealth of literature has studied the effect of modes including in-person contact, phone contact, postal mailing, and emailing on response rates (Biemer et al. 2017; Dillman et al. 2009; Kaplowitz, Hadlock, and Levine 2004; Porter and Whitcomb 2003; Schaefer and Dillman 1998); in-person and phone contact are typically shown to perform best. However, more nuanced characteristics, such as the delivery service, may have a pronounced effect on responsiveness as well. For example, Kasprzyk et al. (2001), in the context of a survey of physicians, show that sending the survey via FedEx yields higher response rates than the US Postal Service (USPS). However, this finding was contradicted by Doody et al. (2003), who failed to find an improvement from the use of FedEx over USPS in a survey of radiologic technologists. In addition, attention has been given to the manner in which letters are addressed, primarily relating to the degree of personalization (Dykema et al. 2019). Other factors considered within the extant literature include questionnaire design (Dillman, Sinclair, and Clark 1993), confidentiality statements (Dillman et al. 1996), advance letters (Mann 2005), the number and type of nonresponse follow-up contacts (James and Bolstein 1990; Rada 2005), and completion deadlines (Roberts, McCrory, and Forthofer 1978).

Survey panels, wherein a sample from a population is recruited for participation so that they may be administered surveys at intermittent times, are becoming an increasingly popular manner of assessing public opinion (Bethlehem and Biffignandi 2011; Blom, Gathmann, and Krieger 2015; Callegaro et al. 2014; Cornesse et al. 2020; Stanley et al. 2020; Toepoel 2017; Yan, Kalla, and Broockman 2018). Although research regarding optimal methods for recruitment of individuals into survey panels is more sparse than analogous work for cross-sectional surveys, in general most of the previous findings appear to transfer to panels (e.g., Gritz 2004; Jäckle and Lynn 2008; Scherpenzeel and Toepoel 2012; Yu et al. 2017).

Assessing the opinion of educators in particular is of great interest to researchers and policymakers. Studies that evaluate the responsiveness of educators to surveys are comparatively rare (examples include Dykema et al. 2013; Coopersmith et al. 2016; Fraze et al. 2003; Jacob and Jacob 2012; Mertler 2002; Robbins et al. 2018); however, indications are that educators (principals, in particular) are less responsive than the general public. Therefore, extra care should be afforded to the task of improving response rates in surveys of educators, who are busy and therefore challenging to recruit through their schools.

RAND Corporation’s American Teacher Panel (ATP) was established to give researchers and policymakers an efficient tool by which the opinions and perspectives of teachers can be assessed in a robust manner. Here, we compare the efficacy of ten different recruitment strategies via an experiment that was performed in advance of a large-scale effort to expand the teacher panel during the 2016–2017 school year.

The recruitment experiment was designed to address the following four research questions:

(1) Are modest preincentives more effective than substantially larger promised incentives when recruiting teachers into survey panels?

(2) What mode of mailing (e.g., FedEx, USPS) is most effective for recruiting teachers into survey panels?

(3) What format of incentive (e.g., cash, check, electronic) is most effective for both pre- and promised incentives when recruiting teachers into survey panels?

(4) Can pre- and promised incentives be effectively used in tandem when recruiting teachers into survey panels?

The most effective strategy for recruitment is considered optimal with respect to (1) response rates (i.e., what portion of contacted teachers enroll in the panel); (2) cost effectiveness (i.e., the amount spent on recruitment activities per enrolled teacher); and (3) nonresponse bias (i.e., are there quantifiable differences between those who enroll and those who do not?).

Methods

The RAND American Teacher Panel is a standing survey panel of U.S. public school teachers. It is advertised as a “unique resource for obtaining [accurate responses on key issues from teachers] and measuring the evolving knowledge, attitudes, practices, and work conditions of educators nationwide.”[1] To facilitate administration of the Measure to Learn and Improve (MLI) surveys, the panel was slated to undergo a substantial expansion effort.[2]

Earlier recruitment efforts for the ATP involved scientific comparison of a limited number of strategies for recruitment via experimentation; these are outlined in Robbins et al. (2018).[3]

Despite the findings of Robbins et al. (2018), it was prudent to conduct an experiment to compare additional strategies given the size of the 2016–2017 recruitment effort and the desire to maximize response rates. Due to practical constraints, the experiment was administered within the first wave released on October 20, 2016. Ten separate strategies were considered within this experiment; these strategies were designed to address the research questions listed previously. Certain aspects did overlap in the strategies.[4]

The ten strategies, along with the scientific rationale for including each, are described below. Table 1 outlines the strategies, listing the types of incentives included and whether the recruitment package was sent via FedEx Next Day Delivery or via USPS in a regular first-class envelope. The first strategy (the “standard” strategy) represents the recruitment strategy that had proven optimal on the basis of the findings from Robbins et al. (2018). Preincentives were included in the recruitment package; promised incentives were sent upon enrollment in the panel.[5] All gift cards were pre-paid. Note that cash cannot be sent via FedEx; therefore, all strategies involving cash used USPS as the mode of contact. Strategies 2–10 had not yet been tested with teachers; these were each administered to a group of 250 randomly selected teachers (a sketch of such an assignment follows Table 1). Since the standard strategy was known to perform acceptably, it was administered to the remaining teachers within the first wave of recruitment (n = 1,463).

Table 1. An outline of strategies implemented in the teacher recruitment experiment.

| # | Brief description | Preincentive | Promised incentive | Mode of initial contact^f |
|---|---|---|---|---|
| 1 | Standard | $10 Target gift card | none | FedEx |
| 2 | USPS standard | $10 Target gift card | none | USPS |
| 3 | Cash pre | $10 cash^d | none | USPS |
| 4 | $40 Target promised | none | $40 Target gift card | FedEx |
| 5 | $60 Target promised | none | $60 Target gift card | FedEx |
| 6 | Check promised | none | $40 check^e | FedEx |
| 7 | Electronic promised^a | none | $40 electronic gift card | FedEx |
| 8 | Combination | $2 cash^d | $40 Target gift card | USPS |
| 9 | ATP report^b | $10 Target gift card | none | FedEx |
| 10 | Email-less^c | $10 Target gift card | none | FedEx |

a. This gift card is an Amazon gift code, as we do not send Target gift cards electronically.
b. ATP report arm: A hard copy of a 10-page research report based on ATP survey data (Kaufman et al. 2016) is included in the recruitment package. Empirical and anecdotal evidence suggests that illustrating the validity of the panel is pivotal for the responsiveness of new recruits.
c. Email-less arm: We do not follow up with nonrespondents via email. Nonresponders are sent two reminders; the first is sent by USPS, and the second is sent by FedEx.
d. Cash cannot be sent via FedEx.
e. The check is made out in the respondent’s name.
f. For experimental arms 1–9, nonresponders were sent up to six reminder emails to encourage enrollment.
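As a rough illustration of the random assignment described above (our sketch, not the authors' procedure; the teacher identifiers and wave size are placeholders), the following Python snippet assigns 250 teachers to each of Strategies 2 through 10 and the remainder to the standard strategy:

```python
# Illustrative sketch of the random assignment described in the text:
# 250 randomly selected teachers per experimental arm (Strategies 2-10),
# with all remaining teachers in the wave receiving the standard strategy (Strategy 1).
import random

def assign_strategies(teacher_ids, n_per_arm=250, experimental_arms=range(2, 11), seed=2016):
    """Return a dict mapping teacher id -> strategy number (1-10)."""
    rng = random.Random(seed)           # fixed seed only for reproducibility of the example
    ids = list(teacher_ids)
    rng.shuffle(ids)
    assignment = {}
    cursor = 0
    for arm in experimental_arms:       # Strategies 2-10 each get n_per_arm teachers
        for tid in ids[cursor:cursor + n_per_arm]:
            assignment[tid] = arm
        cursor += n_per_arm
    for tid in ids[cursor:]:            # everyone else receives the standard strategy
        assignment[tid] = 1
    return assignment

# Hypothetical first wave: 9 x 250 experimental teachers plus 1,463 standard-strategy teachers.
wave = [f"teacher_{i}" for i in range(9 * 250 + 1463)]
assignment = assign_strategies(wave)
```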

Relative Cost

We also wished to perform a cost-benefit analysis comparing the cost per recruited teacher under each recruitment strategy. Table 2 gives a breakdown of the sources of cost for each strategy.[7] All costs (with the exception of costs related to promised incentives) are listed as dollar amounts incurred for each teacher contacted; for promised incentives, costs are incurred on the basis of each teacher who agrees to participate in the panel (i.e., per recruit). Our analyses excluded some other costs.[8]

Table 2. The costs associated with the various activities used within the recruitment strategies. Fixed, mailing, follow-up, ATP report, and preincentive costs are incurred per contacted educator; promised incentive and incentive mailing costs are incurred per enrollee.

| # | Brief description | Fixed^a | Mailing | Follow-up | ATP report | Preincentive | Promised incentive | Incentive mailing |
|---|---|---|---|---|---|---|---|---|
| 1 | Standard | $3.53 | $4.50 | $0.35 | $0.00 | $10.00 | $0.00 | $0.00 |
| 2 | USPS standard | $3.53 | $1.32 | $0.35 | $0.00 | $10.00 | $0.00 | $0.00 |
| 3 | Cash pre | $3.53 | $1.32 | $0.35 | $0.00 | $10.00 | $0.00 | $0.00 |
| 4 | $40 Target promised | $3.53 | $4.50 | $0.35 | $0.00 | $0.00 | $40.00 | $2.09 |
| 5 | $60 Target promised | $3.53 | $4.50 | $0.35 | $0.00 | $0.00 | $60.00 | $2.09 |
| 6 | Check promised | $3.53 | $4.50 | $0.35 | $0.00 | $0.00 | $40.00 | $2.09 |
| 7 | Electronic promised | $3.53 | $4.50 | $0.35 | $0.00 | $0.00 | $40.00 | $0.00 |
| 8 | Combination | $3.53 | $1.32 | $0.35 | $0.00 | $2.00 | $40.00 | $2.09 |
| 9 | ATP report | $3.53 | $4.50 | $0.35 | $3.00 | $10.00 | $0.00 | $0.00 |
| 10 | Email-less | $3.53 | $4.50 | $6.42 | $0.00 | $10.00 | $0.00 | $0.00 |

a. Fixed costs include $0.78 for a brochure, $0.55 for printing, $1.75 for labor, and $0.45 for purchase of contact information.
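To make the cost accounting concrete under our reading of Table 2, Strategy 1 incurs \(A = \$3.53 + \$4.50 + \$0.35 + \$10.00 = \$18.38\) per contacted teacher and \(B = \$0.00\) per enrollee. Dividing the per-contact cost by the response rate reported in the Results gives roughly \(\$18.38/0.275 \approx \$66.8\) per recruited panelist, consistent (up to rounding of the response rate) with the \$66.95 listed in Table 3.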

Finally, we note that although the present study focuses on panel enrollment, similar analyses that involve participation in surveys administered to the panel yield analogous findings—these are omitted here for brevity given the breadth of surveys that have been administered.

Results

The strategies considered, along with the results of the experiment in terms of estimated response rates and costs per recruited panelist,[9] are illustrated in Table 3. The experimental strategies differ from the standard strategy only in the ways described in the table and its footnotes. Our results yield a variety of takeaways that should prove informative to survey methodologists across many fields. Our key finding is that no strategy outperformed the standard strategy. Comparisons among the remaining strategies potentially yield compelling results; however, these are often not statistically significant, perhaps because the comparisons are underpowered. These takeaways indicate potential (untested) strategies that may outperform our preferred one. We discuss the results in detail with respect to each of our four primary research questions. Table 3 gives p-values for comparisons of the standard strategy (Strategy 1) to each of the other strategies; p-values for other comparisons are provided in the discussion below.

Table 3. Results for recruitment of teachers.

| # | Brief description | n | Response rate | Std. error | p-value^a | Cost per recruited panelist | Std. error | p-value^a |
|---|---|---|---|---|---|---|---|---|
| 1 | Standard | 1,213 | 27.5% | 1.3% | --- | $66.95 | $3.13 | --- |
| 2 | USPS standard | 250 | 18.0% | 2.4% | 0.002 | $84.44 | $11.40 | 0.139 |
| 3 | Cash pre | 250 | 23.2% | 2.7% | 0.167 | $65.52 | $7.54 | 0.860 |
| 4 | $40 Target promised | 250 | 16.0% | 2.3% | 0.000 | $94.47 | $7.59 | 0.001 |
| 5 | $60 Target promised | 250 | 19.2% | 2.5% | 0.007 | $105.74 | $5.66 | 0.000 |
| 6 | Check promised | 250 | 20.0% | 2.5% | 0.015 | $83.99 | $5.30 | 0.006 |
| 7 | Electronic promised | 250 | 22.0% | 2.6% | 0.075 | $78.09 | $4.54 | 0.043 |
| 8 | Combination | 250 | 19.6% | 2.5% | 0.010 | $78.82 | $4.71 | 0.036 |
| 9 | ATP report | 250 | 26.8% | 2.8% | 0.833 | $79.78 | $8.34 | 0.150 |
| 10 | Email-less | 250 | 24.0% | 2.7% | 0.262 | $101.88 | $11.47 | 0.003 |
a. p-values provide comparisons to Strategy 1.
b. Standard errors and p-values for raw recruitment rates (noncumulative) are calculated using the normal approximation to the binomial. Standard errors for cost per recruit are calculated using the delta method. To elaborate, suppose that the raw recruitment rate for a strategy is \(\widehat{p}\) (based on \(n\) contacted educators) and that the strategy costs \(\$A\) per educator contacted in addition to \(\$B\) per educator recruited. The estimated cost (in \$) per recruit is then given by \(\widehat{\mu} = A/\widehat{p} + B\), and the standard error of the estimated cost is given by \(s = A\sqrt{(1 - \widehat{p})/(n{\widehat{p}}^{3})}\). To compare the costs of two strategies, we derive p-values using the fact that the statistic \(z = ({\widehat{\mu}}_{1} - {\widehat{\mu}}_{2})/\sqrt{s_{1}^{2} + s_{2}^{2}}\) has approximately a standard normal distribution under the null hypothesis that the two strategies incur the same cost per recruit.
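As a concrete check on these formulas, the Python sketch below (our illustration, not the authors' code) recomputes the Strategy 1 and Strategy 4 cost-per-recruit estimates, their delta-method standard errors, and the comparison p-value from the per-contact and per-enrollee costs in Table 2 and the response rates in Table 3:

```python
# Sketch (ours, not the authors' code) of the cost-per-recruit estimate, its
# delta-method standard error, and the z-test described in the notes to Table 3.
import math

def cost_per_recruit(A, B, p_hat, n):
    """Estimated cost per recruit (A/p_hat + B) and its delta-method standard error.

    A: cost per contacted teacher; B: cost per enrolled teacher;
    p_hat: observed recruitment rate; n: number of teachers contacted.
    """
    mu_hat = A / p_hat + B
    se = A * math.sqrt((1 - p_hat) / (n * p_hat ** 3))
    return mu_hat, se

def cost_comparison_p(mu1, s1, mu2, s2):
    """Two-sided normal-approximation p-value for equal cost per recruit."""
    z = (mu1 - mu2) / math.sqrt(s1 ** 2 + s2 ** 2)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Strategy 1: $18.38 per contact (fixed + mailing + follow-up + $10 preincentive), $0 per enrollee.
mu1, s1 = cost_per_recruit(A=3.53 + 4.50 + 0.35 + 10.00, B=0.00, p_hat=0.275, n=1213)
# Strategy 4: $8.38 per contact plus $42.09 per enrollee ($40 gift card + $2.09 incentive mailing).
mu4, s4 = cost_per_recruit(A=3.53 + 4.50 + 0.35, B=40.00 + 2.09, p_hat=0.160, n=250)

print(mu1, s1)                              # roughly 66.8 and 3.1 (Table 3 reports $66.95 and $3.13)
print(mu4, s4)                              # roughly 94.5 and 7.6 (Table 3 reports $94.47 and $7.59)
print(cost_comparison_p(mu1, s1, mu4, s4))  # roughly 0.0008, in line with the reported 0.001
```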

Are preincentives more effective than substantially larger promised incentives when recruiting teachers into survey panels?

By comparing recruitment Strategy 1 ($10 Target gift card as preincentive) to Strategies 4 ($40 Target promised) and 5 ($60 Target promised), we see that preincentives clearly outperform promised incentives of significantly larger amounts in terms of both response rates and cost effectiveness. Specifically, Strategy 1 had a 27.5% response rate, whereas Strategies 4 and 5 had 16.0% and 19.2% response rates (p-values for tests of comparison: 0.000 and 0.007), respectively. Likewise, Strategy 1 is noticeably more cost effective than Strategies 4 and 5 (despite the first strategy involving “wasting” of gift cards that are sent to those who did not enroll): Strategy 1 costs $66.95 per enrollee, whereas Strategies 4 and 5 cost $94.47 and $105.74 per enrollee (with p-values of 0.001 and 0.000), respectively.
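The response-rate p-values in Table 3 appear consistent with a pooled two-proportion z-test; under that assumption, the short sketch below (our illustration, not the authors' code) approximately reproduces the comparisons just cited:

```python
# Sketch (ours) of a pooled two-proportion z-test, which approximately reproduces
# the response-rate p-values reported in Table 3.
import math

def two_proportion_p(p1, n1, p2, n2):
    """Two-sided p-value from the pooled two-proportion z-test (normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(two_proportion_p(0.275, 1213, 0.160, 250))  # Strategy 1 vs. 4: ~0.0001 (reported as 0.000)
print(two_proportion_p(0.275, 1213, 0.192, 250))  # Strategy 1 vs. 5: ~0.007 (reported as 0.007)
```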

What mode of mailing (e.g., FedEx, USPS) is most effective for recruiting teachers into survey panels?

A comparison of Strategy 1 (FedEx) to Strategy 2 (USPS) indicates that using FedEx as a mode of contact outperforms USPS. That is, the FedEx strategy yields a notably higher response rate (27.5% vs. 18.0%; p-value = 0.002) and is estimated as being more cost effective, although the difference is not statistically significant ($66.95 vs. $84.44 per enrollee; p-value = 0.14), despite incurring a higher cost per contacted teacher.

What format of incentive (e.g., cash, check, electronic) is most effective for both pre- and promised incentives when recruiting teachers into survey panels?

We explore other, perhaps more nuanced, findings regarding the format of incentive. First, when comparing Strategies 2 (USPS + $10 gift card preincentive) and 3 (USPS + $10 cash preincentive), we do not see statistically significant evidence that the use of cash outperforms gift cards as a preincentive. However, the cash-based strategy is estimated as yielding a moderately higher response rate (23.2% vs. 18.0%; p-value = 0.151) and as being more cost effective ($65.52 vs. $84.44 per enrollee; p-value = 0.166).

The electronic gift card (Strategy 7) appears to perform the best among the strategies considered here that involved a gift card as a promised incentive. For instance, comparing its response rate and cost effectiveness to that of Strategy 4 ($40 Target gift card), we get p-values of 0.087 and 0.064, respectively, indicating statistical significance at the 10% level but not the 5% level.

Can pre- and promised incentives be effectively used in tandem when recruiting teachers into survey panels?

We see that the combination strategy (Strategy 8: $2 cash preincentive + $40 Target promised incentive) does not appear to outperform the analogous $10 USPS Target gift card preincentive strategy (Strategy 2); response rates are 19.6% vs. 18.0% (p-value = 0.647) and cost effectiveness is $78.82 vs. $84.44 (p-value = 0.459).

Finally, when comparing Strategies 1 and 10 (email-less), it appears that replacing six email follow-ups with two mail follow-ups did not notably hinder responsiveness (27.5% vs. 24.0%, p-value = 0.262) but did increase cost per enrollee ($66.95 vs. $101.88, p-value = 0.003).

Nonresponse Bias

Since the high rates of nonresponse observed among the various phases of recruitment have the potential to jeopardize the generalizability of findings from surveys that use the teacher panel, we are interested in studying nonresponse bias. The meta-analysis of R. M. Groves and Peytcheva (2008) concludes that nonresponse rates are a poor predictor of nonresponse bias, so the low response rates that we observe are not in themselves indicative of substantial bias. Nonetheless, we present diagnostics here that evaluate the potential for bias that stems from nonresponse at the recruitment phase. Specifically, we compare observable demographic characteristics of panel members to corresponding characteristics of nonrespondents (where nonrespondents include any contacted teacher who does not enroll in the panel). These analyses are repeated for all recruitment strategies considered. We examine two individual-level characteristics: subject taught and gender. The remaining characteristics are descriptors of the teacher’s school. All characteristics are categorical, with each underpinned by at least two categories. In all, there are 27 categories underpinning the 8 characteristics.[10]

For each of the 27 categories (and each of the 10 strategies), we compare the proportion of teachers who enrolled in the panel that fall into the respective category to the corresponding proportion among those who declined to enroll. For the purposes of standardization, the comparison is made using Cohen’s \(h\) (Cohen 1988), where \(h = 2\sin^{- 1}\sqrt{p_{2}} - 2\sin^{- 1}\sqrt{p_{1}}\) for two proportions \(p_{1}\) and \(p_{2}\). If, for a given strategy, 50% of teachers who enroll are female whereas 60% of those who decline to enroll are female, we would observe \(h = 0.2\) (which is commonly considered a small difference). We compare categorical frequencies for responders and nonresponders (instead of examining response rates within the various domains), as doing so allows comparisons across strategies that observe differing rates of response. Figure 1 shows box plots of the resulting 27 values of Cohen’s \(h\) for each strategy.
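For reference, a minimal sketch (ours) of the Cohen's \(h\) calculation used here:

```python
# Cohen's h: standardized difference between two proportions via the arcsine transform.
import math

def cohens_h(p1, p2):
    """Cohen's h = 2*arcsin(sqrt(p2)) - 2*arcsin(sqrt(p1))."""
    return 2 * math.asin(math.sqrt(p2)) - 2 * math.asin(math.sqrt(p1))

# Example from the text: 50% of enrollees vs. 60% of decliners are female.
print(cohens_h(0.5, 0.6))  # ~0.20, conventionally regarded as a small difference
```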

Figure 1. Box plots of standardized (using Cohen’s h) differences between empaneled teachers and nonresponding teachers for categorical frequencies across several individual- and school-level characteristics.

To quantify the statistical significance of discrepancies observed, we report (for each strategy and each characteristic) a p-value of an omnibus test that assesses (jointly across all categories of a variable) the presence of differences in the categorical frequencies of panel members vs. nonresponders. These comparisons are performed using Fisher’s exact test; the results are shown in Table 4. However, some nonresponse bias may be unavoidable—perhaps the more relevant issue is whether or not the bias is affected by the strategy implemented. Hence, for each strategy used to recruit teachers, we report a p-value of an analogous test that compares enrolled panel members sampled using each strategy to enrolled panelists sampled using Strategy 1.
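As a rough sketch of such an omnibus test (ours, not the authors' code; the paper uses Fisher's exact test, and common Python tooling implements the exact test only for 2×2 tables), one could approximate the exact test with a Monte Carlo permutation test on the chi-square statistic, holding the table margins fixed:

```python
# Approximate omnibus test of association between a categorical characteristic and
# enrollment status, via a Monte Carlo permutation test on the chi-square statistic.
# (The paper reports Fisher's exact test; this is a stand-in sketch, not the authors' code.)
import numpy as np

def chi2_stat(table):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    table = table.astype(float)
    expected = table.sum(axis=1, keepdims=True) * table.sum(axis=0, keepdims=True) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def omnibus_p(categories, enrolled, n_perm=10_000, seed=0):
    """Permutation p-value for differences in categorical frequencies between
    enrollees (enrolled == 1) and nonresponders (enrolled == 0).

    `categories` is integer-coded (0..k-1); every category is assumed to occur at least once.
    """
    rng = np.random.default_rng(seed)
    categories = np.asarray(categories)
    enrolled = np.asarray(enrolled)
    k = categories.max() + 1

    def stat(flags):
        table = np.array([np.bincount(categories[flags == g], minlength=k) for g in (0, 1)])
        return chi2_stat(table)

    observed = stat(enrolled)
    perms = np.array([stat(rng.permutation(enrolled)) for _ in range(n_perm)])
    return float((perms >= observed).mean())
```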

Figure 1 shows that most of the values of Cohen’s \(h\) are small (77% of all the computed values observe \(|h| \leq 0.2\)), although outlying values are present.[11] However, for the other strategies, the vast majority of differences observed between enrollees and nonresponders are not statistically significant, although tests for these strategies may be underpowered.[12] In addition, from Figure 1 and Table 4, there is no compelling evidence that the degree of nonresponse bias is affected by the strategy implemented.

Table 4. p-values from omnibus tests to see if there are differences between responders and nonresponders (Test 1) or if the differences vary by strategy used (Test 2).

Test 1: p-values for differences between responders and nonresponders.

| Strategy | Subject | Gender | Region | % Free/Red. price lunch | Size | Urbanicity | Level | % Minority students |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.735 | 0.023 | 0.006 | 0.067 | 0.591 | 0.238 | 0.153 | 0.843 |
| 2 | 0.379 | 0.616 | 0.479 | 0.782 | 0.400 | 0.751 | 0.120 | 0.389 |
| 3 | 0.613 | 0.092 | 0.256 | 0.710 | 0.768 | 0.125 | 0.787 | 0.065 |
| 4 | 0.577 | 0.035 | 0.435 | 0.316 | 0.003 | 0.443 | 0.146 | 0.823 |
| 5 | 0.969 | 0.263 | 0.885 | 0.098 | 0.393 | 0.942 | 0.435 | 0.353 |
| 6 | 0.178 | 0.056 | 0.854 | 0.338 | 0.308 | 0.007 | 0.538 | 0.389 |
| 7 | 0.977 | 1.000 | 0.828 | 0.811 | 0.188 | 0.739 | 0.995 | 0.147 |
| 8 | 0.768 | 0.723 | 0.994 | 0.958 | 0.123 | 0.715 | 0.197 | 0.157 |
| 9 | 0.189 | 0.806 | 0.096 | 0.260 | 0.548 | 0.633 | 0.360 | 0.580 |
| 10 | 0.431 | 0.104 | 0.052 | 0.237 | 0.030 | 0.692 | 0.171 | 0.539 |

Test 2: p-values for differences between responders from Strategy 1 and responders from the respective strategy.

| Strategy | Subject | Gender | Region | % Free/Red. price lunch | Size | Urbanicity | Level | % Minority students |
|---|---|---|---|---|---|---|---|---|
| 2 | 0.350 | 0.518 | 0.149 | 0.381 | 0.299 | 0.577 | 0.431 | 0.898 |
| 3 | 0.564 | 0.967 | 0.781 | 0.339 | 0.842 | 0.161 | 0.867 | 0.170 |
| 4 | 0.243 | 0.539 | 0.631 | 0.134 | 0.006 | 0.542 | 0.433 | 0.726 |
| 5 | 0.970 | 1.000 | 0.857 | 0.447 | 0.401 | 0.863 | 0.596 | 0.265 |
| 6 | 0.725 | 0.728 | 0.636 | 0.924 | 0.553 | 0.280 | 0.731 | 0.678 |
| 7 | 0.520 | 0.459 | 0.787 | 0.230 | 0.826 | 0.827 | 0.305 | 0.077 |
| 8 | 0.833 | 0.141 | 0.671 | 0.256 | 0.334 | 0.817 | 0.243 | 0.610 |
| 9 | 0.266 | 0.474 | 0.507 | 0.257 | 0.665 | 0.391 | 0.185 | 0.435 |
| 10 | 0.431 | 1.000 | 0.768 | 0.271 | 0.050 | 0.517 | 0.265 | 0.334 |

Conclusions

Our study enhances the literature on surveying educators, and on recruitment into survey panels in general, through several findings from the experiment considered here. We illustrate that teachers respond at much higher rates when FedEx (in lieu of USPS) is used for mailing of recruitment materials; in fact, the increased response rate is more than enough to offset the higher costs. We hypothesize that the improvement is explained by a FedEx package appearing more official and being more likely to catch the attention of a recipient. Further, we establish that a moderate preincentive ($10) is more successful at achieving high response rates than promised incentives of a much larger amount (up to $60). In fact, the use of the promised incentive is shown to be less cost effective despite up to three quarters of the recipients of the preincentive failing to enroll.


  1. https://www.rand.org/education-and-labor/projects/aep.html

  2. Teacher recruitment for the MLI-related expansion was designed to take place over 16 waves during the 2016–2017 school year, wherein approximately 4,000 teachers would be contacted in each wave. The expansion effort (designed to develop state-level subpanels in 22 states plus New York City) resulted in the enrollment of 19,500 teachers. Approximately 63,000 teachers were contacted across the 16 recruitment waves. Further details regarding this expansion effort can be found in Robbins and Grant (2020).

  3. To briefly summarize the findings in Robbins et al. (2018), five strategies were evaluated: (1) a $10 contingent gift card (10.5% response rate at a cost of $69.93/enrollee); (2) a $10 noncontingent gift card (21.2%, $77.92); (3) a $20 noncontingent gift card (22.8%, $116.32); (4) a $20 contingent electronic gift card (1.2%, $1,626.24); and (5) a $10 contingent gift card with phone follow-up (15.6%, $191.54). All strategies (except for the fourth) involved FedEx mailing, and all strategies involved email follow-up with nonresponders. In consideration of both response rate and cost effectiveness, the strategy involving the $10 noncontingent gift card was deemed preferable. The experimentation was performed between December 2014 and February 2015.

  4. Each of the ten strategies involves the following. The targeted teacher is sent a recruitment package via FedEx or USPS. The package contains a RAND recruitment letter that invites the teacher to join the panel, as well as a brochure that describes the panel. Further, endorsement letters from educator unions (the National Education Association and the American Federation of Teachers) are included in the package. In limited cases (OK and NC), the package includes an endorsement letter from state education departments. Incentives are also included in the package, although this varies by strategy. Contacted teachers enroll in the panel by completing a brief 5-minute enrollment form online or returning their enrollment form (3 pages) via mail. (Approximately 87% of the forms returned as part of this experiment were submitted online.) Each recruitment package included a hardcopy teacher enrollment form and a RAND business reply envelope so that teachers were simultaneously given the option of enrolling in the ATP via mail or Internet. We also sent all teacher recruits an email invitation to enroll in the ATP. Unless otherwise noted, nonresponding teachers are sent weekly reminders by email for six weeks following the mailing of the initial recruitment package to encourage them to enroll in the ATP. The recruitment materials for all experimental groups were mailed on October 20, 2016. FedEx shipments were delivered the next day. Teachers who enroll in the ATP are then contacted via email and asked to participate in future online panel surveys (up to 4 per year) at later dates. Contacted teachers were also notified that they would be given gift cards for taking surveys administered as part of the ATP. (The amount of these gift cards depends upon the survey length; $25 is common.) Note that the same cover letter was included in the recruitment package for all recruitment strategies; however, one paragraph in this letter was modified as needed to describe the incentive used (when an incentive is used).

  5. Although earlier studies have shown that preincentives outperform promised incentives of similar amounts (e.g., Robbins et al. 2018), our goal here is to compare preincentives to notably larger promised incentives.

  6. Contact information for teachers targeted for recruitment was purchased from a vendor. The information includes name, email, and school address and phone number.

  7. These cost sources included purchasing the sampling list (i.e., the list of teachers purchased from a vendor), mailing recruitment packages (including the cost of shipping materials via FedEx or USPS, the cost of printing recruitment materials (brochure, enrollment forms, return envelopes, ATP report), the labor required to assemble the recruitment packages), and costs of pre- and promised incentives.

  8. Costs excluded in this analysis include the fixed cost required to encode the demographic data collection in a Web portal, and costs for researcher time (e.g., time spent designing the survey instrument and recruitment tactics, time spent compiling and analyzing findings, etc.)—these costs can be harder to quantify and are mostly independent of the specific recruitment strategy employed. (That is, these costs do not influence comparative cost-effectiveness of the various strategies.)

  9. Standard errors for costs per recruited panelist are approximated algebraically using the delta method; see the footnotes to Table 3 for details.

  10. The eight characteristics (with the categories that underpin each of them in parentheses) are Subject (ELA/social studies, general elementary, math/science, other); Gender (male, female); Region (Midwest, Northeast, South, West); Percent Free/Reduced Price Lunch (0%–25%, 25%–75%, 75%–100%); Size (small, medium, large); Urbanicity (city, suburb, town, rural); Level (elementary, middle, high, other); and Percent Minority Students (0%–25%, 25%–75%, 75%–100%). Subject and gender are measured at the teacher level; other characteristics are at the school level.

  11. Strategy 4 ($40 Target promised) observes a couple of outlying values; e.g., for this strategy, 20.0% of enrollees are at medium-sized schools vs. 47.6% of nonresponders (h = 0.60), and 10.0% of enrollees are male vs. 27.1% of nonresponders (h = 0.45). Table 4 indicates that these differences may be statistically significant. For Strategy 1 ($10 Target gift card as preincentive with FedEx mailing), which had the largest sample size, most of the values of Cohen’s h are small. However, we see some evidence that nonresponders differ from enrollees across the characteristics considered for this strategy. Specifically, enrollees in Strategy 1 are 15% male, whereas nonresponders are 21% male. Likewise, 4.5% of enrollees are from the Northeast and 12.1% are from schools with 0–25% free and reduced-price lunch, whereas the corresponding values are 10.7% and 18.1% for nonresponders. These differences show statistical significance at the 5% level.

  12. Naturally, these differences do not appear as statistically significant when multiple testing adjustments, such as those of Benjamini and Hochberg (1995), are applied. (Details are omitted for brevity.)

Submitted: June 28, 2020 EDT

Accepted: October 13, 2020 EDT

References

Adua, L., and J.S. Sharp. 2010. “Examining Survey Participation and Response Quality: The Significance of Topic Salience and Incentives.” Survey Methodology 36: 95–109.
Benjamini, Y., and Y. Hochberg. 1995. “Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing.” Journal of the Royal Statistical Society: Series B 57: 289–300.
Bethlehem, J., and S. Biffignandi. 2011. Handbook of Web Surveys. Vol. 567. John Wiley & Sons.
Biemer, P.P., J. Murphy, S. Zimmer, C. Berry, G. Deng, and K. Lewis. 2017. “Using Bonus Monetary Incentives to Encourage Web Response in Mixed-Mode Household Surveys.” Journal of Survey Statistics and Methodology 6: 240–61.
Birnholtz, Jeremy P., Daniel B. Horn, Thomas A. Finholt, and Sung Joo Bae. 2004. “The Effects of Cash, Electronic, and Paper Gift Certificates as Respondent Incentives for a Web-Based Survey of Technologically Sophisticated Respondents.” Social Science Computer Review 22 (3): 355–62. https://doi.org/10.1177/0894439304263147.
Blom, A.G., C. Gathmann, and U. Krieger. 2015. “Setting up an Online Panel Representative of the General Population: The German Internet Panel.” Field Methods 27: 391–408.
Brown, Julie A., Carl A. Serrato, Mildred Hugh, Michael H. Kanter, Karen L. Spritzer, and Ron D. Hays. 2016. “Effect of a Post-Paid Incentive on Response Rates to a Web-Based Survey.” Survey Practice 9 (1): 1–7. https://doi.org/10.29115/sp-2016-0001.
Callegaro, Mario, Reg Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick, and Paul J. Lavrakas, eds. 2014. Online Panel Research: A Data Quality Perspective. West Sussex, UK: John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118763520.
Church, Allan H. 1993. “Estimating the Effect of Incentives on Mail Survey Response Rates: A Meta-Analysis.” Public Opinion Quarterly 57 (1): 62. https://doi.org/10.1086/269355.
Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ, USA: Lawrence Erlbaum Associates.
Coopersmith, Jared, Lisa Klein Vogel, Timothy Bruursema, and Kathleen Feeney. 2016. “Effects of Incentive Amount and Type of Web Survey Response Rates.” Survey Practice 9 (1): 2822. https://doi.org/10.29115/sp-2016-0002.
Cornesse, Carina, Annelies G Blom, David Dutwin, Jon A Krosnick, Edith D De Leeuw, Stéphane Legleye, Josh Pasek, et al. 2020. “A Review of Conceptual Approaches and Empirical Evidence on Probability and Nonprobability Sample Survey Research.” Journal of Survey Statistics and Methodology 8 (1): 4–36. https://doi.org/10.1093/jssam/smz041.
Cosgrove, John A. 2018. “Using a Small Cash Incentive to Increase Survey Response.” Administration and Policy in Mental Health and Mental Health Services Research 45 (5): 1–7. https://doi.org/10.1007/s10488-018-0866-x.
DeCamp, Whitney, and Matthew J. Manierre. 2016. “‘Money Will Solve the Problem’: Testing the Effectiveness of Conditional Incentives for Online Surveys.” Survey Practice 9 (1): 2823. https://doi.org/10.29115/sp-2016-0003.
Dillman, Don A., Glenn Phelps, Robert Tortora, Karen Swift, Julie Kohrell, Jodi Berck, and Benjamin L. Messer. 2009. “Response Rate and Measurement Differences in Mixed-Mode Surveys Using Mail, Telephone, Interactive Voice Response (IVR) and the Internet.” Social Science Research 38 (1): 1–18. https://doi.org/10.1016/j.ssresearch.2008.03.007.
Dillman, Don A., Michael D. Sinclair, and Jon R. Clark. 1993. “Effects of Questionnaire Length, Respondent-Friendly Design, and a Difficult Question on Response Rates for Occupant-Addressed Census Mail Surveys.” Public Opinion Quarterly 57 (3): 289. https://doi.org/10.1086/269376.
Dillman, Don A., Eleanor Singer, Jon R. Clark, and James B. Treat. 1996. “Effects of Benefits Appeals, Mandatory Appeals, and Variations in Statements of Confidentiality on Completion Rates for Census Questionnaires.” Public Opinion Quarterly 60 (3): 376. https://doi.org/10.1086/297759.
Doody, M. M., A. S. Sigurdson, D. Kampa, K. Chimes, B. H. Alexander, E. Ron, R. E. Tarone, and M. S. Linet. 2003. “Randomized Trial of Financial Incentives and Delivery Methods for Improving Response to a Mailed Questionnaire.” American Journal of Epidemiology 157 (7): 643–51. https://doi.org/10.1093/aje/kwg033.
Dykema, Jennifer, Nadia Assad, Griselle Sanchez-Diettert, Kelly Elver, and John Stevenson. 2019. “What’s in a Name? Effects of Alternative Forms of Addressing Households on Response Rates and Data Quality in an Address-Based Mail Survey.” Field Methods 31 (1): 39–55. https://doi.org/10.1177/1525822x18812761.
Dykema, Jennifer, Karen Jaques, Kristen Cyffka, Nadia Assad, Rae Ganci Hammers, Kelly Elver, Kristen C. Malecki, and John Stevenson. 2015. “Effects of Sequential Prepaid Incentives and Envelope Messaging in Mail Surveys.” Public Opinion Quarterly 79 (4): 906–31. https://doi.org/10.1093/poq/nfv041.
Dykema, Jennifer, John Stevenson, Brendan Day, Sherrill L. Sellers, and Vence L. Bonham. 2011. “Effects of Incentives and Prenotification on Response Rates and Costs in a National Web Survey of Physicians.” Evaluation & the Health Professions 34 (4): 434–47. https://doi.org/10.1177/0163278711406113.
Dykema, Jennifer, John Stevenson, Lisa Klein, Yujin Kim, and Brendan Day. 2013. “Effects of E-Mailed Versus Mailed Invitations and Incentives on Response Rates, Data Quality, and Costs in a Web Survey of University Faculty.” Social Science Computer Review 31 (3): 359–70. https://doi.org/10.1177/0894439312465254.
Edwards, P., I. Roberts, M. Clarke, C. DiGuiseppi, S. Pratap, R. Wentz, and I. Kwan. 2002. “Increasing Response Rates to Postal Questionnaires: Systematic Review.” BMJ 324 (7347): 1183–91. https://doi.org/10.1136/bmj.324.7347.1183.
Felderer, Barbara, Gerrit Müller, Frauke Kreuter, and Joachim Winter. 2018. “The Effect of Differential Incentives on Attrition Bias: Evidence from the PASS Wave 3 Incentive Experiment.” Field Methods 30 (1): 56–69. https://doi.org/10.1177/1525822x17726206.
Fraze, Steve D., Kelly K. Hardin, M. Todd Brashears, Jacqui L. Haygood, and James H. Smith. 2003. “The Effects of Delivery Mode upon Survey Response Rate and Perceived Attitudes of Texas Agri-Science Teachers.” Journal of Agricultural Education 44 (2): 27–37. https://doi.org/10.5032/jae.2003.02027.
Godwin, R. Kenneth. 1979. “The Consequences of Large Monetary Incentives in Mail Surveys of Elites.” Public Opinion Quarterly 43 (3): 378–87. https://doi.org/10.1086/268528.
Göritz, A.S. 2006. “Incentives in Web Studies: Methodological Issues and a Review.” International Journal of Internet Science 1: 58–70.
Gritz, Anja S. 2004. “The Impact of Material Incentives on Response Quantity, Response Quality, Sample Composition, Survey Outcome and Cost in Online Access Panels.” International Journal of Market Research 46 (3): 327–45. https://doi.org/10.1177/147078530404600307.
Groves, R. M., and E. Peytcheva. 2008. “The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis.” Public Opinion Quarterly 72 (2): 167–89. https://doi.org/10.1093/poq/nfn011.
Groves, Robert M., Mick P. Couper, Stanley Presser, Eleanor Singer, Roger Tourangeau, Giorgina Piani Acosta, and Lindsay Nelson. 2006. “Experiments in Producing Nonresponse Bias.” Public Opinion Quarterly 70 (5): 720–36. https://doi.org/10.1093/poq/nfl036.
Hedlin, Dan. 2020. “Is There a ‘safe Area’ Where the Nonresponse Rate Has Only a Modest Effect on Bias despite Non‐ignorable Nonresponse?” International Statistical Review, January. https://doi.org/10.1111/insr.12359.
Hsu, Joanne W., Maximilian D. Schmeiser, Catherine Haggerty, and Shannon Nelson. 2017. “The Effect of Large Monetary Incentives on Survey Completion: Evidence from a Randomized Experiment with the Survey of Consumer Finances.” Public Opinion Quarterly 81 (3): 736–47. https://doi.org/10.1093/poq/nfx006.
Jäckle, A.E., and P. Lynn. 2008. “Respondent Incentives in a Multi-Mode Panel Survey: Cumulative Effects on Nonresponse and Bias.” Survey Methodology 34: 105–17.
Jacob, Robin Tepper, and Brian Jacob. 2012. “Prenotification, Incentives, and Survey Modality: An Experimental Test of Methods to Increase Survey Response Rates of School Principals.” Journal of Research on Educational Effectiveness 5 (4): 401–18. https://doi.org/10.1080/19345747.2012.698375.
James, Jeannine M., and Richard Bolstein. 1990. “The Effect of Monetary Incentives and Follow-Up Mailings on the Response Rate and Response Quality in Mail Surveys.” Public Opinion Quarterly 54 (3): 346. https://doi.org/10.1086/269211.
———. 1992. “Large Monetary Incentives and Their Effect on Mail Survey Response Rates.” Public Opinion Quarterly 56 (4): 442. https://doi.org/10.1086/269336.
Jobber, David, John Saunders, and Vince-Wayne Mitchell. 2004. “Prepaid Monetary Incentive Effects on Mail Survey Response.” Journal of Business Research 57 (1): 21–25. https://doi.org/10.1016/s0148-2963(02)00280-1.
Kaplowitz, M. D., T. D. Hadlock, and R. Levine. 2004. “A Comparison of Web and Mail Survey Response Rates.” Public Opinion Quarterly 68 (1): 94–101. https://doi.org/10.1093/poq/nfh006.
Kasprzyk, Danuta, Daniel E. Montaño, Janet S. St. Lawrence, and William R. Phillips. 2001. “The Effects of Variations in Mode of Delivery and Monetary Incentive on Physicians’ Responses to a Mailed Survey Assessing STD Practice Patterns.” Evaluation & the Health Professions 24 (1): 3–17. https://doi.org/10.1177/01632780122034740.
Leslie, Larry L. 1972. “Are High Response Rates Essential to Valid Surveys?” Social Science Research 1 (3): 323–34. https://doi.org/10.1016/0049-089x(72)90080-4.
Mann, C. B. 2005. “Do Advance Letters Improve Pre-Election Forecast Accuracy?” Public Opinion Quarterly 69 (4): 561–71. https://doi.org/10.1093/poq/nfi051.
Martin, Elisabeth, Denise Abreu, and Franklin Winters. 2001. “Money and Motive: Effects of Incentives on Panel Attrition in the Survey of Income and Program Participation.” Journal of Official Statistics 17: 267–84.
Mercer, Andrew, Andrew Caporaso, David Cantor, and Reanne Townsend. 2015. “How Much Gets You How Much? Monetary Incentives and Response Rates in Household Surveys.” Public Opinion Quarterly 79 (1): 105–29. https://doi.org/10.1093/poq/nfu059.
Mertler, Craig A. 2002. “Patterns of Response and Nonresponse from Teachers to Traditional and Web Surveys.” Practical Assessment, Research and Evaluation 8. https://doi.org/10.7275/2KDF-G675.
Mizes, J. Scott, E. Louis Fleece, and Cindy Roos. 1984. “Incentives for Increasing Return Rates: Magnitude Levels, Response Bias, and Format.” Public Opinion Quarterly 48 (4): 794–800. https://doi.org/10.1086/268885.
Newby, Rick, John Watson, and David Woodliff. 2003. “SME Survey Methodology: Response Rates, Data Quality, and Cost Effectiveness.” Entrepreneurship Theory and Practice 28 (2): 163–72. https://doi.org/10.1046/j.1540-6520.2003.00037.x.
Parsons, Nicholas L., and Matthew J. Manierre. 2014. “Investigating the Relationship among Prepaid Token Incentives, Response Rates, and Nonresponse Bias in a Web Survey.” Field Methods 26 (2): 191–204. https://doi.org/10.1177/1525822x13500120.
Petrolia, Daniel R., and Sanjoy Bhattacharjee. 2009. “Revisiting Incentive Effects: Evidence from a Random-Sample Mail Survey on Consumer Preferences for Fuel Ethanol.” Public Opinion Quarterly 73 (3): 537–50. https://doi.org/10.1093/poq/nfp038.
Porter, Stephen R., and Michael E. Whitcomb. 2003. “The Impact of Contact Type on Web Survey Response Rates.” Public Opinion Quarterly 67 (4): 579–88. https://doi.org/10.1086/378964.
Rada, Vidal D. de. 2005. “The Effect of Follow-Up Mailings on the Response Rate and Response Quality in Mail Surveys.” Quality & Quantity 39 (1): 1–18. https://doi.org/10.1007/s11135-004-5950-5.
Robbins, Michael W., and D. Grant. 2020. RAND American Educator Panels (AEP) Technical Description. Santa Monica, CA: RAND Corporation.
Robbins, Michael W., Geoffrey Grimm, Brian Stecher, and V. Darleen Opfer. 2018. “A Comparison of Strategies for Recruiting Teachers into Survey Panels.” SAGE Open 8 (3): 215824401879641. https://doi.org/10.1177/2158244018796412.
Roberts, Robert E., Owen F. McCrory, and Ronald N. Forthofer. 1978. “Further Evidence on Using a Deadline to Stimulate Responses to a Mail Survey.” Public Opinion Quarterly 42 (3): 407. https://doi.org/10.1086/268464.
Schaefer, David R., and Don A. Dillman. 1998. “Development of a Standard E-Mail Methodology: Results of an Experiment.” Public Opinion Quarterly 62 (3): 378. https://doi.org/10.1086/297851.
Scherpenzeel, Annette, and Vera Toepoel. 2012. “Recruiting a Probability Sample for an Online Panel: Effects of Contact Mode, Incentives, and Information.” Public Opinion Quarterly 76 (3): 470–90. https://doi.org/10.1093/poq/nfs037.
Singer, Eleanor. 2002. “The Use of Incentives to Reduce Nonresponse in Household Surveys.” In Survey Nonresponse, edited by R. M. Groves, D. A. Dillman, J. L. Eltinge, and R. J. A. Little, 163–77. New York: Wiley.
Singer, Eleanor, and Cong Ye. 2013. “The Use and Effects of Incentives in Surveys.” The Annals of the American Academy of Political and Social Science 645 (1): 112–41. https://doi.org/10.1177/0002716212458082.
Stanley, Marshica, Jessica Roycroft, Ashley Amaya, Jill A. Dever, and Anup Srivastav. 2020. “The Effectiveness of Incentives on Completion Rates, Data Quality, and Nonresponse Bias in a Probability-Based Internet Panel Survey.” Field Methods 32 (2): 159–79. https://doi.org/10.1177/1525822x20901802.
Toepoel, Vera. 2017. “Online Survey Design.” In The SAGE Handbook of Online Research Methods, 184–202. Los Angeles, CA: SAGE Publications Ltd. https://doi.org/10.4135/9781473957992.n11.
Trussell, N., and P. J. Lavrakas. 2004. “The Influence of Incremental Increases in Token Cash Incentives on Mail Survey Response: Is There an Optimal Amount?” Public Opinion Quarterly 68 (3): 349–67. https://doi.org/10.1093/poq/nfh022.
Warriner, Keith, John Goyder, Heidi Gjertsen, Paula Hohner, and Kathleen McSpurren. 1996. “Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment.” Public Opinion Quarterly 60 (4): 542–62. https://doi.org/10.1086/297772.
Yan, Alan, Joshua Kalla, and David E. Broockman. 2018. “Increasing Response Rates and Representativeness of Online Panels Recruited by Mail: Evidence from Experiments in 12 Original Surveys.” Stanford University Graduate School of Business Research Paper No. 18-12. https://doi.org/10.2139/ssrn.3136245.
Yu, Shengchao, Howard E. Alper, Angela-Maithy Nguyen, Robert M. Brackbill, Lennon Turner, Deborah J. Walker, Carey B. Maslow, and Kimberly C. Zweig. 2017. “The Effectiveness of a Monetary Incentive Offer on Survey Response Rates and Response Completeness in a Longitudinal Study.” BMC Medical Research Methodology 17 (1). https://doi.org/10.1186/s12874-017-0353-1.
