October 27, 2022 EDT

The Impact of Varying Incentives on Physician Survey Response Rates: An Experiment in the Context of COVID-19

William J. Young, Michelle T. Bover Manderski, Binu Singh, and Cristine D. Delnevo
Keywords: survey methodology, response rates, physician surveys, incentives
https://doi.org/10.29115/SP-2022-0012
Survey Practice
Young, William J., Michelle T. Bover Manderski, Binu Singh, and Cristine D. Delnevo. 2022. “The Impact of Varying Incentives on Physician Survey Response Rates: An Experiment in the Context of COVID-19.” Survey Practice, October. https://doi.org/10.29115/SP-2022-0012.

Abstract

Since 2018, our research team has fielded national probability surveys of internal medicine physicians. We expected our usually high response rates to fall in the 2021 iteration of the survey due to challenges related to the COVID-19 pandemic and our inability to offer all participants a $50 upfront incentive as we had previously. To understand the independent impacts of the lower incentive and of surveying physicians in the context of the pandemic, we fielded an experiment varying the incentive amount physicians were offered. Our results suggest that while higher incentives still lead to higher response rates during COVID-19, the higher incentive did not restore response rates to pre-COVID levels. We conclude with additional data on the impact of the incentives on survey cost, the number of reminders needed, and the mode in which respondents chose to complete the survey.

Introduction

The Rutgers Center for Tobacco Studies has surveyed internal medicine physicians (IMPs) since 2018 on various issues related to tobacco use (Delnevo and Singh 2021). Despite often-cited difficulties in obtaining responses from physicians (Cho, Johnson, and VanGeest 2013; Thorpe et al. 2009; VanGeest, Johnson, and Welch 2007), our first two surveys achieved overall response rates of 62.6% (2018) and 59.3% (2019), due in part to the offer of a $50 Starbucks gift card as an upfront incentive.

Wave 3 presented several challenges. First, based on the results of an incentive experiment in Wave 1, in Wave 2 we offered all respondents a $50 Starbucks gift card. However, limited resources prevented us from offering all respondents the $50 incentive in Wave 3. Second, this wave was the first taking place during COVID-19, when physicians confronted a new, quickly evolving public health crisis. Many began seeing patients remotely due to the closure of outpatient practices, and we were unsure if our sample would receive their invitations to participate. In this context, the salience of tobacco issues was likely dwarfed by other concerns. For us, COVID precautions meant our mail (including returned survey invitations) was not delivered directly to us but rather to a centralized university mailroom where we had to retrieve it. All of this led us to expect lower response rates. We wanted to know, however, the independent impacts of the lower incentive and the COVID-19 context. We therefore fielded an experiment to find out.

An extensive literature on the use of incentives in surveys demonstrates that offering incentives, especially upfront, unconditional incentives, boosts response rates in general and among physicians in particular (e.g., Delnevo, Abatemarco, and Steinberg 2004; Singer and Ye 2013; Dillman, Smyth, and Christian 2014). The literature on physician surveys indicates that higher incentives are generally more effective at producing higher response rates (Gunn and Rhodes 1981; Mizes, Fleece, and Roos 1984; Asch, Christakis, and Ubel 1998; Kellerman and Herold 2001). Given our limited resources in Wave 3, we could offer the regular $50 incentive to only half of our sample; to the other half, we could offer $25. We therefore hypothesized that we would see higher response rates in the group receiving the $50 incentive compared to the group receiving $25. Previous research has shown that the context in which a survey is fielded can affect response rates (e.g., Johnson et al. 2006) and that COVID-19 specifically has led to lower response rates (Bates and Zamadics 2021). We therefore further hypothesized that, due to the pandemic and the special challenges it presented to physicians, even the $50 group would have response rates lower than in previous waves.

Methods

Five hundred IMPs were randomly selected from the American Medical Association’s Physician Masterfile. With the first mailed invitation to complete the survey anonymously online, physicians were randomly assigned to receive an upfront incentive of either a $50 or $25 Starbucks gift card. There were no significant differences in incentive condition by age or sex. Three additional reminders were mailed, each 10 days apart; the final mailing contained a paper version of the survey and a prepaid return envelope.
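As a rough illustration (our own sketch, not the study's actual procedure or code), the random assignment of the 500 sampled physicians to the two incentive conditions could be implemented as follows:

```python
# Minimal sketch of randomly assigning 500 sampled physicians to two
# incentive conditions of 250 each. Illustrative only; not the study's code.
import numpy as np

rng = np.random.default_rng(seed=2021)    # fixed seed for reproducibility
physician_ids = np.arange(500)            # hypothetical sample identifiers
shuffled = rng.permutation(physician_ids)

group_50 = shuffled[:250]                 # offered the $50 Starbucks gift card
group_25 = shuffled[250:]                 # offered the $25 Starbucks gift card
```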

These procedures differed somewhat from previous waves of the study. In Wave 1, most respondents completed the survey via paper and pencil, though some were randomized to a web-push condition. In that wave, we also experimentally evaluated the impact of differing incentive amounts ($25 or $50) and different retailers. In Wave 2, all respondents were recruited via a web-push approach and incentivized with a $50 Starbucks gift card. In that wave, 750 IMPs were randomly selected for invitation to the survey.

The survey was offered in English and consisted of 40 items. Data were collected between May and July 2021. We calculated the American Association for Public Opinion Research Response Rate 3 (American Association for Public Opinion Research 2016) to compare the impact of the different incentives.
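For readers less familiar with the metric, AAPOR Response Rate 3 counts complete interviews in the numerator and applies an estimated eligibility rate e to cases of unknown eligibility in the denominator. In the Standard Definitions notation it takes roughly the form

    RR3 = I / [(I + P) + (R + NC + O) + e(UH + UO)]

where I is complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other non-respondents, and UH and UO are cases of unknown eligibility.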

Results

Table 1. Response Rates by Incentive Level and Demographic Variablesᵃ

                   $25 incentive (original N = 250)    $50 incentive (original N = 250)
                   RR3ᵇ         N                      RR3ᵇ         N
Overall            48.0%        64                     52.4%        77
Age
  Under 45         53.7%        7                      70.0%        13
  45-54            42.6%        18                     53.1%        27
  55-64            43.1%        18                     38.3%        19
  65+              54.0%        21                     50.8%        18
Sex
  Male             46.8%        43                     49.4%        43
  Female           50.0%        21                     57.4%        34

ᵃ Demographic information extracted from the sampling frame.
ᵇ AAPOR Response Rate 3.

We obtained a response rate of 50.2%, considerably lower than in previous waves (62.6% in 2018 and 59.3% in 2019). In line with our hypotheses, Table 1 suggests this was partially due to the lower incentive: the group offered $25 achieved a response rate of only 48.0%, compared to 52.4% among the $50 incentive group. Even the group offered $50, however, had a response rate substantially lower than in previous years.

We observed striking differences by age and gender, the two demographic variables available in our sampling frame. IMPs under 45 had a 16 percentage point higher response rate when offered $50, and among those 45-54 the difference was nearly 11 percentage points. In contrast, participants 55 and older responded at higher rates to the lower incentive. Although both male and female IMPs responded at higher rates to the larger incentive, the difference was greater for females (over 7 percentage points).

Of course, for survey practitioners who may field physician surveys during COVID-19, there are considerations beyond response rates that are important at the survey design stage. Table 2 therefore offers further information on the impact of the different incentive amounts on survey cost, the number of reminders required to obtain a response, and the mode of completion.

Table 2. Impact of Incentives on Cost, Reminders, and Mode of Completion

                            $25 incentive (original N = 250)   $50 incentive (original N = 250)   P-value
Total completes             64                                 77
Cost
  Incentives                $6,250                             $12,500                            --
  Mailings/paper supplies   $3,620                             $3,517                             --
  Overall                   $9,870                             $16,017                            --
  Per complete              $154.22                            $208.01                            --
Avg. no. of reminders       2.3                                1.8                                .029ᵃ
Mode of completion                                                                                .220ᵇ
  Online                    88%                                94%
  Paper                     13%                                6%

ᵃ Based on an independent-samples t-test with equal variances not assumed.
ᵇ Based on a chi-square test.

When it comes to the cost of the survey, the $50 incentive condition cost considerably more than the $25 condition. Indeed, for a 35% increase in cost per complete and a 62% increase in overall cost, the $50 condition resulted in only a 9% (4 percentage point) increase in response rate. Survey practitioners interested in surveying physicians in the context of COVID-19 should be aware of this dynamic when choosing the incentive amount to offer during recruitment.
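These figures follow directly from Table 2; a quick arithmetic check (our own calculation, not part of the original analysis):

```python
# Verify the relative cost and response-rate increases from the Table 2 figures.
per_complete_25, per_complete_50 = 154.22, 208.01
overall_25, overall_50 = 9_870, 16_017
rr_25, rr_50 = 48.0, 52.4

print(f"Per-complete cost increase: {per_complete_50 / per_complete_25 - 1:.0%}")  # ~35%
print(f"Overall cost increase:      {overall_50 / overall_25 - 1:.0%}")            # ~62%
print(f"Relative RR increase:       {rr_50 / rr_25 - 1:.0%}")                      # ~9%
print(f"Absolute RR increase:       {rr_50 - rr_25:.1f} percentage points")        # ~4
```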

On average, fewer reminders were needed in the $50 incentive group than in the $25 incentive group (1.8 vs. 2.3). Meanwhile, the mode of completion (online vs. paper) did not differ significantly between the two incentive groups.
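For practitioners who want to run this kind of comparison on their own data, a minimal sketch using SciPy is shown below. The per-respondent reminder counts and the mode-of-completion cell counts are illustrative placeholders, since the article reports only group means, percentages, and p-values:

```python
# Illustrative sketch (not the authors' code or raw data): how the Table 2
# comparisons could be computed with SciPy.
import numpy as np
from scipy import stats

# Hypothetical per-respondent reminder counts; the article reports only the
# group means (2.3 vs. 1.8), so these arrays are placeholders.
reminders_25 = np.array([3, 2, 2, 3, 1, 2, 3, 2])   # $25 group (illustrative)
reminders_50 = np.array([2, 1, 2, 1, 2, 2, 1, 3])   # $50 group (illustrative)

# Welch's t-test ("equal variances not assumed", as in note a of Table 2).
t_stat, p_reminders = stats.ttest_ind(reminders_25, reminders_50, equal_var=False)

# 2x2 table of completion mode by incentive group; counts are approximations
# inferred from the percentages and group sizes in Table 2, not reported values.
mode_counts = np.array([[56, 8],    # $25 group: online, paper
                        [72, 5]])   # $50 group: online, paper
chi2, p_mode, dof, expected = stats.chi2_contingency(mode_counts)

print(f"Reminders: t = {t_stat:.2f}, p = {p_reminders:.3f}")
print(f"Mode of completion: chi2 = {chi2:.2f}, p = {p_mode:.3f}")
```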

Discussion

In line with previous literature and our hypotheses, we found that IMPs responded at higher rates to the higher incentive amount. However, while the lower incentive depressed response rates, even the group offered the original amount failed to achieve response rates comparable to those of previous years. We attribute this to the pandemic.

This study has some limitations. First, while the samples assigned to the different incentive conditions did not differ from one another in terms of age or sex, the two demographic variables to which we had access in the sampling frame, there may be other variables that we were unable to measure on which the samples were imbalanced. Second, while the internal validity of the study is high given random assignment of respondents to incentive levels, we do not have population benchmarks to which to compare our sample. However, given that only small amounts of variance on potentially moderating variables are necessary to estimate valid, generalizable treatment effects (Druckman and Kam 2011), we are confident in the generalizability of our findings. Third, given that our survey administration procedures were not exactly the same across waves, it is possible that differences unrelated to COVID-19 explain some of the variation in response rates across waves. Finally, even before COVID, response rates to surveys had been declining. We cannot rule out that some of the decline in response rates on which we report here is due to that ongoing trend.

While COVID-19 has made it more difficult to survey physicians, understanding their attitudes and opinions is perhaps more important now than ever before. It therefore behooves researchers to continue investigating how to increase physician responses in this challenging new context.

Submitted: March 01, 2022 EDT

Accepted: September 14, 2022 EDT

References

American Association for Public Opinion Research. 2016. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 9th ed. AAPOR.
Asch, David A., Nicholas A. Christakis, and Peter A. Ubel. 1998. “Conducting Physician Mail Surveys on a Limited Budget: A Randomized Trial Comparing $2 Bill versus $5 Bill Incentives.” Medical Care 36 (1): 95–99. https://doi.org/10.1097/00005650-199801000-00011.
Bates, Nancy, and Joe Zamadics. 2021. “COVID-19 Infection Rates and Propensity to Self-Respond in the 2020 U.S. Decennial Census.” Survey Practice 14 (1): 1–12. https://doi.org/10.29115/sp-2021-0002.
Cho, Young Ik, Timothy P. Johnson, and Jonathan B. VanGeest. 2013. “Enhancing Surveys of Health Care Professionals: A Meta-Analysis of Techniques to Improve Response.” Evaluation & the Health Professions 36 (3): 382–407. https://doi.org/10.1177/0163278713496425.
Delnevo, Cristine D., Diane J. Abatemarco, and Michael B. Steinberg. 2004. “Physician Response Rates to a Mail Survey by Specialty and Timing of Incentive.” American Journal of Preventive Medicine 26 (3): 234–36. https://doi.org/10.1016/j.amepre.2003.12.013.
Delnevo, Cristine D., and Binu Singh. 2021. “The Effect of a Web-Push Survey on Physician Survey Responses Rates: A Randomized Experiment.” Survey Practice 14 (1): 1–9. https://doi.org/10.29115/sp-2021-0001.
Dillman, Don A., Jolene D. Smyth, and Leah Melani Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. Hoboken, NJ: John Wiley & Sons.
Druckman, James N., and Cindy D. Kam. 2011. “Students as Experimental Participants: A Defense of the ‘Narrow Data Base.’” In Cambridge Handbook of Experimental Political Science, edited by James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia, 41–57. New York: Cambridge University Press.
Gunn, Walter J., and Isabelle N. Rhodes. 1981. “Physician Response Rates to a Telephone Survey: Effects of Monetary Incentive Level.” Public Opinion Quarterly 45 (1): 109. https://doi.org/10.1086/268638.
Johnson, Timothy P., Young Ik Cho, Richard T. Campbell, and Allyson L. Holbrook. 2006. “Using Community-Level Correlates to Evaluate Nonresponse Effects in a Telephone Survey.” Public Opinion Quarterly 70 (5): 704–19. https://doi.org/10.1093/poq/nfl032.
Kellerman, Scott E., and Joan Herold. 2001. “Physician Response to Surveys: A Review of the Literature.” American Journal of Preventive Medicine 20 (1): 61–67. https://doi.org/10.1016/s0749-3797(00)00258-0.
Mizes, J. Scott, E. Louis Fleece, and Cindy Roos. 1984. “Incentives for Increasing Return Rates: Magnitude Levels, Response Bias, and Format.” Public Opinion Quarterly 48 (4): 794. https://doi.org/10.1086/268885.
Singer, Eleanor, and Cong Ye. 2013. “The Use and Effects of Incentives in Surveys.” The ANNALS of the American Academy of Political and Social Science 645 (1): 112–41. https://doi.org/10.1177/0002716212458082.
Thorpe, C., B. Ryan, S. L. McLean, A. Burt, M. Stewart, J. B. Brown, G. J. Reid, and S. Harris. 2009. “How to Obtain Excellent Response Rates When Surveying Physicians.” Family Practice 26 (1): 65–68. https://doi.org/10.1093/fampra/cmn097.
VanGeest, Jonathan B., Timothy P. Johnson, and Verna L. Welch. 2007. “Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review.” Evaluation & the Health Professions 30 (4): 303–21. https://doi.org/10.1177/0163278707307899.
