Articles
Vol. 2, Issue 2, 2009 | January 31, 2009 EDT

The Costs of Using Pre-Paid Incentives in a Physician Survey

Sean O. Hogan
https://doi.org/10.29115/SP-2009-0009
Survey Practice
Hogan, Sean O. 2009. “The Costs of Using Pre-Paid Incentives in a Physician Survey.” Survey Practice 2 (2). https://doi.org/10.29115/SP-2009-0009.

Abstract

The use of pre-paid monetary incentives in surveys of respondents with specialized knowledge, such as business executives and physicians, has been shown to help reduce overall survey administration costs. Current research suggests that by offering a cash incentive to respondents at the outset of data collection, survey managers may reduce the need for follow-up efforts. This paper uses the results of a survey of office-based physicians in the United States to put a finer point on another, perhaps overlooked, component of this “cost saving”: respondents and non-respondents alike often choose not to cash the checks. Almost all sample members who decline to complete a survey forgo cashing the incentive check. About one-third of the physicians who responded also did not cash their checks.

This paper addresses two questions. The first is: to what extent are pre-paid cash incentives misdirected to sample members who cash the check but do not respond to the survey? The second is: which physician specialties accept the pre-paid monetary incentive? The answers have practical value. Survey managers need to anticipate the extent to which their funds (pre-paid incentives) will be misdirected to non-cooperating or ineligible sample members, and a foundation for anticipating this risk can inform the price of services proposed to clients. Because this project sampled a variety of medical specialties, the results also allow us to anticipate funding needs with some granularity by identifying which specialties were most inclined to cash their checks.

Background

Research has explored the practical implications of incentives both for response rates (Armstrong 1975; James and Bolstein 1992; Singer, Groves, and Corning 1999) and for the promptness with which physician-respondents complete a survey (Berry and Kanouse 1987). Jobber et al. (2004) and Kellerman and Herold (2001) suggest that incremental increases in incentives improve responsiveness to surveys among specialized populations such as physicians and business executives. Gunn and Rhodes (1981) experimented with $0, $25, and $50 incentives and found that responsiveness increased with the value of the payments. Similarly, Mizes, Fleese, and Roos (1984) tested $0, $1, and $5 gifts in a survey of 200 physicians and likewise concluded that responsiveness increased with the value of the incentive. On this view, incentives represent reciprocity between researchers and subjects (Gendall, Hoek, and Brennan 1998).

This increase in responsiveness comes with negligible effect on data quality (see, for example, Cantor et al. 1997; Cycyota and Harrison 2002; DeNelvo, Abatemarco, and Steinberg 2004; Doody et al. 2003; Mizes, Fleese, and Roos 1984; Tambor et al. 1993).

Cash incentives are therefore easily justified: they not only improve response rates, but experimental research indicates they have negligible effects on data quality. Notably, pre-paid incentives also help reduce overall administration costs by reducing the need for mail and telephone follow-up contact (Berry and Kanouse 1987).

Methods

Sampling: A stratified random sample of 1,728 office-based physicians from all 50 U.S. states was selected for this study. Physicians were stratified by specialty, urban versus non-urban practice setting, and Census region. The sampled medical specialties were family and general practice, internal medicine, and cardiology, and we began with equal numbers (576) of each specialty. Urbanicity was defined by whether the sampled physician’s office ZIP code was in a metropolitan or non-metropolitan statistical area. The Census regions are Northeast, Midwest, South, and West. The SK & A Information Office Based Physician file, a national profile of office-based physicians in the United States, was used to draw the sample.

Data collection: Data for this study were collected by RTI International on behalf of the Centers for Medicare and Medicaid Services (CMS), an agency of the U.S. Department of Health and Human Services. Data were collected via a web survey, by mail, by telephone, and by fax. Study procedures included a five-wave mailing, telephone prompting, and an outbound facsimile. All mailings used CMS stationery and bore the signature of the agency’s chief medical officer. The first-wave mailing was sent at the end of the first week of January 2006. It informed the respondent of the sponsorship and purpose of the study, noted that the study had been endorsed by three medical societies (the American Academy of Family Physicians, the American College of Cardiology, and the American College of Physicians), mentioned that an incentive would be offered, and assured the respondent of confidentiality.

The second mailing was sent three business days later, with the 8-page survey, supporting materials, a $25 check and reply envelope. These same materials were sent again to non-respondents at the sixth, tenth and twelfth weeks of data collection. A reminder post card was sent during the third week of data collection. Prompting calls were made during the fourth through fifth weeks.

Results

A total of 1,027 complete surveys were returned. After eliminating 111 dead and ineligible cases (6.4% of the sample), the survey achieved an overall response rate of 63.5% of the eligible sample. Family physicians were the most cooperative, with 69% responding; internists were next, with 65%; and cardiologists were the least cooperative, with a response rate of 54%. We received 29 cases with tracking numbers removed; these are not analyzed.
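
For reference, the overall response rate implied by these counts can be reproduced with a simple unweighted calculation (the authors’ exact eligibility adjustments may differ slightly):

\[
\mathrm{RR} = \frac{1{,}027}{1{,}728 - 111} = \frac{1{,}027}{1{,}617} \approx 63.5\%
\]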

A total of 752 (43.5%) of the 1,728 incentive checks were cashed. Table 1 indicates that 90% of those who cashed a check also cooperated in the survey; slightly more than 8% of cashed checks were cashed by non-respondents, and 2% were cashed on behalf of ineligible sample members. Of those who did not cash a check, roughly three-fifths (62%) did not respond to the survey. Interestingly, a large minority (35%) of those who did not cash a check did participate in the survey.

Table 1  Check cashing activity by survey participation.
Sample members Responded Non-respondent Refused Ineligible
Cashed (% of all those cashing) 673 (89.5%) 62 (8.2%) 1 (0.1%) 16 (2.1%)
Did not cash (% of all those not cashing) 344 (35.2%) 520 (53.3%) 17 (1.7%) 95 (9.7%)

Pearson chi-squared test: p < 0.001.

Table 2 summarizes check cashing activity by medical specialty among eligible sample members who completed the survey. Family practitioners (71%) and internists (71%) cashed checks in nearly equal proportions. Meanwhile, a lower proportion of cooperative cardiologists (57%) cashed their checks.

Table 2  Check cashing activity among respondents by medical specialty.
Specialty Cashed Not cashed
Family practitioners 262 (71.4%) 105 (28.6%)
Internists 246 (70.9%) 101 (29.1%)
Cardiologists 161 (56.7%) 123 (43.3%)

Pearson chi-squared test: p < 0.001.
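
The chi-squared results reported for Tables 1 and 2 can be spot-checked from the published cell counts. The short Python sketch below is an illustrative re-computation, not the authors’ original analysis (note the very small “Refused” cell in Table 1, which makes the usual expected-count assumptions marginal).

# Spot-check of the chi-squared tests reported for Tables 1 and 2.
# Cell counts are copied from the tables above; illustrative only.
from scipy.stats import chi2_contingency

# Table 1: check cashing by survey disposition
# columns: responded, non-respondent, refused, ineligible
table1 = [
    [673, 62, 1, 16],    # cashed
    [344, 520, 17, 95],  # did not cash
]

# Table 2: check cashing among respondents, by medical specialty
# columns: cashed, not cashed
table2 = [
    [262, 105],  # family practitioners
    [246, 101],  # internists
    [161, 123],  # cardiologists
]

for name, table in [("Table 1", table1), ("Table 2", table2)]:
    chi2, p, dof, _expected = chi2_contingency(table)
    print(f"{name}: chi2 = {chi2:.1f}, df = {dof}, p = {p:.3g}")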

Among the eligible non-respondents, 10% of non-responding cardiologists cashed a check, 8% of non-responding internists cashed the stipend, and 9% of non-responding family practitioners accepted the honorarium. One may surmise that in some cases office staff deposited the check with other bank transactions without consulting the sampled physician.

Discussion

Prior research has demonstrated that pre-paid incentives help control survey administration costs by reducing the follow-up effort needed to reach satisfactory response levels. It is generally assumed that exchange theory explains this (see Gunn and Rhodes 1981). This paper challenges that assumption, since no “exchange” is finalized when roughly one-third of responding physicians never accept the payment. Perhaps physician-respondents are moved by what the incentive symbolizes: the importance of the survey data to the researcher (Berry and Kanouse 1987). Perhaps incentives are only one of several cues that signal a survey’s importance (Heberlein and Baumgartner 1978). Whatever the reason, experimental research shows that pre-paid incentives bring about the desired effect.

The large majority (90%) of cashed checks were cashed by physicians who also responded to the survey, and check cashing activity among cooperating physicians varied by medical specialty. Only 10% of the checks that were cashed were cashed by ineligible or non-responding sample members. Physicians who do not respond to surveys tend not to cash the pre-paid checks, meaning that only a small proportion of a survey project’s budget is misdirected in this way.

For this project, advance payment of cash incentives placed a total of 4.6% of the incentive budget in the hands of non-participating physicians. This cost can be weighed against the value of checks that responding physicians never cashed: in our case, un-cashed checks from eligible respondents were worth $8,600, well above the $1,975 paid out to ineligible cases and non-respondents.
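
These dollar figures follow directly from the cell counts in Table 1, assuming one $25 check per sample member (a back-of-the-envelope reconstruction):

\[
(62 + 1 + 16) \times \$25 = 79 \times \$25 = \$1{,}975, \qquad 79 / 1{,}728 \approx 4.6\%
\]
\[
344 \times \$25 = \$8{,}600
\]

Here 62, 1, and 16 are the non-respondents, refusals, and ineligible cases who cashed a check, and 344 is the number of respondents who did not.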

Berry and Kanouse (1987) indicate that survey administration costs are lower when incentives are used because less follow-up contact is needed to achieve sufficient response rates. Coupled with this study, one may infer that survey managers can realize additional cost savings by employing pre-paid cash incentives in a survey of physicians. Given that a sizeable fraction of responding physicians never cashed their checks, exchange theory is only a partial explanation of the motivating influence of pre-paid incentives.

References

Armstrong, J.S. 1975. “Monetary Incentives in Mail Surveys.” Public Opinion Quarterly 39:111–16.
Berry, S.H., and D.E. Kanouse. 1987. “Physician Response to a Mailed Survey.” Public Opinion Quarterly 51:102–14.
Cantor, D., B. Allen, P. Cunningham, J.M. Brick, R. Slobasky, P. Giambo, and G. Kenny. 1997. “Promised Incentives on a Random Digit Dial Survey.” In Non-Response in Survey Research, Proceedings of the Eighth International Workshop on Household Survey Non-Response, edited by A. Koch and R. Porst. Mannheim, Germany: ZUMA.
Cycyota, C.S., and D.A. Harrison. 2002. “Enhancing Survey Response Rates at the Executive Level: Are Employee- or Consumer-Level Techniques Effective?” Journal of Management 28 (2): 151–76.
DeNelvo, C.D., D.J. Abatemarco, and M.D. Steinberg. 2004. “Physician Response Rates to a Mail Survey by Specialty and Timing of Incentive.” American Journal of Preventive Medicine 26 (3): 234–36.
Doody, M.M., A.S. Sigurdson, D. Kampa, K. Chimes, B.H. Alexander, E. Ron, R.E. Tarone, and M.S. Linet. 2003. “Randomized Trial of Financial Incentives and Delivery Methods for Improving Response to a Mailed Questionnaire.” American Journal of Epidemiology 157:643–51.
Gendall, P., J. Hoek, and M. Brennan. 1998. “The Tea Bag Experiment: More Evidence on Incentives in Mail Surveys.” Journal of the Market Research Society 40 (4): 347–51.
Gunn, W.J., and I.N. Rhodes. 1981. “Physician Response Rates to a Telephone Survey: Effects of Monetary Incentive Level.” Public Opinion Quarterly 45:109–15.
Heberlein, T.A., and R. Baumgartner. 1978. “Factors Affecting Response Rates to Mailed Questionnaires: A Quantitative Review of the Published Literature.” American Sociological Review 43:447–62.
James, J.M., and R. Bolstein. 1992. “Large Monetary Incentives and Their Effect on Mail Survey Response Rates.” Public Opinion Quarterly 56:442–53.
Jobber, D., J. Saunders, and V.W. Mitchell. 2004. “Prepaid Monetary Incentive Effects on Mail Survey Response.” Journal of Business Research 57:347–50.
Kulka, R.A. 1999. “Providing Respondent Incentives in Federal Statistical Surveys: The Advance of the Real ‘Phantom Menace’?” Presented at the Washington Statistical Society, June 10, 1999.
Mizes, S.J., E.L. Fleese, and C. Roos. 1984. “Incentives for Increasing Return Rates: Magnitude Levels, Response Bias and Format.” Public Opinion Quarterly 48:794–800.
Sheatsley, P.B., and J.D. Loft. 1981. “On Monetary Incentives to Respondents.” Public Opinion Quarterly 45:571–72.
Singer, E., R.M. Groves, and A.D. Corning. 1999. “Differential Incentives: Beliefs about Practices, Perceptions of Equity, and Effects on Survey Participation.” Public Opinion Quarterly 63:251–60.
Tambor, E.S., G.A. Chase, R.R. Faden, G. Geller, K.J. Hofman, and N.A. Holtzman. 1993. “Improving Response Rates through Incentive and Follow-Up.” American Journal of Public Health 83 (11): 1599–1603.
