Physician surveys are an important tool in obtaining information on healthcare-related attitudes, beliefs, and practices (McLeod et al. 2013), but achieving a high response rate for this population is challenging (Cho, Johnson, and VanGeest 2013; Thorpe et al. 2009; VanGeest, Johnson, and Welch 2007). Given the potential for low response rates in this population (Asch, Jedrziewski, and Christakis 1997; Cummings, Savitz, and Konrad 2001), understanding research strategies that facilitate adequate response rates in physician surveys is critical to strengthen generalizability and to reduce bias. A recent review has found that incentives, repeated follow-up, and data collection mode are important factors related to response rates in general as well as for this target population (Dillman 2015; Dillman, Smyth, and Christian 2014; Field et al. 2002).
The application of Dillman’s Tailored Design Method has been successful in achieving high response rates (range 56%–86%) from healthcare providers on a variety of health topics (Abatemarco, Steinberg, and Delnevo 2007; Delnevo, Abatemarco, and Steinberg 2004; Ferrante et al. 2008, 2009; Steinberg et al. 2011). More recently, other strategies, including web-based and mixed-mode surveys, have been explored for healthcare provider surveys to combat declining response rates (Millar and Dillman 2011). Web-based survey delivery to physicians offers immediate distribution, real-time data, and low costs (Cunningham et al. 2015). Despite these advantages, studies have shown that physician response rates for web-only surveys are lower than for mail surveys (Dillman 2015; Lusk et al. 2007). In mixed-mode survey methods, one mode of response is offered initially, followed by an alternate mode in a follow-up contact (Millar and Dillman 2011). While mixed-mode surveys have been shown to facilitate response rates (Beebe et al. 2007), challenges remain in adapting this method to the physician population. Indeed, mail surveys tend to have higher response rates for the physician population than online or mixed-mode surveys (Cho, Johnson, and VanGeest 2013). Furthermore, alternate modes of data collection, such as online, are problematic given that there is no comprehensive list of physician email addresses to serve as a sampling frame (Braithwaite et al. 2003); while email addresses are available in the American Medical Association Physician Masterfile, they are not included for all physicians and may be inaccurate (Klabunde et al. 2012). Until comprehensive sampling frames of physician email addresses exist, mail will remain the dominant delivery approach.
Recent work in survey methodology has explored web-push mail surveys, which may yield higher response rates than paper-only survey delivery (McMaster et al. 2017). In a web-push survey, the initial correspondence is a postal request to respond by web. Follow-up mailings include a reminder letter to complete the survey online, followed by a paper questionnaire. Offering a paper survey after the initial web request has been shown to increase response rates. The extent to which this method could work for physician surveys is unknown; as such, this split-sample experiment examines differences in response rates between mail-only delivery and web-push delivery in a sample of physicians.
This survey experiment was embedded in a larger national, repeated cross-sectional mail survey of physicians focused on attitudes and beliefs regarding tobacco use, smoking cessation, and electronic cigarettes (Singh et al. 2017; Steinberg, Giovenco, and Delnevo 2015). In brief, the sampling frame for the national study was compiled from the American Medical Association’s Physician Masterfile, via the vendor Medical Marketing Service; 3,000 board-certified practicing physicians, equally distributed across six specialties (family medicine, internal medicine, obstetrics/gynecology [ob/gyn], cardiology, pulmonary/critical care, and hematology/oncology), were randomly selected. From the national sample, we randomly selected 1,000 physicians and randomly assigned them to one of two survey mode conditions: traditional mail or web-push survey. Data collection occurred between February and April 2018; data were analyzed March 2019.
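The two-stage selection described above (a 1,000-physician subsample drawn from the national sample, split evenly between the two mode conditions) can be sketched as follows. This is a minimal illustration only; the integer IDs and the seed are hypothetical stand-ins, not study data:

```python
import random

random.seed(2018)  # illustrative seed, not from the study

# Hypothetical national sample: 3,000 physician record IDs.
national_sample = list(range(3000))

# Draw the 1,000-physician experimental subsample, then randomly
# assign half to each survey mode condition.
experiment = random.sample(national_sample, 1000)
random.shuffle(experiment)
mail_arm, web_push_arm = experiment[:500], experiment[500:]

assert len(mail_arm) == len(web_push_arm) == 500
assert set(mail_arm).isdisjoint(web_push_arm)
```

Because `random.sample` draws without replacement, no physician can appear in both arms.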
The first mailing contained an introductory cover letter, which differed by mode condition, and a $25 Starbucks® gift card. Those assigned to the mail survey mode also received a paper copy of the survey, while those assigned to the web-push mode were given instructions in the cover letter on how to complete the web survey (i.e., the survey URL with an anonymous login code). One week after the first mailing, we sent postcard reminders to nonrespondents; the web-push postcard contained the survey URL and login code. The third mailing to nonresponders mirrored the first contact, minus the gift card. The fourth mailing to all nonresponders was mixed-mode and included a paper survey as well as a cover letter with instructions on how to complete the web survey. As such, nonresponders in either condition could choose their survey mode (i.e., a web-push nonrespondent could submit a paper survey, and a mail-mode nonrespondent could now complete the survey online).
Overall response rates were calculated using the American Association for Public Opinion Research’s response rate 3 (RR3) calculation (Smith 2009), which estimates the proportion of cases of unknown eligibility that are eligible using data gathered during fielding. For example, if a survey had 100 completes and 50 ineligibles, the eligibility rate applied to nonresponders of unknown status is assumed to be about 67% (i.e., 100 divided by 150). We examined response rates by mode condition after each physician contact as well as final response rates by demographics.
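The RR3 calculation can be sketched under the simplifying assumption that completes and screened-out ineligibles are the only cases with known eligibility status (no separately recorded refusals). The per-arm inputs are taken from figures reported in this article: 500 sampled per arm, 75 vs. 83 ineligibles, and 181 completes per arm (inferred from the 362 total completes split evenly):

```python
def rr3(completes, ineligible, sampled):
    """AAPOR Response Rate 3: completes divided by estimated eligible cases.

    Assumes the only cases of known eligibility status are completes and
    ineligibles, so the eligibility rate e applied to cases of unknown
    status is completes / (completes + ineligible).
    """
    e = completes / (completes + ineligible)
    unknown = sampled - completes - ineligible
    return completes / (completes + e * unknown)

print(f"mail:     {rr3(181, 75, 500):.1%}")   # 51.2%
print(f"web-push: {rr3(181, 83, 500):.1%}")   # 52.8%
```

Under this simplification the formula reduces to (completes + ineligibles) / sampled, which is why the web-push arm, with more identified ineligibles, shows a slightly higher rate despite identical completes.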
Overall, 362 physicians responded to the survey, equally distributed across the two survey modes. A total of 158 cases were determined to be ineligible (i.e., death, retirement, no active medical license in state, not board certified, not providing outpatient care). More ineligible cases were identified in the web-push condition (83 vs. 75). Of note, the first question on the survey screened for eligibility (i.e., providing outpatient care); in the web-push condition, the survey terminated upon a “no” response, whereas in the traditional paper condition, the participant would need to indicate “no” and mail the survey back. This small difference in ineligibles produces slightly different overall response rates, despite the same number of completes in each group.
As shown in Table 1, response rates did not significantly differ between the two data collection modes overall or within demographic subgroups, with both modes achieving a response rate just over 50%. Females had slightly higher response rates than males in both modes, and among females the web-push condition performed slightly better. Differences in response rates were noted across specialty groups, with the highest response rates among family and internal medicine and the lowest among cardiology. Slightly higher response rates for the web-push condition were noted for family medicine and oncology, whereas slightly higher response rates were noted for the traditional mail condition for ob/gyns. Response rates overall were highest for those under the age of 45 and declined with age in the web-push mode. The traditional mail survey mode performed slightly better for those 65 years of age or older.
Figure 1 depicts cumulative survey completion for the two data collection modes. Differences between the two modes are noted within the first three weeks of fielding, such that the web-push condition yielded greater completions than the mail survey, after which the mail survey condition catches up and keeps pace with the web-push condition. While, ultimately, the two yielded nearly identical response rates, the shorter response time of the web-push condition yielded efficiencies, requiring fewer follow-up contacts. Indeed, a total of 1,714 mailings were prepared for the web-push condition versus 1,781 for the traditional mail survey. In addition, as noted previously, we identified a greater number of ineligible respondents in the web-push condition.
An economic evaluation comparing the costs of each mode (Table 2) found that web-push costs were roughly 25% lower than those of the traditional mail survey. The main sources of savings included printing of the paper survey, postage, and labor costs associated with assembling the mailings and data entry. With the number of completes equal between the two modes, this resulted in a cost per complete of $31.70 for the mail survey compared with $23.80 for the web-push.
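The cost comparison is simple arithmetic: cost per complete is total cost divided by the number of completes. A minimal sketch, assuming 181 completes per arm (inferred from the 362 total completes split evenly) and working backward from the per-complete figures above:

```python
# Per-complete costs as reported (Table 2); completes per arm inferred
# from 362 total responses split evenly between the two conditions.
completes = 181
cost_per_complete = {"mail": 31.70, "web_push": 23.80}

# Implied total fielding cost per arm.
totals = {mode: cost * completes for mode, cost in cost_per_complete.items()}
savings = 1 - totals["web_push"] / totals["mail"]

print(f"mail total:     ${totals['mail']:.2f}")      # $5737.70
print(f"web-push total: ${totals['web_push']:.2f}")  # $4307.80
print(f"web-push savings: {savings:.0%}")            # 25%
```

Because completes were equal across arms, the ratio of total costs equals the ratio of per-complete costs, so the relative savings can be read directly from the $31.70 vs. $23.80 figures.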
To our knowledge, this is the first study examining the feasibility of a web-push approach for physician surveys. Overall, we found that the web-push data collection mode obtained about the same response rate as the traditional paper-and-pencil mail survey, with few notable demographic differences. Physicians 65 years of age or older and ob/gyns had lower response rates in the web-push condition than in the mail-only condition. This is consistent with prior research, which has found higher response rates among younger physicians for web-based surveys (Ferrante et al. 2009). In our study, we found higher response rates for females in both delivery modes, and this finding held within most specialties. Previous literature examining differences in physician response rates by gender for mail and web surveys has found mixed results. While some studies find no differences in survey response rate by gender (Kellerman and Herold 2001), others have noted higher response rates among male physicians (Braithwaite et al. 2003; Cunningham et al. 2015), and still others have documented higher rates among female physicians (Cull et al. 2005; Delnevo, Abatemarco, and Steinberg 2004).
When comparing the response rates of the two modes, we see that web-push, when combined with the Tailored Design Method, can achieve a response rate comparable to mail-only delivery, with notable efficiencies. The web-push delivery elicited responses sooner and identified ineligibles earlier, leading to fewer follow-up mailings. Additional costs of mail-only delivery, including printing the paper survey, postage for the return envelope, and time spent on data entry, made the web-push delivery more cost-effective. This finding is consistent with previous research that found web-based survey delivery to be more cost efficient among physicians (Scott et al. 2011).
Limitations include the small sample sizes of specific subgroups (e.g., females), which did not yield statistically significant differences despite meaningful differences between the two survey modes for females and those 65 years of age or older. Likewise, caution is warranted when interpreting the lower response rate for cardiologists: in our sample, males and those 65 years of age or older, who overall had lower response rates, were overrepresented in this specialty. As such, the lower response could be partially explained by these demographic differences.
With survey response rates among physicians continuing to decline, it is important to explore alternative delivery methods for this population. This study found that a web-push delivery design was more cost-effective than mail-only delivery and achieved comparable response rates. For these reasons, the remaining sample of 2,000 physicians for our national survey was fielded using web-push delivery, which achieved an overall response rate above 50%, a notable accomplishment given the challenges of physician surveys. Further research should continue exploring the effectiveness of web-push delivery across a variety of health topics and healthcare provider populations.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
The authors declare that they have no competing interests.
The study described was supported by grant number R01CA190444 from the National Institutes of Health. The content does not necessarily represent the official views of the National Institutes of Health.
All authors participated in designing and developing the study instruments and procedures. All authors assisted with drafting of the paper and interpreting results. CD is the principal investigator of the grant.
Author Contact Information
Cristine D. Delnevo, PhD, MPH
Rutgers Center for Tobacco Studies
303 George Street
New Brunswick, NJ 08901