Introduction
Survey response rates are generally declining, particularly for surveys of young adults (Sheldon et al. 2007) and in settings with poor contact information (Brick and Williams 2012). The emergency department (ED) setting has both features, making this patient population particularly challenging to survey.
One recent survey of ED patients in 50 hospitals obtained low response rates (14%–22%) for single-mode mail-only and telephone-only administration but a higher response rate (29%) for a two-stage sequential mixed mode (SMM) of a mailed survey with telephone follow-up (Parast et al. 2019). Other work has tested nontraditional modes in the ED setting, including on-site electronic-tablet-based survey administration to assess patient experience (The Beryl Institute 2012). While promising, this type of real-time administration might only be feasible for short surveys due to time constraints, including the desire of patients to quickly leave the ED (Tuality Healthcare 2013).
Another alternative method for surveying ED patients is an online survey. Online surveys have been shown to reach younger populations (a low-response-rate population) more effectively than mail-only administration (Kaplowitz, Hadlock, and Levine 2004). Beyond the ED setting, efforts to increase survey response rates and decrease costs have focused largely on SMM approaches that incorporate an online survey, such as an online survey with follow-up by telephone (de Leeuw and de Heer 2002; Groves 2004) or by mail (McMaster et al. 2017). SMM research in a variety of settings has found good response rates and representativeness (de Leeuw 2005; Dillman and Edwards 2016), suggesting promising potential for the ED setting.
We conducted a randomized study investigating five survey administration modes for a patient experience survey measuring the experiences of patients who visited the ED and who were discharged-to-community (DTC), rather than admitted to the hospital. This study evaluated whether any of four alternative survey modes were feasible in the ED setting and obtained higher response rates than a reference two-stage SMM (mailed survey with telephone follow-up), a mode that generally produces the highest response rate for patient experience surveys (Beebe et al. 2005; Elliott et al. 2009). Based on evidence from the ED and other settings, the following four alternative modes were tested: on-site distribution of a paper survey (for patients to complete at home and return by mail); mail notification of a web survey; email notification of a web survey; and three-stage SMM (email notification of a web survey, with mail then telephone follow-up). Real-time electronic-tablet-based survey administration methods were excluded because the instrument length required for a comprehensive evaluation of patient experience was deemed incompatible with administration at discharge. In addition, interviews with ED leaders revealed security, cost, and data safeguarding and cleaning concerns for electronic-tablet-based administration in the ED. In this article, we describe the design and results of this feasibility study in terms of response rates by mode, patient email capture rates (for identifying a representative population), and level of required hospital staff involvement.
The findings here may have implications for other settings with similar features of younger populations or poor contact information, such as foster youth and college students.
Methods
Survey Instrument
The survey instrument contained 43 questions focused on patient perspectives on care in eight domains specific to ED DTC patients: getting timely care, communication with doctors and nurses, communication about medicines, communication about new medicines prescribed before discharge, communication about test results, communication about follow-up care, overall ED rating, and willingness to recommend the ED. The survey instrument (Version 3.0) was developed by the Centers for Medicare & Medicaid Services and is publicly available online (Centers for Medicare & Medicaid Services 2016).
Mode of Administration
This feasibility study employed five survey administration modes (see Table 1). The initial contact attempt was made within 42 days of discharge, and the fielding period closed 42 days after initial contact. For the two-stage and three-stage SMMs, mailed surveys included a cover letter, and five telephone attempts were made across multiple times and days of the week. For on-site distribution, a survey packet containing a cover letter,[1] a hard-copy survey, and a postage-paid business reply envelope was distributed by hospital staff upon discharge. For mail notification of the web survey, the cover letter contained the URL of the online survey and a personal PIN. The email notification of the web survey contained a clickable, personalized link to the online survey. Mailed and emailed URLs were kept as short as possible, and the online survey was mobile-optimized.
Study Design
Eight hospital-based EDs were recruited from a random sample of U.S. hospitals listed in the 2013 American Hospital Association database with at least 14,000 annual ED visits, stratified by annual ED visit volume[2] and region[3]. To be eligible to participate, hospitals had to (1) collect patient email addresses, (2) report at least a 20% email capture rate among their ED patients, and (3) be willing to participate in all modes. Participating hospitals volunteered to be in the study and did not receive monetary compensation for their participation.
A total of 4,017 eligible discharged patients were sampled from the eight hospital-based EDs, approximately 500 per ED. Patient exclusion criteria paralleled those used for the Hospital Consumer Assessment of Healthcare Providers and Systems Survey (Centers for Medicare & Medicaid Services 2017). For on-site distribution, hospital staff were asked to exclude in real time patients who were under 18 years old, discharged anywhere besides home, or visiting the ED for primary alcohol or drug intoxication; it was infeasible to expect staff to apply other exclusion criteria in real time.
Patients were randomized to the five modes using a block randomization design across four weeks in February 2016, with two hospitals assigned to each week. On-site distribution was administered during an assigned one-week block, during which each ED was asked to distribute 100 survey packets to a census of patients until the packets ran out. For each hospital, a random sample of 400 patients was taken across the other three weeks, and these patients were randomized to the remaining four modes. This design ensured that the same individual would not be sampled twice for the same visit.
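As a rough illustration of the allocation step (not the study's actual implementation, and with a function name and balanced-block scheme assumed for this sketch), randomizing sampled patients to modes in complete blocks can be written as:

```python
import random

def block_randomize(patient_ids, modes, seed=2016):
    """Assign each patient to one survey mode, balancing assignments
    by filling complete blocks: each block of len(modes) consecutive
    patients (after shuffling) receives each mode exactly once."""
    rng = random.Random(seed)
    ids = list(patient_ids)
    rng.shuffle(ids)
    assignment = {}
    for start in range(0, len(ids), len(modes)):
        block = ids[start:start + len(modes)]
        order = list(modes)
        rng.shuffle(order)
        for pid, mode in zip(block, order):
            assignment[pid] = mode
    return assignment

# Example: 400 patients in one hospital split across four modes
modes = ["two-stage SMM", "three-stage SMM", "mail-to-web", "email-to-web"]
assignment = block_randomize(range(400), modes)
counts = {m: sum(1 for v in assignment.values() if v == m) for m in modes}
# With 400 patients and 4 modes, each mode receives exactly 100 patients
```

Blocking guarantees balanced mode assignment within each hospital, which supports the within-hospital mode comparisons described later.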
Hospitals provided updated administrative data approximately three months after sampling to identify additional post-sampling ineligibles.[4] Our design provided 80% power to detect differences in response rates of 1.4% to 3.0% between modes.
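A power statement like the one above can be approximated with a standard two-proportion z-test under a normal approximation. The sketch below is generic, not the study's actual computation; the per-mode sample size and response rates plugged in are illustrative assumptions:

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
    """Approximate power of a two-sided two-proportion z-test
    (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    # Pooled proportion for the standard error under the null
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se0 = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    # Unpooled standard error under the alternative
    se1 = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return nd.cdf((abs(p1 - p2) - z_alpha * se0) / se1)

# Illustrative: roughly 800 patients per mode (4,017 across five modes),
# comparing a 25% reference response rate against a 31% alternative
power = power_two_proportions(0.25, 0.31, 800, 800)
```

Larger assumed differences between modes yield higher power at the same sample size, which is why small response-rate differences require the sample sizes reported above.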
On-site Distribution Debriefing
Debriefing calls were conducted with each participating hospital to gather qualitative feedback from hospital staff about their experiences with on-site distribution. Hospital staff are uniquely positioned to provide information about hospitals’ willingness and ability to implement different survey distribution approaches and to distribute surveys to the intended patients within the physical ED setting. The debriefing protocol included questions about planned versus actual implementation strategies and encountered barriers.
Statistical Analysis
Response rates were calculated using the American Association for Public Opinion Research Response Rate #3 (AAPOR RR3), in which the numerator is the number of completes and the denominator is the sum of completes, partials, refusals, breakoffs, and estimated eligible cases among those of unknown eligibility. Estimated unknown eligibility was calculated by multiplying the number of cases of unknown eligibility within each mode by an assumed eligibility rate, which was set for all modes to the highest observed value (94.4%, from the two-stage SMM). Logistic regression was used on our observed data[5] to compare response rates for each experimental mode to the two-stage SMM, controlling for hospital fixed effects.
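Following the description above, the RR3 calculation can be written out directly; the disposition counts below are invented for illustration only:

```python
def aapor_rr3(completes, partials, refusals, breakoffs, unknown, e):
    """AAPOR Response Rate 3: completes divided by completes + partials
    + refusals + breakoffs + e * cases of unknown eligibility,
    where e is the assumed eligibility rate among unknowns."""
    denom = completes + partials + refusals + breakoffs + e * unknown
    return completes / denom

# Hypothetical mode: 120 completes, with the assumed eligibility
# rate of 94.4% applied to 300 cases of unknown eligibility
rate = aapor_rr3(completes=120, partials=10, refusals=40,
                 breakoffs=5, unknown=300, e=0.944)
# rate = 120 / 458.2, roughly 26.2%
```

Because e discounts the unknown-eligibility cases rather than counting them all as eligible, RR3 falls between the most conservative rate (all unknowns eligible) and the most liberal one (no unknowns eligible).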
Email capture rates were of interest because two of the examined modes used patients’ email addresses for survey administration, one of them exclusively. An email address was considered valid if no undeliverable notification was received in response to our email. Given evidence that email-only modes may be unrepresentative (Elliott et al. 2013), we used multivariate logistic regression to examine patient characteristics associated with having a valid email address (age, gender, discharge status[6], and sampling batch[7]), with hospital fixed effects, among sampled patients from the two modes that included email.
Results
Analytic Results
Table 2 presents response rates overall and by survey administration mode. The overall response rate among eligible cases was 14.2%. The response rate using three-stage SMM (30.7%) was significantly higher than two-stage SMM (25.3%), whereas all other modes were significantly lower than two-stage SMM (p<0.001 for all comparisons). The response rate using mail notification of web mode was lowest (0.8%), followed by email notification of web mode (4.3%) and on-site distribution (9.6%). Overall, 55 patients completed the survey by web, including all respondents in the email and mail notification of web modes and 16 (6.8%) in the three-stage SMM mode.
For the two modes that used email survey administration (three-stage SMM and email notification of web), Table 3 shows the proportion of sampled patients, valid email capture rates, and the odds of having a valid email address by patient characteristic and hospital. Overall, 30.1% of patients received an email, 59.1% did not have an email address in the hospital’s administrative record, and 10.8% of addresses were undeliverable. Valid email capture rates ranged from 4.0% to 48.3% across hospitals. The strongest predictor of having a valid email address was the patient’s hospital. There were also higher capture rates for younger patients (30.9% for 18–24 year olds vs. 17.8% for age 85 and older) and women (33.7% vs. 22.5% for men). Many of the hospital, gender, and age differences in email capture rates persisted in the multivariate model. The adjusted odds of a valid email address were significantly lower for male patients (odds ratio [OR] = 0.65 vs. female patients) and for patients 85 years of age or older (OR = 0.37 vs. patients 18–24 years of age).
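For intuition, the unadjusted odds ratio implied by two capture rates can be computed directly. This is a back-of-the-envelope check, not the adjusted model estimate reported above, which controls for age, hospital, and other covariates:

```python
def odds_ratio(p_group, p_reference):
    """Unadjusted odds ratio: odds in the comparison group divided
    by odds in the reference group."""
    odds = lambda p: p / (1 - p)
    return odds(p_group) / odds(p_reference)

# Valid-email capture rates: 22.5% for men vs. 33.7% for women
or_men = odds_ratio(0.225, 0.337)
# Roughly 0.57 unadjusted, versus the adjusted OR of 0.65
```

The gap between the unadjusted value and the reported adjusted OR reflects the covariates (notably hospital) absorbed by the multivariate model.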
The low valid email capture rate meant that, in the email notification of web mode, the 70.9% of patients without a valid email address never had the opportunity to complete the survey. In the three-stage SMM, 31.2% of patients received an email; the remaining 68.8% still had the opportunity to complete the survey because of the mail and telephone follow-up. Among patients in the three-stage SMM who received an email, 16 (6.6%) completed the survey by web.
Given that the two-stage and three-stage SMM modes had the highest response rates and that patients with a valid email address differ from those without, we examined representativeness of respondents across these two modes by running a logistic regression predicting response (vs. nonresponse), controlling for patient age, gender, discharge status, sampling batch, hospital fixed effects, and mode interactions with age and gender (results not shown). No interaction effects were significant at the 0.05 level, suggesting that respondent pools across the SMM modes with and without an email protocol are similar with respect to age and gender.
Staff Debriefing Results
For the on-site distribution mode, distributing survey packets proved challenging because of the multiple paths of patient exit/discharge and the sheer number of staff who discharged patients across all shifts. Most hospitals stored the surveys in a central location, such as the main nurses’ station. Some hospitals divided the packets among individual “mobile stations,” allowing nursing staff to choose where they stored packets. Hospitals that placed the packets in a central location admitted that nurses were more likely to forget distribution if the packets were out of sight or reach during discharge. After a full day of survey distribution, one hospital realized it was not distributing surveys to “fast-track” patients discharged from the lobby. Several hospitals tweaked their initially ineffective plans after distribution began by expanding distribution, such as from a single charge nurse to multiple nurses, or by adding physicians. Hospital staff noted that survey distribution added a burden to already overextended staff. They also acknowledged that some selection bias was involved when distributing surveys; some staff admitted to withholding packets from patients with distant or negative demeanors.
Discussion
The three-stage SMM outperformed all other study modes. Our study suggests that mail/telephone SMM protocols (with or without email components) may obtain the highest response rates in the ED setting. An email component significantly increased the response rate beyond the two-stage approach. An additional benefit, demonstrated in a previous study, is the potential cost savings from adding an email component (Patrick et al. 2018). Once an online survey has been developed and more patients complete the survey by web, mail and telephone follow-up costs may decrease as fewer patients utilize these more expensive modes. Future work is needed to identify and test alternative strategies to invite patients to an online survey beyond emailing a link (e.g., texting a link); improve the availability of email address information; and identify the optimal sequencing and timing of mixed-mode protocols and the frequency of reminders.
Extending previous findings in other settings to the ED setting, both survey modes that used web alone obtained the lowest response rates in this study (<5%), indicating that a web-only administration, particularly one relying solely on email availability, is not promising for surveying ED populations. The protocol employed for the email notification of web mode used only two reminders, which may have been insufficient to significantly bolster response rates. However, caution is needed, as previous work suggests that response rate returns diminish with protocols that use an excessive number of reminders (Cook, Heath, and Thompson 2000; Klofstad, Boulianne, and Basson 2008). Low email address capture rates in hospital administrative data are not surprising but limited the contribution of email modes. Capture rates varied greatly between hospitals, suggesting possible bias in the types of patients who have a valid email address available in administrative data. In addition, not all hospitals met the >20% capture rate participation requirement, suggesting hospitals may not accurately assess their own rates. Informal debriefings with hospitals revealed that collection of email addresses is generally not a required field, standardized, or emphasized during staff training and may be skipped during registration, especially for critical patients who need immediate care. Our findings suggest that ED patients with a valid email address may not be representative of eligible ED patients. The limitations of an email-based web-only administration further highlight the benefit of a protocol that employs multiple modes of contact to ensure all patients have the opportunity to complete the survey.
Finally, on-site distribution obtained a low response rate (<10%) and presented implementation difficulties that could render it infeasible in the ED setting.
While our study was limited to eight hospitals that had to meet specific criteria, including willingness to participate voluntarily, the hospitals were geographically diverse, and the randomization of modes within hospitals ensures the internal validity of comparisons between modes (Elliott et al. 2009).
Given the overall low response rates in this population, the issue of nonresponse bias is important to consider. Previous work in the ED has shown that older patients and female patients are generally more likely to respond (Parast et al. 2019). In our study, we found that when comparing the two best performing modes (two-stage and three-stage SMM), the likelihood of response was similar between the two modes regardless of age or gender. Differential survey nonresponse can mean that respondents are not representative of the full eligible sample. Nonresponse bias can result if these differences are also related to the outcomes being measured. The potential for nonresponse bias should be taken into account when deciding the best protocol approach.
Acknowledgments
The authors thank Laura Giordano and the Health Services Advisory Group team, and Rosa-Elena Garcia and other RAND Survey Research Group staff for their contributions to hospital recruitment and data collection.
Funding
This work was supported by the Centers for Medicare & Medicaid Services, Department of Health and Human Services [HHSM-500-2016-00093G]. The contents of this publication do not necessarily reflect the views or policies of the Department of Health and Human Services, nor does the mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.
Correspondence may be sent to Megan Mathews at the RAND Corporation, 1776 Main Street, Mailstop: M5N, Santa Monica, CA 90401, via email at: Megan_Mathews@rand.org, or via telephone at: 310-393-0411.
[1] The cover letter also provided a toll-free number that patients could use to ask questions or to complete the survey by telephone with a live interviewer.
[2] Annual ED visits were categorized into three groups: medium: 14,000–24,999; large: 25,000–49,999; extra large: 50,000+.
[3] The four census regions were used: Northeast, South, Midwest, and West.
[4] Many administrative variables used to determine patient eligibility are not available in hospitals’ electronic systems at time of discharge and are later extracted from charts for billing purposes. Linkage was not possible for the on-site distribution mode, where identifiable administrative data did not exist.
[5] Estimated unknown eligibility rates were not accounted for in these models.
[6] Most DTC patients are discharged home for self-care. However, a small proportion (14%) were discharged elsewhere in the community, including patients transferred to a short-term facility/institution or those utilizing home health services.
[7] The sample was administered in two two-week batches in the month of February.