Survey Practice
Articles | Vol. 12, Issue 1, 2019 | July 22, 2019 EDT

Methods for Improving Response Rates in an Emergency Department Setting – A Randomized Feasibility Study

Megan Mathews, MA, Layla Parast, PhD, Anagha Tolpadi, MS, Marc Elliott, PhD, Elizabeth Flow-Delwiche, PhD, Kirsten Becker, MS

Keywords: nonresponse, web survey, email protocol, survey, sequential mixed mode, patient experience, emergency department
https://doi.org/10.29115/SP-2019-0007
Mathews, Megan, Layla Parast, Anagha Tolpadi, Marc Elliott, Elizabeth Flow-Delwiche, and Kirsten Becker. 2019. “Methods for Improving Response Rates in an Emergency Department Setting – A Randomized Feasibility Study.” Survey Practice 12 (1). https://doi.org/10.29115/SP-2019-0007.

Abstract

Standard single survey mode approaches, such as mail-only or telephone-only, generally produce lower response rates than sequential mixed mode (SMM) approaches. SMM approaches may be particularly important when lower response propensities and/or poor contact information exist, as with emergency department (ED) patients. Results were examined from a randomized feasibility study testing five survey administration modes among ED patients who were discharged to the community. A total of 4,017 discharged patients from eight EDs during February 2016 were randomized in equal numbers to five modes: on-site distribution of a paper survey; mail notification of a web survey; email notification of a web survey; two-stage SMM (mailed survey with telephone follow-up); and three-stage SMM (email notification of a web survey with mail follow-up and then telephone follow-up). Sampled patients received a 43-item instrument focusing on ED patient experience. The two-stage SMM, commonly used for other patient experience surveys, was the reference mode. Three-stage SMM had a higher response rate (30.7%) than two-stage SMM (25.3%); the other three experimental modes had significantly lower response rates (0.8–9.6%; p<0.001 for all comparisons). On-site distribution was difficult to operationalize, possibly promoting selection bias. Valid email capture rates ranged from 4.0% to 48.3% by hospital and totaled 30.1% overall, even though participating hospitals reported capture rates of at least 20%; the odds of having a valid email address were significantly lower for older and male patients. Findings highlight the limitations of web-only and on-site approaches and identify SMM with a web-based component as a promising approach for survey administration in the ED setting.

Introduction

Survey response rates are generally declining, particularly for surveys of young adults (Sheldon et al. 2007) and in settings with poor contact information (Brick and Williams 2012). The emergency department (ED) setting has both features, making this patient population particularly challenging to survey.

One recent survey of ED patients in 50 hospitals obtained low response rates (14%–22%) for single-mode mail-only and telephone-only administration but a higher response rate (29%) for a two-stage sequential mixed mode (SMM) of a mailed survey with telephone follow-up (Parast et al. 2019). Other work has tested nontraditional modes in the ED setting, including on-site electronic-tablet-based survey administration to assess patient experience (The Beryl Institute 2012). While promising, this type of real-time administration might only be feasible for short surveys due to time constraints, including the desire of patients to quickly leave the ED (Tuality Healthcare 2013).

Another alternative method for surveying ED patients is an online survey. Online surveys have been shown to reach younger populations (a low-response-rate group) better than mail-only administration (Kaplowitz, Hadlock, and Levine 2004). Beyond the ED setting, efforts to increase survey response rates and decrease costs have focused largely on SMM approaches that incorporate an online survey, such as an online survey with follow-up by telephone (de Leeuw and de Heer 2002; Groves 2004) or by mail (McMaster et al. 2017). SMM research in a variety of settings has found good response rates and representativeness (de Leeuw 2005; Dillman and Edwards 2016), suggesting promising potential for the ED setting.

We conducted a randomized study investigating five survey administration modes for a patient experience survey measuring the experiences of patients who visited the ED and who were discharged-to-community (DTC), rather than admitted to the hospital. This study evaluated whether any of four alternative survey modes were feasible in the ED setting and obtained higher response rates than a reference two-stage SMM (mailed survey with telephone follow-up), a mode that generally produces the highest response rate for patient experience surveys (Beebe et al. 2005; Elliott et al. 2009). Based on evidence from the ED and other settings, the following four alternative modes were tested: on-site distribution of a paper survey (for patients to complete at home and return by mail); mail notification of a web survey; email notification of a web survey; and three-stage SMM (email notification of a web survey, with mail then telephone follow-up). Real-time electronic-tablet-based survey administration methods were excluded because the instrument length required for a comprehensive evaluation of patient experience was deemed incompatible with administration at discharge. In addition, interviews with ED leaders revealed security, cost, and data safeguarding and cleaning concerns for electronic-tablet-based administration in the ED. In this article, we describe the design and results of this feasibility study in terms of response rates by mode, patient email capture rates (for identifying a representative population), and level of required hospital staff involvement.

The findings here may have implications for other settings that share these features of younger populations or poor contact information, such as surveys of foster youth and college students.

Methods

Survey Instrument

The survey instrument contained 43 questions focused on patient perspectives on care in eight domains specific to ED DTC patients: getting timely care, communication with doctors and nurses, communication about medicines, communication about new medicines prescribed before discharge, communication about test results, communication about follow-up care, overall ED rating, and willingness to recommend the ED. The survey instrument (Version 3.0) was developed by the Centers for Medicare & Medicaid Services and is publicly available online (CMS 2016).

Mode of Administration

This feasibility study employed five survey administration modes (see Table 1). The initial contact attempt was made within 42 days of discharge, and the fielding period closed 42 days after initial contact. For the two-stage and three-stage SMMs, mailed surveys included a cover letter, and five telephone attempts were made across multiple times and days of the week. For on-site distribution, a survey packet containing a cover letter,[1] hard copy survey, and postage-paid business reply envelope was distributed by hospital staff upon discharge. The cover letter for the mail notification of the web survey contained the URL of the online survey and a personal PIN. The email notification of the web survey contained a clickable, personalized link to the online survey. Mailed and emailed links were designed to minimize length, and the online survey was mobile-optimized.

Table 1. Planned survey mode protocol timeline

| Day from first contact attempt | Two-stage SMM | Three-stage SMM | On-site distribution | Mail notification of web survey | Email notification of web survey |
|---|---|---|---|---|---|
| 0 | Mailed survey | Emailed link to survey | Distributed survey | Mailed URL and PIN | Emailed link |
| 5 | -- | Mailed survey | -- | -- | Second emailed link |
| 10 | -- | -- | -- | -- | Third emailed link |
| 21 | Phone outreach started+ | -- | -- | Second mailed URL and PIN | -- |
| 26 | -- | Phone outreach started+ | -- | -- | -- |

+The start of phone outreach was approximately 21 days after mailed invitations were sent out.
Note: The total fielding period lasted 42 days from initial contact.
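For concreteness, the contact schedule in Table 1 can be encoded as a small lookup structure that a fielding system might consult each day. The sketch below is illustrative only; the mode keys and action labels are our own and not the study's software:

```python
# Days from first contact attempt -> contact action, per Table 1.
# Mode keys and action strings are hypothetical labels for illustration.
CONTACT_SCHEDULE = {
    "two_stage_smm":          {0: "mail survey", 21: "begin phone outreach"},
    "three_stage_smm":        {0: "email link", 5: "mail survey",
                               26: "begin phone outreach"},
    "on_site":                {0: "distribute packet at discharge"},
    "mail_notification_web":  {0: "mail URL and PIN",
                               21: "mail second URL and PIN"},
    "email_notification_web": {0: "email link", 5: "email second link",
                               10: "email third link"},
}
FIELDING_PERIOD_DAYS = 42  # each case closes 42 days after first contact

def actions_due(mode: str, day: int) -> list[str]:
    """Return any contact actions scheduled for this mode on this day."""
    action = CONTACT_SCHEDULE[mode].get(day)
    return [action] if action is not None and day < FIELDING_PERIOD_DAYS else []
```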

Study Design

Eight hospital-based EDs were recruited from a random sample of U.S. hospitals listed in the 2013 American Hospital Association database with at least 14,000 annual ED visits, stratified by annual ED visit volume[2] and census region[3]. To be eligible to participate, hospitals had to (1) collect patient email addresses, (2) report at least a 20% email capture rate among their ED patients, and (3) be willing to participate in all modes. Participating hospitals volunteered to be in the study and did not receive monetary compensation.

A total of 4,017 eligible discharged patients were sampled from the eight hospital-based EDs, approximately 500 per ED. Patient exclusion criteria paralleled those used for the Hospital Consumer Assessment of Healthcare Providers and Systems Survey (CMS 2017). For on-site distribution, hospital staff were asked to exclude in real time patients who were under 18 years old, discharged anywhere besides home, or visiting the ED for primary alcohol/drug intoxication; it was infeasible to expect staff to apply other exclusion criteria in real time.

Patients were randomized to the five modes using a block randomization design across four weeks in February 2016, with two hospitals assigned to each week. On-site distribution was administered during a hospital's assigned one-week block, during which each ED was asked to distribute 100 survey packets to a census of patients until the packets ran out. For each hospital, a random sample of 400 patients was drawn across the other three weeks and randomized to the remaining four modes, as sketched below. This design ensured that the same individual would not be sampled twice for the same visit.
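A minimal sketch of this within-hospital assignment logic (not the study's actual code) follows; the patient identifiers and fixed seed are hypothetical:

```python
import random

MODES = ["two_stage_smm", "three_stage_smm",
         "mail_notification_web", "email_notification_web"]

def assign_modes(patient_ids, n_sample=400, seed=2016):
    """Draw one hospital's random sample from the three non-on-site weeks
    and split it evenly across the four remaining modes (100 each)."""
    rng = random.Random(seed)          # hypothetical fixed seed
    sampled = rng.sample(list(patient_ids), n_sample)
    per_mode = n_sample // len(MODES)  # 100 patients per mode
    return {mode: sampled[i * per_mode:(i + 1) * per_mode]
            for i, mode in enumerate(MODES)}
```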

Hospitals provided updated administrative data approximately three months after sampling to identify additional post-sampling ineligibles.[4] Our design provided 80% power to detect differences in response rates of 1.4% to 3.0% between modes.
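A back-of-the-envelope version of such a power calculation can be run with statsmodels' two-proportion machinery. The baseline response rate below is an assumption for illustration, and the computed power depends heavily on it:

```python
# Power to detect a difference between two modes of ~800 patients each,
# using Cohen's h for two proportions. The 25% baseline is assumed.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

n_per_mode = 800   # ~4,017 patients split across five modes
baseline = 0.25    # assumed reference response rate

for diff in (0.014, 0.03):
    h = proportion_effectsize(baseline + diff, baseline)
    power = NormalIndPower().solve_power(
        effect_size=h, nobs1=n_per_mode, alpha=0.05, ratio=1.0)
    print(f"difference of {diff:.1%}: power = {power:.2f}")
```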

On-site Distribution Debriefing

Debriefing calls were conducted with each participating hospital to gather qualitative feedback from hospital staff about their experiences with on-site distribution. Hospital staff are uniquely positioned to report on hospitals’ willingness and ability to enact different survey distribution approaches and on the process of getting surveys to the intended patients within the physical ED setting. The debriefing protocol included questions about planned versus actual implementation strategies and barriers encountered.

Statistical Analysis

Response rates were calculated using the American Association for Public Opinion Research Response Rate #3 (AAPOR RR3), whereby the numerator is the number of completes and the denominator is the sum of the number of completes, partials, refusals, breakoffs, and estimated unknown eligibles. Estimated unknown eligibility was calculated as the total number of unknown eligibles within each mode multiplied by an assumed eligibility rate, taken to be the same across all modes and set to the highest observed value (94.4%, in the two-stage SMM). Logistic regression was used on our observed data[5] to compare response rates for each experimental mode to the two-stage SMM, controlling for hospital fixed effects.
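The RR3 computation itself is a one-liner; the sketch below uses hypothetical disposition counts, with e fixed at the 94.4% eligibility rate described above:

```python
def aapor_rr3(completes, partials, refusals, breakoffs,
              unknown_eligibility, e=0.944):
    """AAPOR Response Rate 3: completes over known-disposition cases
    plus an estimated eligible share of unknown-eligibility cases."""
    known = completes + partials + refusals + breakoffs
    return completes / (known + e * unknown_eligibility)

# Hypothetical disposition counts, for illustration only.
print(f"RR3 = {aapor_rr3(200, 10, 40, 5, unknown_eligibility=300):.1%}")
```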

Email capture rates were of interest because two of the examined modes used patients’ email addresses for survey administration, one exclusively. An email address was considered valid if no undeliverable notification was received in response to our email. Given evidence that email-only modes may be unrepresentative (Elliott et al. 2013), we used multivariate logistic regression to examine patient characteristics associated with having a valid email address (age, gender, discharge status[6], and sampling batch[7]), with hospital fixed effects, among sampled patients from the two modes that included email.
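A minimal sketch of this email-validity model using the statsmodels formula API is shown below; the file and column names are hypothetical, and C(hospital) supplies the hospital fixed effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per sampled patient in the two
# email-based modes, with a 0/1 valid_email indicator.
df = pd.read_csv("sampled_patients.csv")

fit = smf.logit(
    "valid_email ~ C(age_group) + C(gender) + C(discharge_status)"
    " + C(sampling_batch) + C(hospital)",
    data=df,
).fit()

# Exponentiated coefficients are the adjusted odds ratios (cf. Table 3).
print(np.exp(fit.params).round(2))
```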

Results

Analytic Results

Table 2 presents response rates overall and by survey administration mode. The overall response rate among eligible cases was 14.2%. The response rate using three-stage SMM (30.7%) was significantly higher than two-stage SMM (25.3%), whereas all other modes were significantly lower than two-stage SMM (p<0.001 for all comparisons). The response rate using mail notification of web mode was lowest (0.8%), followed by email notification of web mode (4.3%) and on-site distribution (9.6%). Overall, 55 patients completed the survey by web, including all respondents in the email and mail notification of web modes and 16 (6.8%) in the three-stage SMM mode.

Table 2. Patient response rates, overall and by survey mode

| | Two-stage SMM | Three-stage SMM | On-site distribution | Mail notification of web mode | Email notification of web mode | Overall |
|---|---|---|---|---|---|---|
| N sampled | 805 | 804 | 800 | 804 | 804 | 4,017 |
| N post-sampling ineligible (% of sampled by mode) | 34 (4.2%) | 25 (3.1%) | 17 (2.1%)+ | 11 (1.4%) | 3 (0.4%) | 90 (2.2%) |
| N respondents | 195 | 237 | 73 | 6 | 33 | 544 |
| Response rate among eligible++ | 25.3% [ref] | 30.7%*** | 9.6%*** | 0.8%*** | 4.3%*** | 14.2% |

Significance levels: * 0.05 > p ≥ 0.01; ** 0.01 > p ≥ 0.001; *** p < 0.001.
+ One hospital passed out only 83 of its 100 survey packets during its designated sampling week.
++ AAPOR RR3 = completes / [completes + partials + refusals + breakoffs + e × (other unknowns)]
[ref] = reference/comparison mode

For the two modes that used email survey administration (three-stage SMM and email notification of web), Table 3 shows the proportion of sampled patients, valid email capture rates, and the odds of having a valid email address by patient characteristic and hospital. Overall, 30.1% of patients received an email, 59.1% did not have an email address in the hospital’s administrative record, and 10.8% had an address that proved undeliverable. Valid email capture rates ranged from 4.0% to 48.3% across hospitals. The strongest predictor of having a valid email address was the patient’s hospital. Capture rates were also higher for younger patients (30.9% for 18–24 year olds vs. 17.8% for age 85 and older) and for women (33.7% vs. 22.5% for men). Many of the hospital, gender, and age differences in email capture rates persisted in the multivariate model: the adjusted odds of a valid email address were significantly lower for male patients (odds ratio [OR] = 0.65 vs. female patients) and for patients 85 years of age or older (OR = 0.37 vs. patients 18–24 years of age).

Table 3. Characteristics associated with having a valid email address

| Characteristic^a | % of eligible sample (n=1,580) | % with valid email address (n) | Adjusted odds ratio (95% confidence interval)^b |
|---|---|---|---|
| Intercept | -- | -- | 0.42 (0.12, 1.49) |
| Age | | | |
| 18–24 [ref] | 14.1% | 30.9% (69) | 1.00 |
| 25–34 | 20.3% | 34.6% (111) | 1.08 (0.73, 1.58) |
| 35–44 | 16.1% | 31.5% (80) | 1.02 (0.68, 1.54) |
| 45–54 | 13.7% | 29.2% (63) | 0.92 (0.60, 1.41) |
| 55–64 | 13.3% | 30.5% (64) | 0.92 (0.60, 1.41) |
| 65–74 | 10.8% | 28.8% (49) | 0.92 (0.57, 1.48) |
| 75–84 | 7.2% | 23.9% (27) | 0.64 (0.37, 1.10) |
| 85 or older | 4.6% | 17.8% (13) | 0.37 (0.18, 0.76) ** |
| Gender | | | |
| Female [ref] | 61.4% | 33.7% (327) | 1.00 |
| Male | 38.5% | 24.5% (149) | 0.65 (0.51, 0.83) *** |
| Discharge status | | | |
| Discharged not to home [ref] | 13.9% | 22.4% (49) | 1.00 |
| Discharged to home^c | 86.1% | 31.4% (427) | 1.47 (0.43, 5.01) |
| Sampling batch^d | | | |
| 1 | 42.2% | 35.1% (234) | 1.26 (1.00, 1.60) |
| 2 [ref] | 57.8% | 26.5% (242) | 1.00 |
| Hospital | | | |
| A | 12.5% | 29.3% (58) | 0.85 (0.55, 1.31) |
| B | 12.7% | 4.0% (8) | 0.08 (0.04, 0.17) *** |
| C | 12.5% | 20.2% (40) | 0.50 (0.31, 0.78) ** |
| D | 12.3% | 23.2% (45) | 0.81 (0.22, 2.95) |
| E | 12.7% | 48.3% (97) | 1.78 (1.19, 2.68) ** |
| F | 12.5% | 41.9% (83) | 1.32 (0.88, 1.98) |
| G | 12.1% | 37.7% (72) | 1.09 (0.72, 1.65) |
| H [ref] | 12.7% | 36.5% (73) | 1.00 |

Note: Only data from the three-stage SMM and email notification of web modes were used in this analysis. n is the denominator of the column population. Significance levels: * 0.05 > p ≥ 0.01; ** 0.01 > p ≥ 0.001; *** p < 0.001.
^a Missing values were imputed for discharge status (12.44%) and gender (0.06%) as the mean value within each hospital. No other predictors had missing values.
^b Max-rescaled R-square = 0.157
^c Discharged not to home can include patients transferred to a short-term facility/institute or utilizing home health services.
^d The sample was administered in two two-week batches during February.

The low valid email capture rate meant that the 70.9% of patients in the email notification of web mode who lacked a valid email address never had an opportunity to complete the survey in that mode. In the three-stage SMM, 31.2% of patients received an email; the remaining 68.8% still had an opportunity to complete the survey because of the mail and telephone follow-up. Among the three-stage SMM patients who received an email, 16 (6.6%) completed the survey by web.

Given that the two-stage and three-stage SMM modes had the highest response rates and that patients with a valid email address differ from those without, we examined the representativeness of respondents across these two modes by running a logistic regression predicting response (vs. nonresponse), controlling for patient age, gender, discharge status, sampling batch, and hospital fixed effects, with mode interactions with age and gender (results not shown). No interaction effects were significant at the 0.05 level, suggesting that respondent pools across the SMM modes with and without an email protocol are similar with respect to age and gender.
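A sketch of this representativeness check, under the same hypothetical column names as the earlier model, interacts mode with age and gender and inspects the interaction terms:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file restricted to the two- and three-stage SMM arms.
df_smm = pd.read_csv("smm_patients.csv")

fit = smf.logit(
    "responded ~ C(mode) * (C(age_group) + C(gender))"
    " + C(discharge_status) + C(sampling_batch) + C(hospital)",
    data=df_smm,
).fit()

# Non-significant mode-by-age and mode-by-gender terms would suggest
# the two modes recruit similar respondent pools.
print(fit.summary())
```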

Staff Debriefing Results

For the on-site distribution mode, distributing survey packets proved challenging because of the multiple paths of patient exit/discharge and the sheer number of staff who discharged patients across all shifts. Most hospitals stored the surveys in a central location, such as the main nurses’ station. Some hospitals divided the packets among individual “mobile stations,” allowing nursing staff to choose where they stored packets. Hospitals that placed the packets in a central location admitted that nurses were more likely to forget to distribute them when the packets were out of sight or reach during discharge. One hospital realized after a full day of survey distribution that it was not reaching “fast-track” patients discharged from the lobby. Several hospitals adjusted their initially ineffective plans after distribution began, for example by expanding distribution from a single charge nurse to multiple nurses or by adding physicians. Hospital staff noted that survey distribution added to the burden on already overextended staff. They also acknowledged some selection bias in distributing surveys; some staff admitted to withholding packets from patients with distant or negative demeanors.

Discussion

The three-stage SMM outperformed all other study modes, suggesting that mail/telephone SMM protocols (with or without email components) may obtain the highest response rates in the ED setting. Adding an email component significantly increased the response rate beyond the two-stage approach. An additional benefit, demonstrated in a previous study, is potential cost savings from adding an email component (Patrick et al. 2018): once an online survey has been developed and more patients complete the survey by web, mail and telephone follow-up costs may decrease as fewer patients use these more expensive modes. Future work is needed to identify and test alternative strategies for inviting patients to an online survey beyond emailing a link (e.g., texting a link), to improve the availability of email address information, and to identify the optimal sequencing and timing of mixed mode protocols and the frequency of reminders.

Extending previous findings from other settings to the ED setting, the two survey modes that used web alone obtained the lowest response rates in this study (<5%), indicating that web-only administration, particularly administration relying solely on email availability, is not promising for surveying ED populations. The protocol for the email notification of web mode used only two reminders, which may have been insufficient to bolster response rates; however, caution is needed, as previous work suggests diminishing returns for protocols that use an excessive number of reminders (Cook, Heath, and Thompson 2000; Klofstad, Boulianne, and Basson 2008). Low email address capture rates in hospital administrative data are not surprising, but they limited the contribution of the email modes. Capture rates varied greatly between hospitals, suggesting possible bias in the types of patients with a valid email address available in administrative data. In addition, not all hospitals met the >20% capture rate participation requirement, suggesting that hospitals may not accurately assess their own rates. Informal debriefings with hospitals revealed that collecting email addresses is generally not required, standardized, or emphasized during staff training, and the field may be skipped during registration, especially for critical patients who need immediate care. Our findings suggest that ED patients with a valid email address may not be representative of eligible ED patients. These limitations of email-based, web-only administration further highlight the benefit of a protocol that employs multiple modes of contact to ensure that all patients have an opportunity to complete the survey.

Finally, on-site distribution obtained a low response rate (<10%) and presented implementation difficulties that could render it infeasible in the ED setting.

While our study was limited to eight hospitals that met specific criteria, including voluntary participation, the hospitals were geographically diverse, and the randomization of modes within hospitals ensures the internal validity of comparisons between modes (Elliott et al. 2009).

Given the overall low response rates in this population, nonresponse bias is important to consider. Previous work in the ED setting has shown that older patients and female patients are generally more likely to respond (Parast et al. 2019). In our study, when comparing the two best-performing modes (two-stage and three-stage SMM), the likelihood of response was similar between the two modes regardless of age or gender. Differential survey nonresponse can mean that respondents are not representative of the full eligible sample, and nonresponse bias can result if these differences are also related to the outcomes being measured. The potential for nonresponse bias should be taken into account when selecting a protocol.

Acknowledgments

The authors thank Laura Giordano and the Health Services Advisory Group team, and Rosa-Elena Garcia and other RAND Survey Research Group staff for their contributions to hospital recruitment and data collection.

Funding

This work was supported by the Centers for Medicare & Medicaid Services, Department of Health and Human Services [HHSM-500-2016-00093G]. The contents of this publication do not necessarily reflect the views or policies of the Department of Health and Human Services, nor does the mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

Correspondence may be sent to Megan Mathews at the RAND Corporation, 1776 Main Street, Mailstop: M5N, Santa Monica, CA 90401, via email at: Megan_Mathews@rand.org, or via telephone at: 310-393-0411.


  1. The cover letter also provided a toll-free number that patients could use to call in to ask questions or to complete a survey by telephone with a live interviewer.

  2. Annual ED visits were categorized into three groups: medium: 14,000–24,999; large: 25,000–49,999; extra large: 50,000+.

  3. The four census regions were used: Northeast, South, Midwest, and West.

  4. Many administrative variables used to determine patient eligibility are not available in hospitals’ electronic systems at time of discharge and are later extracted from charts for billing purposes. Linkage was not possible for the on-site distribution mode where identifiable administrative data did not exist.

  5. Estimated unknown eligibility rates were not accounted for in these models.

  6. Most DTC patients are discharged home for self-care. However, a small proportion (14%) were discharged elsewhere in the community. Those discharged not to home can include patients transferred to a short-term facility/institute or utilizing home health services.

  7. The sample was administered in two two-week batches during February.

References

Beebe, Timothy, Michael Davern, Donna McAlpine, Kathleen Thiede Call, and Todd Rockwood. 2005. “Increasing Response Rates in a Survey of Medicaid Enrollees: The Effect of a Prepaid Monetary Incentive and Mixed Modes (Mail and Telephone).” Medical Care 43 (4): 411–15.
Brick, J. Michael, and Douglas Williams. 2012. “Explaining Rising Nonresponse Rates in Cross-Sectional Surveys.” Edited by Douglas S. Massey and Roger Tourangeau. The ANNALS of the American Academy of Political and Social Science 645 (1): 36–59. https://doi.org/10.1177/0002716212456834.
Centers for Medicare & Medicaid Services (CMS). 2016. “Emergency Department Patient Experiences with Care (EDPEC) Survey.” https://www.cms.gov/Research-Statistics-Data-and-Systems/Research/CAHPS/ed.html.
———. 2017. “CAHPS Hospital Survey (HCAHPS) Quality Assurance Guidelines.” http://hcahpsonline.org.
Cook, Colleen, Fred Heath, and Russel L. Thompson. 2000. “A Meta-Analysis of Response Rates in Web- or Internet-Based Surveys.” Educational and Psychological Measurement 60 (6): 821–36.
de Leeuw, Edith Desiree. 2005. “To Mix or Not to Mix Data Collection Modes in Surveys.” Journal of Official Statistics 21 (2): 233–55.
de Leeuw, Edith Desiree, and Wim de Heer. 2002. “Trends in Household Survey Nonresponse: A Longitudinal and International Comparison.” In Survey Nonresponse, edited by Robert Groves, Don A. Dillman, J.L. Eltinge, and R.J. Little, 41–54. New York: John Wiley & Sons.
Dillman, D., and M. Edwards. 2016. “Designing a Mixed Mode Survey.” In The SAGE Handbook of Survey Methodology, edited by C. Wolf, D. Joye, T. Smith, and Y.C. Fu. London: Sage.
Elliott, Marc N., Julie A. Brown, William G. Lehrman, Megan K. Beckett, Katrin Hambarsoomian, Laura A. Giordano, and Elizabeth H. Goldstein. 2013. “A Randomized Experiment Investigating the Suitability of Speech-Enabled IVR and Web Modes for Publicly Reported Surveys of Patients’ Experience of Hospital Care.” Medical Care Research and Review 70 (2): 165–84.
Elliott, Marc N., Alan M. Zaslavsky, Elizabeth Goldstein, William Lehrman, Katrin Hambarsoomians, Megan K. Beckett, and Laura Giordano. 2009. “Effects of Survey Mode, Patient Mix, and Nonresponse on CAHPS® Hospital Survey Scores.” Health Services Research 44 (2p1): 501–18. https://doi.org/10.1111/j.1475-6773.2008.00914.x.
Groves, Robert M. 2004. Survey Errors and Survey Costs. Vol. 536. Hoboken, NJ: John Wiley & Sons.
Kaplowitz, M. D., T. D. Hadlock, and R. Levine. 2004. “A Comparison of Web and Mail Survey Response Rates.” Public Opinion Quarterly 68 (1): 94–101. https://doi.org/10.1093/poq/nfh006.
Klofstad, Casey A., Shelley Boulianne, and Danna Basson. 2008. “Matching the Message to the Medium: Results from an Experiment on Internet Survey Email Contacts.” Social Science Computer Review 26 (4): 498–509.
McMaster, Hope Seib, Cynthia A. LeardMann, Steven Speigle, and Don A. Dillman. 2017. “An Experimental Comparison of Web-Push vs. Paper-Only Survey Procedures for Conducting an In-Depth Health Survey of Military Spouses.” BMC Medical Research Methodology 17 (1): 73.
Parast, Layla, Megan Mathews, Anagha Tolpadi, Marc Elliott, Elizabeth Flow-Delwiche, and Kirsten Becker. 2019. “National Testing of the Emergency Department Patient Experience of Care Discharged to Community Survey and Implications for Adjustment in Scoring.” Medical Care 57 (1): 42–48. https://doi.org/10.1097/mlr.0000000000001005.
Patrick, Megan E., Mick P. Couper, Virginia B. Laetz, John E. Schulenberg, Patrick M. O’Malley, Lloyd D. Johnston, and Richard A. Miech. 2018. “A Sequential Mixed-Mode Experiment in the US National Monitoring the Future Study.” Journal of Survey Statistics and Methodology 6 (1): 72–97.
Sheldon, Helen, Chris Graham, Nick Pothecary, and Farhat Rasul. 2007. Increasing Response Rates Amongst Black and Minority Ethnic and Seldom Heard Groups. Oxford, UK: Picker Institute Europe.
The Beryl Institute. 2012. “Patient Experience Case Study – Buffalo Hospital.” http://www.theberylinstitute.org/?CASE012012.
Tuality Healthcare. 2013. “Healthcare Gets Real-Time Feedback in Emergency Department.” https://www.tuality.org/2013/06/tuality_healthcare_gets_real_time_feedback_in_emergency_department/.
