Survey Practice
Vol. 16, Issue 1, 2023
May 18, 2023 EDT

Mail to One or Mail to All? An Experiment (Sub)Sampling Drop Point Units in a Self-Administered Address-Based Sampling Frame Survey

Taylor Lewis, Joseph McMichael, and Charlotte Looby

Keywords: drop points, drop point addresses, ABS survey, health survey
https://doi.org/10.29115/SP-2023-0004
Lewis, Taylor, Joseph McMichael, and Charlotte Looby. 2023. “Mail to One or Mail to All? An Experiment (Sub)Sampling Drop Point Units in a Self-Administered Address-Based Sampling Frame Survey.” Survey Practice 16 (1). https://doi.org/10.29115/SP-2023-0004.

Abstract

Practitioners utilizing an address-based sampling frame for a self-administered, mail contact survey must decide how to handle drop points, which are single delivery points or receptacles that service multiple households. A variety of strategies have been adopted, including sampling all units at the drop point or subsampling just one (or a portion) of them. This paper reports results from an experiment fielded during the 2021 Healthy Chicago Survey aimed at providing insight into whether there are any substantive differences between these approaches. We find that a subsampling strategy in which a single mailing is sent produces a roughly 3 percentage point higher response rate relative to a strategy sending multiple mailings concurrently to the drop point. While base-weighted distributions of gender and age differed enough to be statistically significant, there were no noteworthy differences across other demographics or across the base-weighted distributions of select key health outcomes measured by the survey. Taken together, these results provide some evidence that a “mail to one” drop point strategy is more efficient than a “mail to all” drop point strategy.

1. Background

Self-administered mail contact surveys are becoming increasingly popular given persistently decreasing response rates to telephone surveys. Modern mail contact surveys often utilize an address-based sampling (ABS) frame (American Association for Public Opinion Research 2016; Iannacchione, Staab, and Redden 2003) to cover the study area, which is derived from the United States Postal Service’s (USPS) Computerized Delivery Sequence (CDS) file. With each address on the ABS frame serving as a proxy for a household, mail correspondence can be sent to a random selection of addresses with a paper copy of the questionnaire and/or instructions for how to access the survey instrument via the web, perhaps with additional instructions on who within the household should complete it (Olson, Stange, and Smyth 2014). This sampling and data collection protocol can be adopted for the vast majority of addresses on the CDS that maintain a one-to-one relationship with a household. An address type on the CDS that presents challenges is a drop point (USPS 2017), defined as a single delivery point or receptacle that services multiple households. Drop point addresses have no unique apartment or unit designation within the CDS. All that is known is how many units the drop point includes. Nationwide, roughly 1.5% of addresses are drop points, yet rates can breach the double digits in areas where they are highly concentrated, such as Boston, New York City, and Chicago. An interactive tool for visualizing county-level concentrations of drop points can be found at https://abs.rti.org/atlas/drops/viz.
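To make the frame bookkeeping concrete, the following is a minimal sketch, in Python, of tallying drop point concentration on an ABS frame extract. The field names (`drop_indicator`, `drop_count`) and the toy data are hypothetical stand-ins for whatever a frame vendor actually supplies; this is not HCS code or data.

```python
# Minimal sketch (hypothetical field names, toy data) of measuring drop point
# concentration on an ABS frame extract derived from the USPS CDS file.
import pandas as pd

frame = pd.DataFrame({
    "address_id":     [1, 2, 3, 4, 5, 6],
    "drop_indicator": ["N", "N", "Y", "N", "Y", "N"],  # "Y" = drop point
    "drop_count":     [1, 1, 3, 1, 2, 1],              # units behind the receptacle
})

is_drop = frame["drop_indicator"].eq("Y")

# Share of frame addresses that are drop points (roughly 1.5% nationwide).
print(f"Drop point share of addresses: {is_drop.mean():.1%}")

# Households represented: one per regular address, drop_count per drop point.
households = (~is_drop).sum() + frame.loc[is_drop, "drop_count"].sum()
print(f"Households represented: {households}")
```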

Numerous strategies have been proposed to handle drop points in self-administered mail contact surveys. These include eliminating them from the ABS frame altogether and accepting the risk of coverage bias, or at least eliminating those larger than some prespecified threshold (RTI International 2021); merging on partially complete unit information (Kalton, Kali, and Sigman 2014) from supplemental data sources such as the No-Stat file (Shook-Sa et al. 2013); and substituting the sampled drop point with the nearest non-drop point address (Harter, McMichael, and Deng 2022; Lewis, McMichael, and Looby 2023). Amaya (2017) identifies two other options: sampling all units within the selected drop point or selecting a subsample of units within the selected drop point. Amaya goes on to speculate that a potential downside of the first strategy is that occupants seeing more than one copy of the same correspondence may be more prone to deem it a mass mailing and ignore it without opening it, whereas a risk associated with the second strategy is that occupants may “pass the buck” to another resident, in essence exhibiting diffusion of responsibility behaviors (Barron and Yechiam 2002), since the mailing was not explicitly addressed to them. To the best of our knowledge, these hypotheses have never been tested. In an effort to help fill this research gap, an experiment was conducted during the 2021 Healthy Chicago Survey (HCS) whereby a portion of drop points was sent a single survey invitation while the complementary portion was sent 2, 3, or 4 survey invitations, depending on the number of units existing at the drop point. The former we refer to as the “mail to one” strategy, whereas the latter we refer to as the “mail to all” strategy. This paper reports on the results from that experiment.
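As a concrete illustration of the two experimental strategies, here is a minimal sketch on a toy set of sampled drop points with known unit counts. It is not the authors’ production code, and the weighting note in the comment reflects standard design-based logic rather than a detail stated in this paper.

```python
# Minimal sketch contrasting the two drop point strategies on toy data.
# Each record is a sampled drop point and its known unit count.
sampled_drop_points = [
    {"address_id": 101, "units": 2},
    {"address_id": 102, "units": 4},
    {"address_id": 103, "units": 3},
]

def mail_to_all(drop_points):
    """Send one survey packet per unit at each drop point, concurrently."""
    return [(dp["address_id"], unit)
            for dp in drop_points
            for unit in range(1, dp["units"] + 1)]

def mail_to_one(drop_points):
    """Send a single packet to the drop point address. Because units carry no
    unique designation, whichever household opens the packet is in effect a
    one-in-`units` subsample, a rate the base weight must later reflect."""
    return [(dp["address_id"], None) for dp in drop_points]

print(len(mail_to_all(sampled_drop_points)), "packets under mail to all")  # 9
print(len(mail_to_one(sampled_drop_points)), "packets under mail to one")  # 3
```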

2. Data and Methods

Data analyzed in this paper were collected during the 2021 administration of the HCS, a survey launched by the Chicago Department of Public Health in 2014 as an annual, dual-frame, random-digit dial (DFRDD) telephone survey of adults in Chicago. Beginning with the 2020 administration, the HCS transitioned to a self-administered, mail contact survey offering both web and paper response modes, using the “next birthday” method for within-household selection (Olson, Stange, and Smyth 2014) and an ABS frame (Unangst et al. 2022). Data from the survey have been used to support the implementation of Healthy Chicago 2.0 (https://www.chicago.gov/city/en/depts/cdph/provdrs/healthychicago.html) and to shape a range of public health interventions and policies to mitigate health inequities.

The 2021 HCS was administered between June 14 and November 30, 2021. The ABS frame developed for the survey consisted of 1,207,642 addresses. Of these, 146,711 (12.1%) were addresses in drop points containing between 2 and 4 units, while the remaining 1,060,931 addresses were not associated with a drop point. To simplify data collection logistics, we excluded 10,871 addresses from drop points containing 5 units or more, which are relatively rare and, as Amaya et al. (2014) point out, are often gated communities, high-rises, trailer parks, or alternative housing arrangements that present additional data collection challenges.
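A minimal sketch of that frame restriction follows, again with hypothetical field names and toy data rather than the actual HCS frame.

```python
# Minimal sketch of restricting the frame to non-drop point addresses and
# drop points of 2-4 units, mirroring the exclusion described above.
import pandas as pd

frame = pd.DataFrame({
    "address_id":     range(1, 7),
    "drop_indicator": ["N", "Y", "Y", "Y", "N", "Y"],
    "drop_count":     [1, 2, 4, 6, 1, 3],
})

is_drop = frame["drop_indicator"].eq("Y")
keep = ~is_drop | frame["drop_count"].between(2, 4)
eligible_frame = frame[keep]
print(f"Excluded {(~keep).sum()} address(es) at drop points with 5+ units")
```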

Overall, a sample of 18,488 addresses was selected in the 2021 HCS with the goal of obtaining a minimum of 4,200 completes citywide and at least 35 completes within each of the 77 mutually exclusive and exhaustive community areas (i.e., sampling strata) that constitute the study area. Addresses were allocated to one of two sample releases fielded in succession. The first began on July 19, following a small-scale pilot sample release that launched on June 14, and the second began on September 15. Initially, a total of 2,196 addresses from drop points were selected. In the first release, we employed the “mail to all” strategy, sending 2, 3, or 4 survey invitation packets to the drop point, depending on its size. Each survey invitation packet contained a $2 pre-incentive, a paper copy of the questionnaire, and information regarding how the survey could be completed via the web. Following the Choice+ methodology discussed in Biemer et al. (2018), a $10 post-incentive was promised for completing the survey by paper, and a $20 post-incentive was promised for completing the survey by web. In the second release, the same survey packet materials and pre-/post-incentive amounts were used, but we instead employed a “mail to one” strategy in which a single survey packet was sent to the drop point. Note that non-drop point addresses received three additional reminder mailings; because targeted follow-up correspondence is impossible without unique apartment or unit numbers, however, drop point addresses received only the initial survey packet. In all, 1,787 survey packets were sent out as part of the mail to all strategy and 1,403 as part of the mail to one strategy. To account for the fact that the two strategies were applied to two different samples with differing sampling rates eight weeks apart, all percentages reported in this paper have been calculated using base weights.
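Because the two conditions came from separate releases with different sampling rates, comparisons rest on base weights. The sketch below shows the standard inverse-probability logic; the assumption that the mail to one condition carries an extra factor for the implicit one-in-`units` household subsample is ours, not a stated HCS weighting specification.

```python
# Minimal sketch of base weights under the two conditions (assumed logic,
# not the HCS weighting specification).

def base_weight(N_stratum, n_stratum, units=1, mail_to_one=False):
    """Inverse probability of selection for one responding address."""
    w = N_stratum / n_stratum          # address-level selection weight
    if mail_to_one:
        w *= units                     # inverse of the implicit 1-in-units subsample
    return w

def base_weighted_pct(records):
    """records: iterable of (weight, has_attribute) pairs."""
    total = sum(w for w, _ in records)
    return 100 * sum(w for w, flag in records if flag) / total

# Toy stratum: 5,000 frame addresses, 100 sampled.
sample = [
    (base_weight(5000, 100, units=3, mail_to_one=True), True),  # 3-unit drop point
    (base_weight(5000, 100), False),                            # regular address
    (base_weight(5000, 100), True),
]
print(f"Base-weighted percent with attribute: {base_weighted_pct(sample):.1f}%")
```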

3. Results

Table 1 presents the counts of disposition codes and corresponding base-weighted percentages for the two drop point experimental conditions. Interestingly, we find the mail to one strategy garners a higher yield rate than the mail to all strategy (14.5% vs. 11.4%). A comparable gap prevails with response rates. The AAPOR RR3 calculation (AAPOR 2023) for the mail to one strategy comes out to 16.6%, whereas that figure is 13.3% for the mail to all strategy (a computational sketch of the RR3 calculation appears after Table 1). Overall, this 3.3 percentage point difference is large enough to be statistically significant (t = 2.63; p < 0.01), although the effect is somewhat weaker in drop points with 2 units than in those with 3 or 4 (2.4 versus 4.2 percentage points, respectively). While counts of partial completes and undeliverables are not large enough to make meaningful comparisons, the table shows no discernible differences across the two conditions. Another noteworthy finding is that, under the mail to all condition, 134 of the 206 web and paper completes were the only complete from their drop point; multiple completes from the same drop point thus account for 35% of the total number of completes. Although sample sizes are relatively small, we have no evidence that this figure varies much by whether the drop point comprised 2, 3, or 4 units.

Table 1. Disposition Codes and Base-Weighted Percentages for the Mail to All vs. Mail to One Drop Point Experiment Conditions.

| Code | Meaning | Description | Mail to All: Count | Mail to All: Base-Weighted % | Mail to One: Count | Mail to One: Base-Weighted % |
|------|---------|-------------|-------------------:|-----------------------------:|-------------------:|-----------------------------:|
| CW | Web Complete | Answered by web with at least 4 weighting variables | 150 | 8.2 | 143 | 10.2 |
| CP | Paper Complete | Answered by paper with at least 4 weighting variables | 56 | 3.1 | 50 | 4.3 |
| PW | Web Partial Complete | Answered by web with at least 1, but fewer than 4, weighting variables | 8 | 0.5 | 5 | 0.3 |
| PP | Paper Partial Complete | Answered by paper with at least 1, but fewer than 4, weighting variables | 1 | 0.1 | 4 | 0.4 |
| UD | Undeliverable | Mail correspondence returned by U.S. Postal Service | 38 | 2.0 | 27 | 2.2 |
| RF | Known Eligibility Nonrespondent | Explicit refusal or blank questionnaire returned | 0 | 0.0 | 0 | 0.0 |
| NR | Unknown Eligibility Nonrespondent | All other cases not assigned one of the other codes | 1,534 | 86.0 | 1,174 | 82.7 |
| Totals | | | 1,787 | 100.0 | 1,403 | 100.0 |
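For readers who wish to reproduce the flavor of the response rate comparison, below is a minimal sketch of an AAPOR RR3-style calculation applied to Table 1’s unweighted counts. Two assumptions are ours alone: undeliverables are treated as known ineligible, and the eligibility rate e for unknown-eligibility cases is estimated from the cases whose status is known. Because the published 13.3% and 16.6% figures are base weighted, this sketch’s unweighted results differ slightly.

```python
# Minimal sketch of an AAPOR RR3-style response rate from disposition counts.
# Assumptions: undeliverables (ud) are known ineligible; the eligibility rate
# e of unknown-eligibility cases (nr) is estimated from known-status cases.

def rr3(cw, cp, pw, pp, rf, ud, nr):
    completes = cw + cp                      # I: completed questionnaires
    partials = pw + pp                       # P: partial completes
    known_eligible = completes + partials + rf
    e = known_eligible / (known_eligible + ud)
    return completes / (completes + partials + rf + e * nr)

print(f"Mail to all: {rr3(150, 56, 8, 1, 0, 38, 1534):.1%}")  # ~13.6% unweighted
print(f"Mail to one: {rr3(143, 50, 5, 4, 0, 27, 1174):.1%}")  # ~15.6% unweighted
```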

In addition to data collection performance statistics, we compared the base-weighted distributions of the same demographic and key health outcome variables analyzed in Unangst et al. (2022) across the two drop point experiment conditions. These are presented in Figures 1 and 2, respectively. Unangst et al. (2022) presented tabular comparisons of roughly one dozen demographics and key health outcomes; for brevity, we present six of each in this paper via grouped bar charts. The p-value from a Rao-Scott chi-square test of independence (Rao and Scott 1981) is provided in parentheses underneath each variable title (a computational sketch of this style of test appears after the figure captions).

Figure 1. Base-Weighted Distributions of Demographic Variables for the Mail to All vs. Mail to One Drop Point Experiment Conditions

Figure 2. Base-Weighted Distributions of Key Health Outcome Variables for the Mail to All vs. Mail to One Drop Point Experiment Conditions
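For orientation, the sketch below shows a first-order Rao-Scott-style adjustment: a Pearson chi-square is computed on the base-weighted contingency table rescaled to the respondent count, then deflated by an average design effect before the p-value is taken. The weighted counts and the design effect of 1.3 are hypothetical, and the authors’ exact implementation (e.g., survey software estimating the design-effect matrix) likely differs; treat this as an approximation of the technique, not their code.

```python
# Minimal sketch of a first-order Rao-Scott-style chi-square test
# (an approximation; hypothetical inputs, not the authors' implementation).
import numpy as np
from scipy.stats import chi2, chi2_contingency

def rao_scott_first_order(weighted_table, n_respondents, avg_deff):
    """weighted_table: base-weighted counts, conditions x categories."""
    wt = np.asarray(weighted_table, dtype=float)
    scaled = wt * (n_respondents / wt.sum())     # rescale to the sample size
    x2, _, dof, _ = chi2_contingency(scaled, correction=False)
    x2_adj = x2 / avg_deff                       # design-effect deflation
    return x2_adj, chi2.sf(x2_adj, dof)

# Hypothetical base-weighted counts for one demographic (2 conditions x 3 categories).
x2_adj, p = rao_scott_first_order(
    [[120.4, 60.2, 25.4], [95.1, 70.3, 27.6]], n_respondents=399, avg_deff=1.3
)
print(f"Adjusted chi-square = {x2_adj:.2f}, p = {p:.3f}")
```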

With respect to the demographic variables, the mail to one condition tends to generate more completes from older individuals, women, and those who own their home. On the other hand, no noteworthy patterns emerge for respondent race/ethnicity, educational attainment, or the presence of children in the household. For the key health outcome distributions shown in Figure 2, only the self-rating of overall health is marginally significant, with roughly an 8 percentage point difference between conditions in the share of respondents rating their health as excellent, very good, or good. Distributions of smoking status, whether one has had a medical checkup in the last year, and whether one has ever been diagnosed with hypertension, asthma, or diabetes are very similar across the two drop point experiment conditions.

4. Summary

Practitioners utilizing ABS frames in self-administered, mail contact surveys must decide how to handle drop points. A variety of strategies are used in practice, including the two competing methods discussed in Amaya (2017): (1) sampling all units at the drop point or (2) subsampling a portion of them. This paper aimed to provide insight into whether there were any substantive differences between those two approaches. Specifically, we reported results from an experiment fielded during the 2021 HCS in which drop point addresses were treated in one of two ways across two successive sample releases. In the first, a mail to all strategy was employed to effectively sample all units at drop points consisting of 2, 3, or 4 units. In the second, a mail to one strategy was utilized to effectively subsample just a single unit at the drop point.

Our findings can be summarized as follows. The mail to one strategy produced a roughly 3 percentage point increase in response rates. This difference was somewhat more pronounced in three- and four-unit drop point addresses than in those consisting of two units, suggesting any potential diffusion of responsibility effect is reduced in the latter scenario, where there is only one other unit/individual upon which to “pass the buck.” Base-weighted distributions of demographics were disparate enough to be statistically significant in some instances, specifically for respondent gender and age, yet none of the base-weighted distributions of key health outcomes differed significantly. All in all, these results suggest that a mail to one strategy is more efficient than a mail to all strategy.

To be sure, more research is needed to support these findings, especially considering our study’s limitations, which we acknowledge in closing. For one, our study focused on a single major metropolitan area of the United States, and with relatively small (analysis) sample sizes. Furthermore, we did not evaluate a “middle ground” condition subsampling more than one but fewer than all units at the drop point. However, given that 80% of drop points nationwide consist of 2 units (Amaya 2017), that approach would likely not differ much from the two conditions we did evaluate. Last, we did not compare these two approaches in a holistic manner (e.g., with respect to citywide estimates including non-drop point addresses) against other alternatives such as exclusion or substitution, but forthcoming research will do so.


Disclaimer

The conclusions in the paper are those of the authors and do not necessarily represent the views of the Chicago Department of Public Health.

Author Contact Information

Taylor Lewis
701 13th St., NW
Suite 750
Washington, DC 20005
thlewis@rti.org
202-728-1940

Submitted: February 12, 2023 EDT

Accepted: May 04, 2023 EDT

References

Amaya, Ashley. 2017. “RTI International’s Address-Based Sampling Atlas: Drop Points.” RTI Press Publication No. OP-0047-1712. Research Triangle Park, NC: RTI Press. https://doi.org/10.3768/rtipress.2017.op.0047.1712.
Amaya, Ashley, Felicia LeClere, Lee Fiorio, and Ned English. 2014. “Improving the Utility of the DSF Address-Based Frame through Ancillary Information.” Field Methods 26 (1): 70–86.
American Association for Public Opinion Research. 2016. “Task Force on Address-Based Sampling.” https://aapor.org/wp-content/uploads/2022/11/AAPOR_Report_1_7_16_CLEAN-COPY-FINAL-2.pdf.
———. 2023. “Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys.” 10th ed. https://aapor.org/wp-content/uploads/2023/04/Standards-Definitions-10th-edition.pdf.
Barron, Greg, and Eldad Yechiam. 2002. “Private E-Mail Requests and the Diffusion of Responsibility.” Computers in Human Behavior 18 (5): 507–20. https://doi.org/10.1016/s0747-5632(02)00007-9.
Biemer, Paul, Joe Murphy, Stephanie Zimmer, Chip Berry, Grace Deng, and Katie Lewis. 2018. “Using Bonus Monetary Incentives to Encourage Web Response in Mixed-Mode Household Surveys.” Journal of Survey Statistics and Methodology 6 (2): 240–61. https://doi.org/10.1093/jssam/smx015.
Harter, Rachel, Joseph McMichael, and S. Grace Deng. 2022. “New Approach for Handling Drop Point Addresses in Mail/Web Surveys.” RTI Press Publication No. OP-0074-2209. Research Triangle Park, NC: RTI Press. https://doi.org/10.3768/rtipress.2022.op.0074.2209.
Iannacchione, Vincent, Jennifer Staab, and David Redden. 2003. “Evaluating the Use of Residential Mailing Addresses in a Metropolitan Household Survey.” Public Opinion Quarterly 67 (2): 202–10. https://doi.org/10.1086/374398.
Kalton, Graham, Jennifer Kali, and Richard Sigman. 2014. “Handling Frame Problems When Address-Based Sampling Is Used for In-Person Household Surveys.” Journal of Survey Statistics and Methodology 2 (3): 283–304. https://doi.org/10.1093/jssam/smu013.
Lewis, Taylor, Joseph McMichael, and Charlotte Looby. 2023. “Evaluating Substitution as a Strategy for Handling U.S. Postal Service Drop Points in Self-Administered Address-Based Sampling Frame Surveys.” Sociological Methodology 53 (1): 158–75. https://doi.org/10.1177/00811750221147525.
Olson, Kristen, Mathew Stange, and Jolene Smyth. 2014. “Assessing Within-Household Selection Methods in Household Mail Surveys.” Public Opinion Quarterly 78 (3): 656–78. https://doi.org/10.1093/poq/nfu022.
Rao, Jonathan, and Alastair Scott. 1981. “The Analysis of Categorical Data from Complex Sample Surveys: Chi-Squared Tests for Goodness of Fit and Independence in Two-Way Tables.” Journal of the American Statistical Association 76 (374): 221–30. https://doi.org/10.1080/01621459.1981.10477633.
RTI International. 2021. “2020 Healthy Chicago Survey (HCS) and 2021 Healthy Chicago Survey (HCS) COVID-19 Social Impact Survey (COVID SIS): Methodology Report.” https://www.chicago.gov/content/dam/city/depts/cdph/CDPH/Healthy%20Chicago/2020_HCS_Methodology_Report_COVID_SIS2_093020201.pdf.
Shook-Sa, Bonnie, Douglas Currivan, Joseph McMichael, and Vincent Iannacchione. 2013. “Extending the Coverage of Address-Based Sampling Frames: Beyond the USPS Computerized Delivery Sequence File.” Public Opinion Quarterly 77 (4): 994–1005. https://doi.org/10.1093/poq/nft041.
Unangst, Jennifer, Taylor Lewis, Emily Laflamme, Nik Prachand, and Kingsley Weaver. 2022. “Transitioning the Healthy Chicago Survey from a Telephone Mode to Self-Administered by Mail Mode.” Journal of Public Health Management & Practice 28 (3): 309–16. https://doi.org/10.1097/phh.0000000000001512.
United States Postal Service. 2017. “CDS User Guide.” https://postalpro.usps.com/cds/User_Guide.
