Survey Practice
October 08, 2025 EDT

Handling Drop Points in Surveys

Ned English, Stas Kolenikov, Katie Johnson, Katie Archambeau, Martha McRoy
Keywords: drop points, ABS sample, address-based sample, drop units, USPS CDS
https://doi.org/10.29115/SP-2025-0014
English, Ned, Stas Kolenikov, Katie Johnson, Katie Archambeau, and Martha McRoy. 2025. “Handling Drop Points in Surveys.” Survey Practice 18 (October). https://doi.org/10.29115/SP-2025-0014.


Abstract

Survey designers need to consider which categories of addresses to include in address frames, often defined by “delivery type,” e.g., residential, business, PO Box, and others. Drop points can be a particularly challenging category because they are non-differentiated multi-unit addresses. Specifying a selected housing unit from the computerized delivery sequence file (CDS) is thus not directly possible, which complicates common processes associated with address-based sample (ABS) designs, including vendor matching and demographic appends as well as basic data collection. Our paper first reviews solutions the research industry has considered over the past decade for including drop points in multi-mode ABS designs. We then demonstrate a variety of imputation methods used at NORC to assign unit labels to drop point addresses. We conclude by describing how each imputation method offers coverage and representation advantages over previous approaches.

Introduction

“Drop points” are units in multi-unit buildings where the United States Postal Service (USPS) delivers mail to the entry point only, with little to no information available about the unit numbers beyond it (B. Shook-Sa et al. 2013).

Common examples of drop points include:

  1. Large multi-unit apartment buildings with up to hundreds of apartments, where the mail is sorted by the apartment complex management or building superintendent.

  2. Small apartment buildings including converted single-family homes with relatively few units. In such instances the owner may reside on-site and have a regular delivery address within the building. These buildings may also have unique unit types such as “Basement” or “Front/Rear”. These small drop points account for nearly 95% of all drop points (A. E. Amaya 2017).

  3. Off-campus college housing.

  4. Mobile home parks in rural areas, with designations like “Lot” and “Trailer” commonly used.

While drop points represent only a small share of housing units nationally (less than 2%), they are concentrated in specific cities and metropolitan areas and thus require special handling in surveys that target or oversample such geographies. Drop points pose a methodological challenge for both address-based sample (ABS) and in-person area-probability surveys. Most ABS designs involve multiple mailed contacts, including web-push letters and self-administered questionnaires, some with incentives (Dillman, Smyth, and Christian 2014). Survey organizations need to ensure that such mailings are all sent to, and consistently received by, the selected housing unit, maintaining the integrity of the original probabilistic design.

Existing Literature

Survey organizations have taken an interest in determining best practices for sampling and interviewing the drop point population over the past decade (Kalton, Kali, and Sigman 2014). Field observation studies have shown that most units identified as “drop points” on the USPS Computerized Delivery Sequence File (CDS or CDSF) do indeed lack visible unit identification on the units themselves (A. Amaya et al. 2014). Although unit numbers may not exist in reality, McPhee and Dutwin (2019) found that appending even an artificial unit number increased response rates. The authors suggested imputing unit information from sources such as the USPS NoStat file[1], information from outside vendors, and city-specific real estate databases where available. The NoStat file does provide unit numbering information for a small subset of drop points on the regular CDS (B. Shook-Sa et al. 2013; B. E. Shook-Sa 2014).

Direct substitution has been used in surveys such as the 2020 Residential Energy Consumption Survey (RECS), the 2020 New York State Problem Gambling Prevalence Survey, and the Healthy Chicago Survey (Harter, McMichael, and Deng 2022). If a drop point is selected as part of the sample, it is replaced with a non-drop point unit from a nearby building with the same unit count. The substitution approach rests on the implicit assumption that survey responses will be similar between drop point occupants and non-drop point occupants. A. Amaya et al. (2014) found that buildings with drop point units are often structurally similar to neighboring buildings not flagged as containing drop points. Dekker et al. (2012), however, observed that drop point units tended to be in older buildings and that their occupants were more likely to be low-income and members of ethnic minorities. Further, Lewis, McMichael, and Looby (2023) concluded that occupants of drop point units tended to be older and less likely to be employed for wages than their neighbors. They also observed that occupants of drop point units were more likely to own their homes, which may reflect the design of the study in question and the possibility that mail lacking a unit designation is opened by the building’s owner. In any case, the literature thus far is inconclusive on the best approach for handling drop points.

Prevalence

Drop point units represent approximately 1.3% of residential addresses according to the January 2024 version of the CDSF[2]. While relatively inconsequential nationally, they are found in much greater numbers in specific cities in the Northeast and Midwest.

The New York metropolitan area has the greatest number and share of residential drop point units in the U.S., accounting for one-third of the national total (33.6%). The New York counties of Kings (Brooklyn), Queens, and Richmond (Staten Island) are among those with the largest shares of drop point units among residential addresses (22.0%, 26.5%, and 25.9%, respectively). Other areas of high residential drop point concentration include Cook County (Chicago) and Suffolk County (Boston), where drop points represent 7.8% and 7.9% of the residential addresses in the CDS, respectively. Excluding drop points in these areas could bias survey results through undercoverage and underrepresentation (Table 1).

Table 1. Drop Point Prevalence in Major Metropolitan Areas

Area | Total Drop Point Units | Percent Drop Points of County Residential Addresses | Percent of All U.S. Drop Point Units
New York City | 609,088 | 17.1% | 33.6%
Brooklyn (Kings County) | 236,693 | 22.0% | 13.1%
Queens County | 229,500 | 26.5% | 12.7%
Staten Island (Richmond County) | 45,639 | 25.9% | 2.5%
Bronx County | 36,296 | 6.8% | 2.0%
Manhattan (New York County) | 60,980 | 6.6% | 3.4%
Chicago (Cook County) | 174,962 | 7.8% | 9.7%
Boston (Suffolk County) | 28,041 | 7.9% | 1.5%
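For illustration, the two percentage columns in Table 1 are simple ratios; the sketch below reproduces them in Python, with county residential totals back-derived from the table’s own percentages (assumed values for illustration, not published figures).

import pandas as pd

# Drop point counts are from Table 1; residential totals are back-derived
# from the table's percentages and are illustrative, not published figures.
counties = pd.DataFrame({
    "area": ["Brooklyn (Kings)", "Queens", "Richmond (Staten Island)"],
    "drop_units": [236_693, 229_500, 45_639],
    "residential_units": [1_075_877, 866_038, 176_212],  # implied by Table 1
})
US_DROP_TOTAL = 1_806_817  # implied by 236,693 units being 13.1% of the U.S. total

counties["pct_of_county"] = 100 * counties["drop_units"] / counties["residential_units"]
counties["pct_of_us"] = 100 * counties["drop_units"] / US_DROP_TOTAL
print(counties.round(1))  # reproduces the 22.0/26.5/25.9 and 13.1/12.7/2.5 columns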

Drop points are also encountered across the United States in college towns and rural areas. In fact, the highest concentration of drop points in the U.S. is found in Athens County, Ohio, home to Ohio University, where 33.9% of residential addresses are drop point units. Athens County is followed by Liberty County, Montana, a rural north-central county with a population of under 2,000 people, where 30.9% of addresses are drop point units. While drop point considerations usually focus on major metropolitan areas because of the volume involved, surveys administered in college towns or rural areas need to consider them as well.

Potential Solutions

The literature suggests several potential solutions for handling drop points in address frames (A. E. Amaya 2017).

Exclude from frame: the survey organization excludes drop points from the universe, resulting in a coverage reduction. Selection probabilities and sampling weights are then appropriately adjusted.

Substitute: drop points are retained on the frame, but if any are sampled, the survey organization substitutes the drop point unit for a nearby regular, non-drop point unit (Harter, McMichael, and Deng 2022; Lewis, McMichael, and Looby 2023). Selection probabilities are based on the originally-selected housing unit.

Small-only: the survey organization samples drop point units only in small buildings, for example, buildings with four or fewer units. Smaller buildings would be expected to have more regular unit numbering, and they are more prevalent than larger buildings. Drop points in larger buildings would be excluded from the frame, as in the “Exclude from frame” option.

Mail-all: the survey organization mails as many survey invitations as there are drop point units at the building address, without specifying any unit information. It is up to the residents, or the mail carrier, to distribute the surveys. Completed surveys are treated as a random subsample of units at the address.

Mail-one: the survey organization mails one survey to the building, without specifying any unit information. Which unit receives or responds to the mailing is essentially haphazard.

NoStat file: The NoStat file is a supplemental file to the CDS that provides unit number information for a limited subset of drop point units (B. Shook-Sa et al. 2013; B. E. Shook-Sa 2014). This approach provides drop point unit numbers as available from the NoStat file.

Impute sequentially: One relatively simple solution gives each unit a successive number from 1 to the total number of known drop units at the address. This approach could work both for ABS applications, where the unit information can be designated as “UNIT #,” and for in-person applications, where field interviewers can be instructed to verify the unit count, visit the sampled units following the consecutive numbering, and make necessary edits. It is particularly appealing for in-person surveys, where field interviewers pay face-to-face visits to the sampled addresses and can update the sampled address as needed. We found that USPS records addresses with an unknown count of drop units as 999, so that value should be treated as a sentinel rather than an actual count. A minimal sketch of this numbering follows.
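The sketch below illustrates sequential imputation in Python; the function name and the shape of the inputs are illustrative assumptions, not a production routine.

# Minimal sketch of sequential unit imputation for a drop point address.
UNKNOWN_COUNT = 999  # USPS sentinel for an unknown number of drop units

def impute_sequential(address: str, drop_count: int) -> list[str]:
    """Assign placeholder labels UNIT 1..UNIT n to a drop point address."""
    if drop_count == UNKNOWN_COUNT:
        raise ValueError(f"Unit count unknown at {address}; enumerate in the field")
    return [f"{address} UNIT {i}" for i in range(1, drop_count + 1)]

print(impute_sequential("123 MAIN ST", 3))
# ['123 MAIN ST UNIT 1', '123 MAIN ST UNIT 2', '123 MAIN ST UNIT 3']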

Impute based on schemas: The survey organization uses probabilistic modeling to impute missing unit information based on complete unit information from addresses of similar building size and geography. Complete unit information is classified into address schemas, such as units identified by a single letter (A, B, C, etc.) or by a number followed by a direction (1N, 1S, 2N, 2S). The probabilities of the address schemas are then used to impute drop point unit information. We found that schemas differ substantially among the major cities we analyzed (New York, Chicago, and Boston). An example of the drop point designations for Chicago is given in Table 2. The floor-number-with-direction schema is found only in this city, because most streets in Chicago lie on a grid. This schema would make no sense for the older city of Boston, which features a cobweb of smaller streets traversing the city in irregular directions. We developed such model-based imputation for mailing outreach because we expect that providing complete unit information, rather than an ordered number, makes mailings more likely to repeatedly reach the intended unit, especially when the actual numbering schema is more complex. For example, it would be unreasonable to expect the persons sorting mail in a building to correctly link a sequentially numbered “UNIT 7” to unit “APT 4A” a priori, exposing the mailing to misdelivery.

Table 2. Address-Level Schemas for Non-Drop Point Addresses in Chicago, IL

Address Schema | Example | Non-Drop Point Addresses (n) | Non-Drop Point Addresses (%)
Number Only Unit | 1, 2, 3, 4 | 65,886 | 54.2
Floor Number with Direction | 1N, 1S, 2N, 2S | 11,967 | 9.8
Number Only Units and Letter Only Units | 1, 2, A, B | 6,482 | 5.3
Basement Unit followed by Number Only Unit | BSMT, 1, 2, 3 | 5,709 | 4.7
Floor Number and Two-Digit Unit Number | 101, 102, 103, 104 | 4,853 | 4.0

Imputing unit information based on modeling allows researchers to derive complete unit designations from the unit information of addresses in the surrounding geographic area with a similar building size and geometry (number of floors). The first step of imputation is to classify surrounding multi-unit addresses into schemas that can be applied to drop point addresses. Our imputation process then considers all unit-level schemas at an address to determine an overall address schema and assigns probabilities based on the similarity of the address characteristics. A minimal sketch of this two-step idea follows.
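The sketch below is a simplified illustration of schema-based imputation, assuming a hypothetical schema catalog and externally estimated prevalence weights; it is not NORC’s production model.

import random

# Hypothetical schema catalog: schema name -> unit-label generator
SCHEMAS = {
    "number_only": lambda n: [str(i) for i in range(1, n + 1)],
    "letter_only": lambda n: [chr(ord("A") + i) for i in range(n)],
    "floor_direction": lambda n: [f"{i // 2 + 1}{'NS'[i % 2]}" for i in range(n)],
}

def impute_units(n_units, schema_probs, rng):
    """Draw a schema with probability proportional to its prevalence among
    similar nearby non-drop point addresses, then generate unit labels."""
    names = list(schema_probs)
    schema = rng.choices(names, weights=[schema_probs[s] for s in names], k=1)[0]
    return SCHEMAS[schema](n_units)

# Example: prevalence weights estimated from similar 4-unit buildings nearby
rng = random.Random(42)
probs = {"number_only": 0.6, "floor_direction": 0.3, "letter_only": 0.1}
print(impute_units(4, probs, rng))  # e.g., ['1N', '1S', '2N', '2S']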

Interaction with Mode

The best solution to the challenge of drop points likely depends on the survey mode. As more surveys transition to mixed modes, especially using both mail and in-person outreach, researchers need to have strategies in place to handle drop points consistently.

In-person: An advantage of in-person fieldwork is that drop points can be enumerated at the selected address, with the unit numbering schemas properly identified by interviewers. When interviewers are provided unit addresses, they must correctly identify the sampled unit; if questions arise or the unit cannot be located, they would reach out to their supervisor for instructions. One potential solution would be to provide interviewers with both the unit number from the imputed schema and the sequential number of the unit. Interviewers could then record additional information in their contact tracking on the actual schema identified at the drop point and the sampled unit’s official number.

Mail: Tracking returned undeliverable mail and completed interviews would allow further analyses of drop point numbering. A database of receipted undeliverable mail could be used to help determine appropriate numbering approaches for drop points. Changes made to addresses by mail vendors during standardization could likewise be used to enhance future mail samples in an adaptive manner, as in the sketch below.
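One hypothetical way such feedback could be used, purely as an illustration (the function and data structures are assumptions, not a system described in the paper): schemas whose mailings come back undeliverable are down-weighted in future imputation.

def update_schema_weights(weights, undeliverable, mailed):
    """Re-weight imputation schemas by observed delivery success.

    weights:       schema -> current prevalence weight
    undeliverable: schema -> count of mailings returned undeliverable
    mailed:        schema -> count of mailings sent under that schema
    """
    adjusted = {
        s: w * (1 - undeliverable.get(s, 0) / max(mailed.get(s, 1), 1))
        for s, w in weights.items()
    }
    total = sum(adjusted.values())
    return {s: w / total for s, w in adjusted.items()}

weights = {"number_only": 0.6, "floor_direction": 0.4}
print(update_schema_weights(weights,
                            {"number_only": 10},
                            {"number_only": 100, "floor_direction": 80}))
# number_only is down-weighted after 10% of its mailings bounce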

Testing Imputation

NORC has implemented sequential numbering and model imputation on a number of ABS projects in an exploratory manner.

Estimate of HIV Risk for Illinois Population (EHRIP) 2023: EHRIP was an Illinois-based public health survey with web push, mailed self-administered questionnaire (SAQ), and telephone follow-ups. EHRIP imputed drop points sequentially. Imputed drop point units had essentially the same response rates as non-drop point units in similarly sized buildings: a drop point indicator was not significant when included in a logistic regression model of response, implying that mailings were delivered at similar rates. A sketch of this type of check follows.
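The sketch below shows the general form of such a check using statsmodels; the toy data frame and column names are illustrative assumptions, not the EHRIP data or model specification.

import pandas as pd
import statsmodels.formula.api as smf

# Toy case-level data: response outcome, drop point indicator, building size
df = pd.DataFrame({
    "responded":  [1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "drop_point": [1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1],
    "bldg_size":  [4, 4, 8, 4, 8, 4, 8, 8, 4, 4, 8, 8],
})

# Logistic regression of response on the drop point indicator, controlling
# for building size; a non-significant drop_point coefficient is consistent
# with comparable delivery and response for imputed drop point units
model = smf.logit("responded ~ drop_point + bldg_size", data=df).fit(disp=0)
print(model.summary())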

Estimate of HIV Risk for Illinois Population (EHRIP) 2024: In the subsequent round of the EHRIP survey, model-based imputation of drop point units was applied based on a Chicago-wide address schema. The imputed results were passed through the edit routine of a commercial bulk mail provider, with only a 5% edit rate. Drop points did have lower match rates to commercial auxiliary data than both single-family homes and non-drop point apartments. Drop points also had a substantially higher share of respondents who were Hispanic/Latino or African American than other delivery types. While the response rates of the drop point addresses were lower, the effect was entirely mediated by additionally distinguishing apartments from single-family addresses. Drop points had the lowest response rates[3] (RR1 = 5.4%), followed by apartments (RR1 = 6.5%), both lower than that of single-family structure units (RR1 = 7.8%).
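For reference, the AAPOR Response Rate 1 cited here (see footnote 3) counts only complete interviews in the numerator; in AAPOR’s standard notation:

\[ \mathrm{RR1} = \frac{I}{(I + P) + (R + NC + O) + (UH + UO)} \]

where I is complete interviews, P partial interviews, R refusals and break-offs, NC non-contacts, O other eligible non-interviews, and UH and UO cases of unknown eligibility.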

Religious Landscape Survey (RLS) 2023: NORC conducted a nationally representative ABS mail push-to-web public opinion survey for Pew Research Center. Within Chicago, Boston, and New York City, approximately 14% of sampled units were drop points. NORC implemented sequential imputation and compared the response, contact, and undeliverable rates between drop points and other apartments within each county. For smaller buildings with fewer than 10 units, sequential numbering performed well: drop points had lower rates of undeliverable returns with similar response and contact rates (χ² = 33.48, p < 0.001). The sequential numbering approach was less successful for larger buildings, where drop points had higher undeliverable rates and lower response and contact rates (χ² = 170.78, p < 0.001) (Table 3; a sketch of this type of comparison follows the table).

Table 3. Disposition Rates by Building Size and Unit Type in Chicago, Boston, and New York City

City | County | Unit type | Building size | Response rate (RR1) (%) | Contact rate (CON1) (%) | Undeliverable rate (%) | Number of units
Chicago, IL | Cook | Regular apt | Small (<10) | 20.4 | 21.9 | 10.3 | 765
Chicago, IL | Cook | Regular apt | Large (10+) | 15.7 | 17.4 | 10.7 | 271
Chicago, IL | Cook | DP unit | Small | 17.9 | 19.7 | 6.7 | 180
Chicago, IL | Cook | DP unit | Large | 0 | 0 | 18.2 | 11
Boston, MA | Suffolk | Regular apt | Small | 16.3 | 17.9 | 7.3 | 205
Boston, MA | Suffolk | Regular apt | Large | 15.7 | 18.0 | 8.3 | 97
Boston, MA | Suffolk | DP unit | Small | 10.7 | 14.3 | 0 | 28
Boston, MA | Suffolk | DP unit | Large | 0 | 0 | 0 | 2
New York City, NY | Bronx (Bronx) | Regular apt | Small | 9.1 | 11.6 | 7.6 | 487
New York City, NY | Bronx (Bronx) | Regular apt | Large | 15.3 | 17.1 | 8.4 | 357
New York City, NY | Bronx (Bronx) | DP unit | Small | 9.8 | 13.1 | 6.2 | 65
New York City, NY | Bronx (Bronx) | DP unit | Large | 0 | 0 | 33.3 | 6
New York City, NY | Kings (Brooklyn) | Regular apt | Small | 13.2 | 15.4 | 5.3 | 1,038
New York City, NY | Kings (Brooklyn) | Regular apt | Large | 13.3 | 14.6 | 5.2 | 404
New York City, NY | Kings (Brooklyn) | DP unit | Small | 14.0 | 15.0 | 2.0 | 407
New York City, NY | Kings (Brooklyn) | DP unit | Large | 0 | 33.3 | 50.0 | 6
New York City, NY | New York (Manhattan) | Regular apt | Small | 16.6 | 18.2 | 8.9 | 675
New York City, NY | New York (Manhattan) | Regular apt | Large | 16.6 | 17.9 | 4.7 | 917
New York City, NY | New York (Manhattan) | DP unit | Small | 0 | 0 | 0 | 5
New York City, NY | New York (Manhattan) | DP unit | Large | 7.0 | 14.1 | 38.3 | 115
New York City, NY | Queens (Queens) | Regular apt | Small | 11.3 | 13.6 | 4.2 | 546
New York City, NY | Queens (Queens) | Regular apt | Large | 17.8 | 19.6 | 3.9 | 334
New York City, NY | Queens (Queens) | DP unit | Small | 12.7 | 14.9 | 1.4 | 423
New York City, NY | Queens (Queens) | DP unit | Large | 0 | 100 | 0 | 1
New York City, NY | Richmond (Staten Island) | Regular apt | Small | 9.3 | 13.0 | 6.9 | 58
New York City, NY | Richmond (Staten Island) | Regular apt | Large | 6.9 | 10.3 | 3.3 | 30
New York City, NY | Richmond (Staten Island) | DP unit | Small | 20.8 | 23.4 | 1.3 | 78
New York City, NY | Richmond (Staten Island) | DP unit | Large | -- | -- | -- | 0
All units in seven counties above | | | | 15.1 | 17.1 | 6.1 | 9,207
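Comparisons like the χ² tests reported above can be run directly on disposition counts; the sketch below uses illustrative counts, not the RLS figures.

from scipy.stats import chi2_contingency

# Rows: drop point units vs. regular apartments (small buildings);
# columns: undeliverable vs. delivered mailing counts (illustrative)
table = [[12, 168],
         [79, 686]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")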

Summary and Next Steps

Our results thus far indicate that drop points, both anecdotally and empirically, are associated with less stable housing situations and populations that leave little digital footprint in consumer databases. An imputation-based approach is feasible in large-scale surveys that use the CDS as a sampling frame, whether mail push-to-web or in-person. Imputation does not appear to depress response rates, particularly for smaller buildings. A limitation of this research is that the two proposed drop point imputation methods have not been compared to each other directly, as only one imputation method was used within each study. Instead, response rates from drop points were compared with those of apartments and single-family households, which could be confounded by differences in the characteristics of residents of each unit type.

Since retaining and imputing drop points improves coverage of the residential population, we consider it preferable to other potential options, including replacement. Regardless, our research has shown that there are several approaches to handling drop points in surveys that are preferable to omitting them from the universe. Looking ahead, we plan additional validation of our imputation approach in the coming months.


Corresponding Author Contact Information

Ned English
english-ned@norc.org


  1. The NoStat or “No Statistics” file is an administrative file from the United States Postal Service that contains limited information about non-mailable addresses including drop points (B. Shook-Sa et al. 2013; B. E. Shook-Sa 2014).

  2. NORC uses CDSF under license from Vericast.

  3. AAPOR Response Rate 1

Submitted: July 25, 2025 EDT

Accepted: September 22, 2025 EDT

References

Amaya, A. E. 2017. “RTI International’s Address-Based Sampling Atlas: Drop Points.” RTI Press Publication OP-0047-1712. RTI Press. https://doi.org/10.3768/rtipress.2017.op.0047.1712.
Amaya, A., F. LeClere, L. Fiorio, and N. English. 2014. “Improving the Utility of the DSF Address-Based Frame through Ancillary Information.” Field Methods 26 (1): 70–86. https://doi.org/10.1177/1525822X13516839.
Dekker, K., A. Amaya, F. LeClere, and N. English. 2012. “Unpacking the DSF in an Attempt to Better Reach the Drop Point Population.” Proceedings of the Joint Statistical Meeting, Section on Survey Research Methods, 4596–4604. http://www.asasrms.org/Proceedings/y2012/Files/305686_75228.pdf.
Dillman, D. A., J. D. Smyth, and L. M. Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. United Kingdom: Wiley. https://doi.org/10.1002/9781394260645.
Harter, R., J. McMichael, and S. G. Deng. 2022. New Approach for Handling Drop Point Addresses in Mail/Web Surveys. RTI Press.
Kalton, G., J. Kali, and R. Sigman. 2014. “Handling Frame Problems When Address-Based Sampling Is Used for In-Person Household Surveys.” Journal of Survey Statistics and Methodology 2 (3): 283–304. https://doi.org/10.1093/jssam/smu013.
Lewis, T., J. McMichael, and C. Looby. 2023. “Evaluating Substitution as a Strategy for Handling U.S. Postal Service Drop Points in Self-Administered Address-Based Sampling Frame Surveys.” Sociological Methodology 53 (1): 158–75. https://doi.org/10.1177/00811750221147525.
McPhee, C., and D. Dutwin. 2019. “Picking up Drop Points in Address-Based Samples.” Conference presentation at the MAPOR Annual Conference, Chicago, IL, United States.
Shook-Sa, B. E. 2014. “Improving the Efficiency of Address-Based Sampling Frames with the USPS No-Stat File.” Survey Practice 7 (4): 1–10. https://doi.org/10.29115/SP-2014-0018.
Shook-Sa, B., D. Currivan, J. McMichael, and V. Iannacchione. 2013. “Extending the Coverage of Address-Based Sampling Frames: Beyond the USPS Computerized Delivery Sequence File.” Public Opinion Quarterly 77 (4): 994–1005. https://doi.org/10.1093/poq/nft041.
