Vol. 4, Issue 5 (September 30, 2011)

A Direct Comparison of ABS and Telephone Sampling in a Pilot Study of Children’s Health

Mary Losch, Peter Damiano, Jean Willard, Anne Bonsall Hoekstra, Ki Park, Duoc Nguyen
Losch, Mary, Peter Damiano, Jean Willard, Anne Bonsall Hoekstra, Ki Park, and Duoc Nguyen. 2011. “A Direct Comparison of ABS and Telephone Sampling in a Pilot Study of Children’s Health.” Survey Practice 4 (5). https://doi.org/10.29115/SP-2011-0022.


Abstract


This pilot study was designed to determine the relative strengths and weaknesses of ABS (mixed-mode web/telephone) as a sampling alternative to list-assisted RDD in a statewide, targeted-population study of children’s health. Additionally, the pilot served as a feasibility study aimed at determining whether ABS can be used effectively for small studies (relative to national-scale data collection) that do not rely on substantial budgets. The results of this pilot study support the contention that ABS sampling improves coverage (i.e., reduces coverage error) compared with traditional RDD sampling.

Background

In the early 2000s, articles examining addressed-based alternatives to traditional approaches to sampling began to appear in the literature. Early uses of the United States Postal Service (USPS) Delivery Sequence File (DSF) were focused on the search for more cost-effective approaches to on-site enumeration for in-person surveys (e.g., Iannacchione, Staab, and Redden 2003). Survey scientists at RTI (Iannacchione and colleagues) and at NORC conducted several extensive comparisons of address-based samples and household unit (HU) field enumeration (see O’Muircheartaigh, Eckman, and Weiss 2002; O’Muircheartaigh et al. 2005). These findings supported the value of the DSF and address-based samples as a less expensive sampling alternative.

Falling response rates and, more recently, growing concerns about coverage bias and the complexity of combining cell phone samples with landline samples have prompted several alternatives to random digit dialing (RDD) sampling for telephone surveys. Although it is not the only sampling response to coverage-bias concerns in telephone surveys (e.g., Guterbock et al. 2008; Lambert, Langer, and McMenemy 2010), address-based sampling (ABS) has received the most attention as a possible alternative to traditional RDD sampling for population-based surveys.

Extending the early work at RTI and NORC, Michael Link and his colleagues have conducted several large studies that expand the knowledge base about the feasibility and strengths of ABS for probability population surveys (e.g., M. Link et al. 2005; M. W. Link et al. 2008). In addition, a number of large national studies are either now testing or have included ABS in their sample designs for some or all of their data collection. Examples include the CDC’s REACH U.S. survey, conducted by NORC (see Barron 2009, for detailed information), the American National Election Studies, and the National Cancer Institute’s Health Information National Trends Study.

Comparing ABS and RDD in a Statewide Study of Children’s Health

The findings to date suggest that ABS holds promise as a valuable alternative to RDD sampling. However, there are aspects of ABS sampling that may limit its utility, especially where resources and field periods are limited. ABS requires at least one mail contact for each case, and unless a relatively low-cost data collection mode is offered (e.g., web, self-administered mailback), the costs of programming, materials, labor for mailing preparation, and postage can be prohibitive. In addition, if a mixed- or multi-mode approach is adopted, additional field time is needed to allow for the completion of web or self-administered questionnaires before beginning telephone data collection. A further question requiring examination is the practicality of ABS designs for studies with modest budgets in which only a subset of the population is targeted and the group cannot be identified by location (e.g., through geocoding).

Methods

In April 2010, the University of Northern Iowa Center for Social and Behavioral Research (CSBR) conducted a pilot study of children’s health using both list-assisted RDD and ABS sampling. The population of interest was households with children under the age of 18. The pilot was completed prior to launching the 2010 statewide study, which was the third survey of children’s health needs conducted by the University of Iowa Public Policy Center in collaboration with the Iowa Department of Public Health. The goal of the pilot was to provide approximately 300 completed interviews for each sampling design. To reduce anticipated costs, a web mode was added to the ABS design. Based on the estimate that 25% of households have a child under 18 and an anticipated response rate of 40%, an ABS sample frame of 3,000 addresses was purchased from Marketing Systems Group (MSG). A list-assisted RDD sample of approximately 7,000 numbers, including a portion of households targeted for children under 18, was also purchased from MSG and used for the telephone group. This approach was consistent with sampling designs used in earlier data collection efforts in this study series. In order to test the actual yield of the ABS sample, no differential targeting of households was included.
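As a quick check, the 3,000-address frame size follows directly from these assumptions; a minimal arithmetic sketch (the eligibility and response figures are the paper’s stated estimates):

```python
# Expected ABS yield under the pilot's stated assumptions:
# 25% of households have a child under 18; 40% anticipated response rate.
frame_size = 3_000        # ABS addresses purchased from MSG
p_eligible = 0.25         # estimated share of households with a child under 18
response_rate = 0.40      # anticipated response rate among eligible households

expected_completes = frame_size * p_eligible * response_rate
print(expected_completes)  # 300.0 -- matches the goal of ~300 interviews per design
```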

ABS Sample. In the ABS group, a packet was mailed to each of the 3,000 addresses on April 6, 2010. A unique 6-character alphanumeric code was assigned to each household address for tracking. The packet included an information letter with instructions for completing a web questionnaire (including a unique web access code) and a note indicating that we would call if the web mode was not accessed within the next week or so. A contact information card (with access code) was also included for households to indicate eligibility (child under 18 in the home), phone number preference, and preferred call times. A business reply envelope was included for return of the contact/eligibility card. A reminder postcard was mailed to those who did not return the contact card or complete the web option within 10 days.
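For illustration only, one way such unique tracking codes might be generated; this is a sketch, not the authors’ actual procedure, and the alphabet and code length are assumptions based on the description above:

```python
import secrets
import string

# Hypothetical sketch: assign each sampled address a unique 6-character
# alphanumeric tracking code, as described in the mailing protocol above.
ALPHABET = string.ascii_uppercase + string.digits

def assign_codes(n_addresses: int, code_len: int = 6) -> list[str]:
    codes: set[str] = set()
    while len(codes) < n_addresses:  # regenerate on collision until all unique
        codes.add("".join(secrets.choice(ALPHABET) for _ in range(code_len)))
    return sorted(codes)

codes = assign_codes(3_000)
print(len(codes), codes[0])  # 3000 unique codes, e.g. '0A3K9X'
```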

The ABS sample yielded the following profile with regard to phone numbers appended and provided:

  • 30%: no number (no phone number in the sample or provided on the contact card)
  • 69%: no preferred number (the phone number in the sample was the only number available)
  • 1%: same number (the number in the sample matched the preferred number provided on the contact card)
  • 0.5%: provided a number or provided a different number

About 10 days after the initial mailing, ABS cases for which no web response had been received were moved to the CATI facility for phone follow-up. Because residential addresses were the unit of analysis, and because generic “[CITY] Household” labels were used, undeliverable packets (n=163) were removed from further contact and considered ineligible. Phone numbers received up to 15 attempts to reach a final disposition for those who returned contact/eligibility cards and up to 10 attempts for all others. The mean number of attempts was 4.3, and the average interview length was 17.6 minutes. ABS data collection ended when all available phone numbers had received the maximum number of attempts.

List-Assisted RDD Sample. Data collection for the RDD group began on April 19, 2010. The sample was released in replicates, and fewer than a third of the RDD sample numbers were required to reach the goal of 300 completed interviews. All phone numbers received up to 10 attempts to reach a final disposition. The mean number of attempts was 3.6, and the average interview length was 16.9 minutes.

Results

Sample sizes, costs, response rates, and field periods. Table 1 summarizes the comparisons between the ABS and RDD designs. The RDD sample yielded more interviews overall; the ABS design fell slightly short of the goal of 300 completed interviews. The costs of the ABS approach were much higher than those of RDD in this pilot. In addition to the costs associated with the mailing packet (materials, postage, mailing preparation) and web programming, substantial administrative costs were incurred to manage the day-to-day entry of information from contact cards, track web completes in the database, update phone numbers, and schedule follow-up calls. The bulk of the roughly 75% difference in cost per interview was in staff salaries/wages.

Table 1  Comparison of sample sizes, costs, response rates, and field periods.

Comparison Category     RDD                    ABS
Completed Interviews    339                    279
Cost Per Interview      $38                    $67
RR3 / COOP3 (%)         44 / 84                46 / 86
Field Period            4/19–5/12 (24 days)    4/6–5/13 (38 days)

Using the AAPOR RR3 and COOP3 calculations, the ABS design yielded slightly higher response and cooperation rates overall. At 38 days in the field, however, the ABS field period was roughly 58% longer than the 24-day RDD field period.
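For readers less familiar with these rates, a minimal sketch of the standard AAPOR formulas follows; the disposition counts in the example are hypothetical, not the pilot’s actual counts:

```python
# Sketch of AAPOR Response Rate 3 (RR3) and Cooperation Rate 3 (COOP3).
# All disposition counts used below are hypothetical, for illustration only.

def rr3(I, P, R, NC, O, UH, UO, e):
    """RR3 = I / ((I + P) + (R + NC + O) + e*(UH + UO)).
    I: completes, P: partials, R: refusals, NC: non-contacts, O: other,
    UH/UO: unknown-eligibility cases, e: estimated share of the
    unknown-eligibility cases that are actually eligible."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

def coop3(I, P, R):
    """COOP3 = I / ((I + P) + R): completes over contacted eligible units,
    excluding those incapable of cooperating."""
    return I / ((I + P) + R)

print(f"RR3   = {rr3(I=300, P=20, R=120, NC=150, O=30, UH=200, UO=100, e=0.4):.2f}")
print(f"COOP3 = {coop3(I=300, P=20, R=120):.2f}")
```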

Coverage. As found in other studies, the ABS design provided greater coverage of the population. Table 2 summarizes key coverage comparisons for child’s race/ethnicity, household income, education, and absence of a landline telephone. All results are unweighted. The ABS sample yielded greater racial and ethnic diversity, a greater percentage of lower-income respondents, more respondents with lower educational attainment, and more cell-phone-only households. Notable differences were seen in race (ABS 5 percentage points lower than RDD for white respondents), percent reporting incomes over $80,000 (ABS 13 percentage points lower than RDD), percent reporting a high school education or less, and absence of a landline (6% of ABS respondents reported no landline).

Table 2  Coverage comparisons: ethnicity, race, income, education, and absence of landline.

Coverage                     RDD    ABS
Hispanic Ethnicity           3%     5%
White Race                   97%    92%
Household Income $80,000+    52%    39%
Household Income <$25,000    6%     11%
≤ HS Education               15%    20%
Absence of Landline          0%     6%

Response Comparisons. A number of variables were selected for comparison across the two designs: functional health status, chronic condition diagnosis, activity level, services used, access to care, preventive care in the previous year, and prescription medications in the previous year. All values reflect unweighted data.

Table 3 provides a summary of the RDD-ABS comparison for the selected variables.

Table 3  Survey response comparisons.

Variable                                                            RDD    ABS
Health Status (Excellent)                                           67%    65%
Chronic Condition (Yes)                                             12%    11%
Mean Hours of TV/Video per Day                                      1.5    1.6
Does Child Need/Use More Medical Care than Others? (Yes)            11%    14%
Does Child Have Doctor/Nurse? (Yes)                                 91%    92%
Any Time the Child Could Not Get Care in the Last 12 Months? (Yes)  2%     2%
Last Preventive Care Visit within 12 Months (Yes)                   88%    85%
Prescription Medication Needed in the Last 12 Months (Yes)          35%    41%

With the exception of prescription medication needed in the last year, no variables showed marked differences that fell outside the expected sampling variability for each group.
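A two-proportion z-test is one simple way to gauge whether such gaps exceed sampling variability. A rough sketch using the prescription-medication item, with the rounded percentages from Table 3 and the overall completed-interview counts from Table 1 standing in for the item-level ns:

```python
from statistics import NormalDist

# Two-proportion z-test sketch: 35% of 339 RDD vs. 41% of 279 ABS respondents.
# Inputs are rounded table values, so this is only an approximation.
def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> tuple[float, float]:
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value

z, p = two_prop_z(0.35, 339, 0.41, 279)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ~ -1.53, p ~ 0.13 with these rounded inputs
```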

Mode Comparison. Because the ABS design incorporated mixed modes (phone and web), both the coverage and the response were also compared by mode. As shown in Tables 4 and 5, mode effects are suggested for some coverage dimensions and some response variables but not others.

Table 4  Coverage comparisons × mode: ethnicity, race, income, education, and absence of landline.

Coverage                     ABS Web    ABS Phone
Hispanic Ethnicity           5%         5%
White Race                   92%        92%
Household Income $80,000+    45%        37%
Household Income <$25,000    7%         14%
≤ HS Education               10%        25%
Absence of Landline          14%        3%

Table 5  Survey response comparisons × mode.

Variable                                                            ABS Web    ABS Phone
Health Status (Excellent)                                           67%        64%
Chronic Condition (Yes)                                             11%        11%
Mean Hours of TV/Video per Day                                      1.7        1.6
Does Child Need/Use More Medical Care than Others? (Yes)            10%        16%
Does Child Have Doctor/Nurse? (Yes)                                 91%        93%
Any Time the Child Could Not Get Care in the Last 12 Months? (Yes)  3%         2%
Last Preventive Care Visit within 12 Months (Yes)                   79%        88%
Prescription Medication Needed in the Last 12 Months (Yes)          42%        40%

Those in the ABS phone group were more likely than those completing on the web to report that their child needed or used more medical care than other children and were much more likely to report that the child had received a preventive care check-up or immunizations in the last 12 months.

Conclusions

Although it required more financial resources and time, ABS proved to be a workable alternative for a statewide survey. Moreover, ABS also proved feasible for a study targeting only a subset of households; in this case, the target group was estimated at about 25% of all households. Screening via contact card, web, and phone did not result in prohibitive costs given the benefit of decreased coverage error. The increase in time required for data collection is a drawback that must be considered when choosing a sampling approach: if data must be collected quickly, ABS may be problematic.

Mode comparisons yielded comparable estimates overall. Because there was no random assignment to mode in this study, it was impossible to fully separate mode effects from population selection effects, and no attempt was made to systematically evaluate the mixed-mode effect (e.g., Vannieuwenhuyze, Loosveldt, and Molenberghs 2010). However, because there were notable differences in income and education between the web and phone respondents, it is likely that at least some of the contrast evident in certain items was a function of sampling error (respondent selection of mode) rather than measurement error (mode) per se.

In summary, the ABS design is a workable approach for modest statewide projects and can be used successfully even for projects targeting a subset of the population. ABS sampling yielded a benefit in the form of reduced coverage error. However, the increased costs incurred by the approach included materials, staff wages/salaries, and additional time in the field. Importantly, the addition of one or more modes within the ABS context introduces another layer of complexity that may impart new measurement error in the form of mode effects. Although not unique to ABS, potential mode effects must be a consideration when survey design alternatives are weighed for each project. Future investigations should examine the feasibility of ABS for other subpopulations with larger sample sizes to provide increased power for comparisons.
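To make the power point concrete, here is a standard two-proportion sample-size sketch; the 35% vs. 41% target values are borrowed from Table 3 purely for illustration:

```python
from math import ceil
from statistics import NormalDist

# Sketch: per-group n needed to detect a difference between two proportions
# with a two-sided test at the given alpha and power. The example gap
# (35% vs. 41%) is taken from Table 3 for illustration only.
def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

print(n_per_group(0.35, 0.41))  # ~1024 per group -- well above this pilot's ~300
```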

References

Barron, M. 2009. “Multi-Mode Surveys Using Address-Based Sampling: The Design of the REACH U.S. Risk Factor Survey.” In The American Statistical Association [CD-ROM]. Alexandria, VA: American Statistical Association. http://www.amstat.org/Sections/Srms/Proceedings/y2009/Files/400055.pdf.
Guterbock, T.M., J. Ellis, A. Diop, K. Le, and J.L. Holmes. 2008. “Who Needs RDD: Combining Directory Listings with Cell Phone Exchanges for an Alternative Sampling Frame.” In The Annual Meetings of the American Association for Public Opinion Research. New Orleans.
Iannacchione, V., J. Staab, and D. Redden. 2003. “Evaluating the Use of Residential Mailing Addresses in a Metropolitan Household Survey.” Public Opinion Quarterly 67: 202–10.
Lambert, D., G. Langer, and M. McMenemy. 2010. “Cell-Phone Sampling: An Alternative Approach.” In The Annual Conference of the American Association for Public Opinion Research. Chicago, IL.
Link, M., M. Battaglia, M. Frankel, L. Osborn, and A. Mokdad. 2005. “Address-Based versus Random-Digit Dial Sampling: Comparison of Data Quality from BRFSS Mail and Telephone Surveys.” In Proceedings of the 2005 Federal Committee on Statistical Methodology Research Conference [CD-ROM]. Arlington, Virginia: Federal Committee on Statistical Methodology.
Link, M. W., M. P. Battaglia, M. R. Frankel, L. Osborn, and A. H. Mokdad. 2008. “A Comparison of Address-Based Sampling (ABS) Versus Random-Digit Dialing (RDD) for General Population Surveys.” Public Opinion Quarterly 72 (1): 6–27. https://doi.org/10.1093/poq/nfn003.
O’Muircheartaigh, C., S. Eckman, N. English, J. Lepkowski, and S. Heeringa. 2005. “Comparison of Traditional Listings and USPS Address Database as a Frame for National Area Probability Samples.” In The American Association for Public Opinion Research Conference. Miami Beach, FL.
O’Muircheartaigh, C., S. Eckman, and C. Weiss. 2002. “Traditional and Enhanced Field Listing for Probability Sampling.” In Proceedings of the American Statistical Association [CD-ROM]. Alexandria, VA: American Statistical Association.
Vannieuwenhuyze, J., G. Loosveldt, and G. Molenberghs. 2010. “A Method for Evaluating Mode Effects in Mixed-Mode Surveys.” Public Opinion Quarterly 74 (5): 1027–45.
