Survey Practice
ISSN 2168-0094
Articles
January 20, 2026 EDT

Balancing Cost and Response in Longitudinal Surveys: Evidence from a Choice+ Mode Experiment

Erin K. Dursa, Ph.D., MPH, April Fales, MS, Hanna Popick, Ph.D., Joseph Gasper, Ph.D., Wendy VanDeKerckhove, MA, Debra Wright, Ph.D., Aaron Schneiderman, Ph.D., MPH, RN, Michele Madden, MPH
Keywords: Veterans, health conditions, survey recruitment, mixed mode, cost, longitudinal studies, web push, mode choice
CC BY-NC-ND 4.0 • https://doi.org/10.29115/SP-2025-0019
Dursa, Erin K., April Fales, Hanna Popick, Joseph Gasper, Wendy VanDeKerckhove, Debra Wright, Aaron Schneiderman, and Michele Madden. 2026. “Balancing Cost and Response in Longitudinal Surveys: Evidence from a Choice+ Mode Experiment.” Survey Practice 20 (January). https://doi.org/10.29115/SP-2025-0019.

Abstract

In longitudinal studies, retaining participants over time is necessary to ensure sufficient sample sizes for analyses and reduce the potential for bias; however, efforts to improve response rates can increase the cost of data collection substantially. To explore methods that would best balance data quality and cost for a longitudinal study of veterans deployed during the Gulf War era, we conducted an experiment comparing two multimode designs that offered a higher incentive for completing the survey by web. Veterans were randomly assigned to receive one of two protocols: web offered initially with paper and the bonus web incentive introduced later (sequential Choice+), or both web and paper offered at the same time, with the bonus web incentive offered at the beginning (concurrent Choice+). Both protocols offered computer-assisted telephone interviews (CATI) to nonrespondents. We examined the impact of each approach on response rates, sample representativeness, and cost. The response rate for the concurrent Choice+ group was 2.8 percentage points above that of the sequential Choice+ group (47.5 versus 44.7 percent); however, there were no differences in the characteristics of respondents in each group compared with the eligible sample, or between respondents in each group on key survey items, indicating that the lower response rate did not bias estimates. The cost to implement the concurrent Choice+ design was substantially higher. These findings suggest that the less costly sequential Choice+ approach may provide a better tradeoff between quality and cost, particularly for longitudinal studies seeking cost efficiencies. Future research may benefit from exploring varying amounts for the web bonus and how a sequential Choice+ protocol could be used to increase response rates for multimode surveys that include web and CATI but not paper.

Background

Longitudinal studies offer a unique opportunity to study the long-term impacts of behaviors, experiences, and interventions on important outcomes. Retaining study participants over time is necessary to ensure sufficient sample sizes and reduce the potential for bias; however, efforts to improve response rates can substantially increase data collection costs. Understanding how to balance collecting high quality, reliable data with cost is critical to the sustainability of such studies. This paper describes a mixed-mode experiment comparing two web/mail designs on response and cost for a longitudinal study of 1990-1991 Gulf War-era veterans, focusing on those who had historically low response.

A variety of strategies have been tested to yield higher response rates while maximizing cost-effectiveness. Offering multiple modes, varying when they are introduced, and using incentives to “push” sample members to complete in specific modes hold promise, though results are mixed. While early research suggested that giving people a choice to respond by web or paper lowered response rates (Dillman, Smyth, and Christian 2014; Gentry and Good 2008; Griffin, Fischer, and Morgan 2001; Grigorian and Hoffer 2008; Medway and Fulton 2012; De Leeuw 2018; Millar and Dillman 2011; Smyth et al. 2010), more recent studies suggest offering modes concurrently yields similar or even higher response rates (Bucks, Couper, and Fulford 2020; Jackson, Medway, and Megra 2023; Medway et al. 2022). Longitudinal surveys can be quite costly, largely due to the upfront expense of tracking people who move (Joshi 2016; Lynn and Lugtig 2017). For this reason, it is important to find efficiencies in survey administration that reduce costs and sustain the study long-term. Cost during data collection is largely driven by mode selection and can be substantially lower if more respondents complete by web than by paper (reducing the cost of mailings and of processing paper surveys). Offering “Choice+,” in which study participants are given the option to complete the survey on the web or by paper but are offered a bonus incentive to complete the survey on the web, can result in higher web response and lower cost-per-complete with a positive or neutral impact on response rates (Biemer et al. 2018; Lewis, Freedner, and Looby 2022).

The Gulf War Era Cohort Study (GWECS), sponsored by the U.S. Department of Veterans Affairs (VA), is the largest and longest-running longitudinal cohort study of 1990-1991 Gulf War veterans. It has contributed much of what is known about the health effects of 1990-1991 Gulf War deployment, producing more than 30 peer-reviewed publications. By reinterviewing 30,000 veterans approximately every 10 years, the study provides critical information on chronic medical conditions, mental health conditions, functional impairment, and healthcare utilization. These findings are used to inform healthcare and benefits policy for Gulf War veterans, making the maintenance of sufficient sample sizes for longitudinal analyses essential. Historically, paper has been the predominant mode, supplemented with computer-assisted telephone follow-up. To address declining response rates, a web mode was added in the third wave (2012) and proved effective. However, further improvements are needed to maintain response in a cost-efficient manner.

To further explore ways to maintain or improve response for the GWECS, an experiment was embedded into the fourth wave (conducted in 2024) to evaluate the impact of offering study participants Choice+. Two Choice+ designs were compared, one in which web was offered initially with paper introduced later (sequential), and one in which both web and paper were offered at the same time (concurrent) to determine which would result in higher response. Each design was also evaluated for its effect on sample representativeness and associated costs.

Data and Methods

Sample

The full sample for Wave 4 of the GWECS comprised 30,000 veterans: 15,000 who were deployed to the Persian Gulf between 1990 and 1991 (Gulf War veterans) and 15,000 who served elsewhere during the same period (Gulf Era veterans). These veterans were sampled from the Department of Defense’s Defense Manpower Data Center (DMDC), with representation from each branch of service (Air Force, Army, Marines, Navy). Women, National Guard members, and reservists were oversampled (Kang et al. 2009) to enhance subgroup analyses. Of the original 30,000 in the cohort, a total of 26,580 living veterans were invited to participate in Wave 4.

The experiment included 15,135 veterans who participated in only one or two of the previous three waves. These veterans were randomly assigned to either the concurrent (n = 7,567) or the sequential Choice+ group (n = 7,568). This subset was selected on the premise that partial prior participation, coupled with targeted outreach, might increase their likelihood of engaging in Wave 4. A comparison of this group with the eligible sample revealed some statistically significant differences in sample characteristics; however, the differences were small (i.e., no more than one percentage point).

Experimental Design and Study Protocol

The experiment was designed to compare response rates and costs between the concurrent Choice+ and sequential Choice+ designs. Veterans in the sample were randomly assigned to two groups with equal probability. The protocols, as shown in Figure 1, differed only for the first mailing: the sequential Choice+ group received a letter with instructions for completing the survey on the web and a $2 prepaid cash incentive (a paper survey was not provided until the second full mailing). This group was offered a $20 postpaid incentive for completing the survey on the web. The concurrent Choice+ group received the same instructions for completing the survey on the web and a paper copy of the survey in the initial mailing, along with a postage-paid return envelope and a $2 prepaid cash incentive. This group was offered a $20 postpaid incentive for completing the survey on paper, and a $20 bonus incentive ($40 total) for completing the survey on the web. Because the web bonus was offered from the beginning of data collection for the concurrent Choice+ group and when the paper survey was introduced for the sequential Choice+ group, the experiment is not a strict test of concurrent and sequential designs. Rather, it is a test of a concurrent design that offered a web bonus versus a sequential design that offered a web bonus when the paper survey was introduced. Veterans in both groups who had not responded by Week 10 were contacted by telephone. They were offered a $20 postpaid incentive for completing the survey via computer-assisted telephone interviewing (CATI).
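For reference, the two protocols can be summarized compactly as data. The sketch below is a minimal Python summary assuming the timing stated in the text (initial mailing in Week 1, the sequential group's paper mailing and web bonus at Week 5, CATI follow-up from Week 10) and assuming the sequential group's paper incentive matched the concurrent group's $20; the structure and names are illustrative, not taken from the study's materials.

```python
# Illustrative summary of the two Choice+ protocols (amounts in US dollars).
# Week numbers follow the text; the sequential arm's $20 paper incentive is
# an assumption inferred from the concurrent arm's offer.
protocols = {
    "sequential_choice_plus": {
        1: {"modes": ["web"], "prepaid": 2, "web_postpaid": 20},
        5: {"modes": ["web", "paper"], "paper_postpaid": 20,
            "web_postpaid": 40},  # $20 base + $20 web bonus introduced here
        10: {"modes": ["cati"], "cati_postpaid": 20},
    },
    "concurrent_choice_plus": {
        1: {"modes": ["web", "paper"], "prepaid": 2,
            "paper_postpaid": 20, "web_postpaid": 40},  # web bonus from start
        10: {"modes": ["cati"], "cati_postpaid": 20},
    },
}
```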

Analysis Plan and Statistical Methods

Response rates. To determine whether response rates differed significantly between the groups, unweighted response rates were compared both cumulatively at the end of data collection and for each week. Response rates were calculated using the American Association for Public Opinion Research’s RR6.[1] A chi-square test was used to test for significant differences in the unweighted response rates at the end of data collection.
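As a concrete illustration, the sketch below computes RR6 from its components (see footnote 1) and a chi-square test on final response. It is a minimal sketch, not the study's code: the complete counts come from Table 1, the remaining disposition handling is assumed, and the published rates also reflect eligibility adjustments, so the numbers will differ slightly.

```python
# Minimal sketch of the response-rate comparison; not the study's code.
# Counts of completes are from Table 1; everything else is assumed.
from scipy.stats import chi2_contingency

def rr6(I, P, R, NC, O):
    """AAPOR RR6 = (I + P) / ((I + P) + (R + NC + O))."""
    return (I + P) / ((I + P) + (R + NC + O))

# 2x2 table of respondents vs. nonrespondents by experimental group,
# treating every nonrespondent as eligible (as RR6 assumes here).
concurrent = [3583, 7567 - 3583]
sequential = [3376, 7568 - 3376]

chi2, p, dof, expected = chi2_contingency([concurrent, sequential])
print(f"chi-square = {chi2:.1f}, p = {p:.4f}")
```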

Nonresponse bias. Respondents in both groups were compared with eligible members of the sampling frame on basic demographics (sex, age, and marital status) and military characteristics (deployment status, military rank, branch, and type of service) at the time of the Gulf War to determine if respondents were over- or under-represented in each design. Differences between respondents and eligible members of the sampling frame were tested using t-tests.

Survey estimates of socio-economic factors and health conditions. To determine whether the concurrent Choice+ or sequential Choice+ designs resulted in significantly different survey estimates, several key estimates of particular interest to VA health researchers were examined. Estimates of self-reported health conditions were compared across groups, including indicators of general health, chronic medical conditions, mental health conditions, alcohol and drug dependence, and cigarette smoking. All estimates were weighted using base weights reflecting selection probabilities,[2] with variances computed using the jackknife replication (JKn) method. Differences between groups were tested using the Rao-Scott chi-square test. Survey estimates among respondents were also compared on two socio-economic factors: educational attainment (high school or below, some college or associate’s degree, bachelor’s degree, graduate or professional degree) and household income.
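To make the variance machinery concrete, here is a minimal sketch of a delete-one jackknife variance for a weighted proportion, on toy data. The study's JKn method forms replicates within design strata, and the Rao-Scott adjusted chi-square would normally come from survey software; this hand-rolled version only illustrates the replication idea.

```python
# Hand-rolled delete-one jackknife for a weighted proportion, on toy data.
# The study's JKn method deletes units within design strata and uses base
# weights from selection probabilities; this sketch only shows the idea.
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)      # 0/1 indicator (e.g., reports condition)
w = rng.uniform(0.5, 2.0, size=200)   # stand-in base weights

def weighted_prop(y, w):
    """Weighted proportion: sum(w*y) / sum(w)."""
    return np.sum(w * y) / np.sum(w)

theta = weighted_prop(y, w)
n = len(y)

# Re-estimate with each unit deleted in turn.
reps = np.array([weighted_prop(np.delete(y, i), np.delete(w, i))
                 for i in range(n)])
var_jk = (n - 1) / n * np.sum((reps - theta) ** 2)

print(f"estimate = {theta:.3f}, jackknife s.e. = {np.sqrt(var_jk):.4f}")
```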

Survey cost. The cost of each protocol was evaluated by calculating the cost-per-complete. Costs included printing, assembling, and mailing surveys (including outbound and return postage); labor associated with scanning paper surveys; interviewer labor associated with CATI nonresponse calls; and the cost of incentives. Total costs for each protocol were divided by the number of respondents to derive the cost per completed survey.

Many data collection costs were fixed and applicable to both groups. The cost analysis therefore focused on direct costs specific to recruiting participants and collecting data. For example, the time to develop and program each survey mode was excluded (since this cost was the same across groups), but the time needed to scan and verify the data captured from each returned paper survey was included (since the number of returns varied by condition).
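As a worked illustration of the cost-per-complete calculation, the sketch below sums direct, condition-specific cost categories and divides by the number of completes. The dollar figures are hypothetical placeholders; only the category list and the complete counts mirror the paper.

```python
# Hypothetical cost-per-complete calculation; dollar amounts are placeholders.
# Only the cost categories and complete counts come from the paper (Table 1).
categories = ["mailing materials and labor", "postage",
              "processing paper surveys", "CATI interviews", "incentives"]

costs = {
    "concurrent": dict(zip(categories, [55_000, 48_000, 45_000, 27_000, 150_000])),
    "sequential": dict(zip(categories, [50_000, 40_000, 30_000, 30_000, 125_000])),
}
completes = {"concurrent": 3_583, "sequential": 3_376}

cpc = {g: sum(costs[g].values()) / completes[g] for g in costs}
print({g: round(v, 2) for g, v in cpc.items()})
print(f"ratio (concurrent / sequential) = {cpc['concurrent'] / cpc['sequential']:.2f}")
```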

Results

Response Rate Comparison

The final response rate for the concurrent Choice+ group was 2.8 percentage points above that of the sequential Choice+ group (47.5 versus 44.7 percent, p < 0.01). Although response was initially higher in the sequential group during the first two weeks of data collection, the arrival of paper surveys from the concurrent group shifted the trend; from that point forward, response rates in the concurrent group remained higher than those in the sequential group for the remainder of data collection (see Figure 2).

The majority of veterans chose to respond to the survey via web, regardless of experimental group (see Table 1). However, the percentage of respondents choosing to respond by web was 10.4 percentage points higher in the sequential Choice+ group than in the concurrent Choice+ group.

The concurrent Choice+ group was offered the bonus incentive for web from Week 1, while the sequential Choice+ group was not offered the bonus until Week 5. Still, the sequential Choice+ group maintained a higher web response rate throughout data collection: consistently between one and three percentage points higher than the concurrent Choice+ group. In Week 3, we began receiving paper surveys from the concurrent Choice+ group. Because the sequential group did not have an opportunity to respond by paper until the Week 5 mailing, paper surveys were not received from that group until Week 7. The concurrent group’s paper response rate remained higher than the sequential group’s by about six to seven percentage points throughout data collection.

Table 1. Percent responding by mode
Mode Concurrent Choice+ Sequential Choice+
Web 63.2 73.6
Paper 30.6 19.4
CATI 6.1 7.0
Total respondents 3,583 3,376

Nonresponse Bias

Comparisons of respondents in both groups to eligible members in the sampling frame indicate that patterns of nonresponse were similar for both the concurrent Choice+ and sequential Choice+ groups (see Table 2).

Table 2. Characteristics of respondents compared with sample by experimental group
Characteristic Concurrent Choice+ Sequential Choice+
 % in eligible sample % in responding sample % in eligible sample % in responding sample
Sex
Male 89.4 (0.272) 90.5** (0.401) 89.0 (0.352) 89.9* (0.423)
Female 10.6 (0.272) 9.5** (0.401) 11.0 (0.352) 10.1* (0.423)
Deployment status
Deployed 46.7 (0.551) 49.6** (0.935) 47.9 (0.747) 50.1* (1.202)
Non-Deployed 53.3 (0.551) 50.4** (0.935) 52.1 (0.747) 49.9* (1.202)
Age in 1991a
17–25 46.0 (0.843) 40.9** (1.182) 46.0 (0.822) 39.8** (1.165)
26–32 30.6 (0.729) 30.5 (1.079) 30.1 (0.673) 31.2 (1.049)
33–39 16.0 (0.590) 19.5** (0.865) 15.9 (0.577) 19.0** (0.866)
40 and older 7.4 (0.333) 9.1** (0.547) 8.0 (0.377) 10.0** (0.591)
Rank in 1991
Enlisted 87.8 (0.539) 85.1** (0.799) 86.7 (0.528) 84.1** (0.865)
Officer 11.1 (0.517) 13.3** (0.766) 12.2 (0.515) 14.4** (0.817)
Warrant 1.1 (0.153) 1.6* (0.262) 1.1 (0.186) 1.5 (0.063)
Race/Ethnicitya
Black 22.6 (0.636) 22.2 (1.011) 20.2 (0.594) 19.2 (0.898)
Hispanic 5.0 (0.333) 4.8 (0.448) 4.8 (0.353) 4.2 (0.469)
Other 4.5 (0.298) 3.6* (0.373) 4.4 (0.284) 4.3 (0.468)
White 68.0 (0.740) 69.3 (1.016) 70.7 (0.730) 72.4* (1.066)
Branch
Air Force 11.9 (0.509) 12.7 (0.622) 12.7 (0.643) 14.2* (0.597)
Army 52.1 (0.548) 53.2 (0.938) 50.6 (0.746) 50.8 (1.209)
Marines 15.3 (0.391) 14.7 (0.679) 15.7 (0.684) 14.9 (0.892)
Navy 20.7 (0.420) 19.5* (0.780) 21.0 (0.497) 20.1 (1.182)
Type of service
Active 79.2 (0.322) 79.6 (0.526) 78.3 (0.388) 79.0 (0.648)
Guard 7.5 (0.155) 7.6 (0.227) 7.8 (0.188) 7.7 (0.299)
Reserve 13.3 (0.268) 12.8 (0.441) 13.9 (0.288) 13.4 (0.513)
Marital status in 1991a
Married 53.4 (0.682) 58.2** (0.963) 52.6 (0.707) 58.4** (1.015)
Other 3.5 (0.252) 3.6 (0.354) 3.6 (0.298) 3.6 (0.373)
Single 43.1 (0.664) 38.2** (1.006) 43.8 (0.690) 38.0** (1.034)

Note: Values may sum to more than 100 percent because of rounding.
a = Excludes Veterans with missing/unknown values
s.e. in parentheses
p-values are for the comparison of the eligible and responding samples.
** = p < 0.001
* = p < 0.05

Survey Estimates of Socio-Economic Factors and Health Conditions

We compared socio-economic factors between respondents in the concurrent Choice+ and sequential Choice+ designs using chi-square tests. Respondents did not differ on educational attainment or household income (see Table 3).

Table 3. Comparison of key survey socio-economic variables
Survey variable Concurrent Choice+ Sequential Choice+
Percentage
Educational attainment
1: High school or below 17.1 15.9
2: Some college or associate’s degree 42.8 44.1
3: Bachelor’s degree 19.3 19.6
4: Graduate or professional degree 20.9 20.4
Household income
1: $0–$34,999 11.3 10.2
2: $35,000–$49,999 9.4 10.6
3: $50,000–$74,999 21.2 20.3
4: $75,000–$99,999 18.4 17.5
5: $100,000+ 39.7 41.5

Note: Values may sum to more than 100 percent because of rounding.

Chi-square tests on key survey health estimates indicated no significant differences in the percentage distributions between the concurrent Choice+ and sequential Choice+ respondents for most estimates (see Table 4). The distribution of alcohol use quantity differed significantly, but the absolute differences were relatively small.

Table 4. Comparisons of key survey health estimates
Survey variable Concurrent Choice+ Sequential Choice+
Percentage
General health
Excellent 3.1 3.4
Very good 17.6 17.4
Good 37.7 38.5
Fair 33.6 32.4
Poor 7.9 8.3
PTSD
Yes 31.6 30.9
No 68.4 69.1
Gulf War Illness
Yes 12.4 11.9
No 87.6 88.1
Bipolar disorder or manic depression
Yes 6.6 6.8
No 93.4 93.2
Traumatic brain injury
Yes 5.6 6.4
No 94.4 93.6
COPD
Yes 11.9 11.9
No 88.1 88.1
Hypertension
Yes 59.5 60.5
No 40.5 39.5
Sleep apnea
Yes 45.7 46.5
No 54.3 53.5
Alcohol use–frequency
Never 26.7 25.7
Monthly or less 24.9 25.1
Two to four times a month 17.5 17.2
Two to three times a week 13.9 14.9
Four or more times a week 17.1 17.1
Alcohol use–quantity (p < .05)
None 29.5 28.6
1 or 2 drinks 44.4 45.9
3 or 4 drinks 15.7 16.4
5 or 6 drinks 6.9 4.9
7 or more drinks 3.6 4.2
Alcohol or drug dependence
Yes 11.0 11.3
No 89.0 88.7
Smoking–frequency
0 days smoked 89.3 88.9
1 to 29 days smoked 2.1 2.6
30 days smoked 8.6 8.5
Smoking–quantity
Did not smoke in the past 30 days 89.6 89.0
1 to 10 cigarettes per day 5.0 5.3
11 or more cigarettes per day 5.4 5.7

Note: Values may not sum to 100 percent because of rounding.

Comparison of Costs

The cost-per-complete for the concurrent Choice+ protocol was 16 percent higher than for the sequential Choice+ protocol. The largest differences were in the cost of processing paper surveys and incentives. Processing paper surveys was more expensive per complete for the concurrent Choice+ group because veterans in this group returned a higher proportion of paper surveys. Incentive costs were higher for the concurrent Choice+ group because this group was offered $40 to complete the survey by web from the beginning of data collection (Week 1), while the sequential Choice+ group was offered only $20 to complete the survey by web until Week 5, when the additional $20 bonus was introduced. By that point, however, many had already completed the survey via web for the lower incentive amount. Even with incentives excluded, the cost-per-complete was still higher for the concurrent Choice+ group, although the difference was smaller (a relative increase of 9 percent). Table 5 shows the ratio of cost-per-complete in the two protocols for each of the data collection categories as well as the total.

Table 5. Relative cost-per-complete of concurrent Choice+ compared with sequential Choice+ protocol
Cost category Ratio of costs (concurrent / sequential)
Mailing materials and labor 1.1
Postage 1.2
Processing paper surveys 1.5
CATI interviews 0.9
Incentives 1.2
Total 1.2

Discussion and Conclusions

Overall, the concurrent Choice+ design produced a modestly higher response rate than the sequential Choice+ design. However, there were no detectable differences in the characteristics of respondents in each group compared with the eligible sample, or between respondents in each group on key survey items, suggesting the lower response did not introduce bias into the estimates. At the same time, the cost-per-complete of the concurrent Choice+ design was substantially higher than that of the sequential Choice+ design, with incentives and the cost of processing paper surveys accounting for the largest increase. This suggests that the less costly sequential Choice+ approach may be preferable for subsequent waves.

In addition to mode order, the difference in when the bonus incentive was offered could explain the difference in response rates. The concurrent Choice+ group was offered the web bonus from the beginning of data collection, whereas the sequential Choice+ group was offered the web bonus only after the paper survey was introduced. Offering the web bonus from the beginning may have contributed to the concurrent group's higher response rate. Had the web bonus been available from the beginning of data collection in the sequential Choice+ group, the already modest difference in response rates would likely have been smaller still. This further underscores the conclusion that the less costly sequential Choice+ design is preferable, given the lower cost of processing paper surveys and the lack of evidence of bias in the estimates.

Declining response to surveys and changes in respondent behavior require continual exploration and testing of designs that will engage participants and reduce burden without sacrificing the quality of the data collected. In longitudinal studies, respondent preferences may change over time, making it important to continue evaluating the trade-offs of various approaches. Veterans were nearly three times more likely to respond by web in Wave 4 of this study than in the prior wave conducted in 2012. Some of this increase may be due to the increasing comfort with using the internet that veterans have gained over the last decade, making web a more viable option.

Future research may benefit from exploring different elements of the sequential Choice+ design itself, such as varying amounts for the web bonus and other incentives, to determine the best balance of cost and response. Some surveys may be unable to include a paper option because of complex skip patterns or sensitive questions. Research should consider how a sequential Choice+ protocol could be used to increase response rates for multimode surveys that include web and CATI but not paper. This additional research may allow researchers to determine which factors can be adjusted to best balance encouraging web participation and maintaining data quality with cost.


Corresponding author contact information

Erin K. Dursa, PhD, MPH
Erin.Dursa2@va.gov
810 Vermont Ave NW
Mailstop 12HOME
Washington, DC 20420


  1. \(RR6 = \frac{I + P}{(I + P) + (R + NC + O)}\), where I = complete interview, P = partial interview, R = refusal and break-off, NC = non-contact, and O = other eligible nonresponse (AAPOR 2023). RR6 assumes that all veterans who could not be contacted were eligible and includes partially completed questionnaires. For this study, questionnaires were considered complete if 80% or more of the questions were answered and partially complete if 50% to 79% of the questions were answered.

  2. We repeated the comparison of survey estimates using final weights. The conclusions were the same and so are not presented here.

Submitted: October 01, 2025 EDT

Accepted: November 20, 2025 EDT

References

American Association for Public Opinion Research (AAPOR). 2023. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. 10th ed. https://aapor.org/wp-content/uploads/2024/03/Standards-Definitions-10th-edition.pdf.
Biemer, Paul P., Joe Murphy, Stephanie Zimmer, Chip Berry, Grace Deng, and Katie Lewis. 2018. “Using Bonus Monetary Incentives to Encourage Web Response in Mixed-Mode Household Surveys.” Journal of Survey Statistics and Methodology 6 (2): 240–61. https://doi.org/10.1093/jssam/smx015.
Bucks, Brian, Mick P. Couper, and Scott L. Fulford. 2020. “A Mixed-Mode and Incentive Experiment Using Administrative Data.” Journal of Survey Statistics and Methodology 8 (2): 352–69. https://doi.org/10.1093/jssam/smz005.
De Leeuw, Edith D. 2018. “Mixed-Mode: Past, Present, and Future.” Survey Research Methods 12 (2): 75–89. https://doi.org/10.18148/srm/2018.v12i2.7402.
Dillman, Don A., Jolene D. Smyth, and Leah Melani Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. Hoboken, NJ: John Wiley & Sons, Inc. https://doi.org/10.1002/9781394260645.
Gentry, Robin, and Cindy Good. 2008. “Offering Respondents a Choice of Survey Mode: Use Patterns of an Internet Response Option in a Mail Survey.” Paper presented at the American Association for Public Opinion Research Conference, New Orleans, LA, May 17.
Griffin, Deborah H., Donald P. Fischer, and Michael T. Morgan. 2001. “Testing an Internet Response Option for the American Community Survey.” Paper presented at the American Association for Public Opinion Research Conference, Montreal, QC, Canada, May 18.
Grigorian, Karen, and Tom B. Hoffer. 2008. “2006 Survey of Doctorate Recipients Mode Assignment Analysis Report.” National Science Foundation.
Jackson, Michael T., Rebecca L. Medway, and Maahi W. Megra. 2023. “Can Appended Auxiliary Data Be Used to Tailor the Offered Response Mode in Cross-Sectional Studies? Evidence from an Address-Based Sample.” Journal of Survey Statistics and Methodology 11 (1): 47–74. https://doi.org/10.1093/jssam/smab023.
Joshi, Heather. 2016. “Why Do We Need Longitudinal Survey Data?” IZA World of Labor 308. https://doi.org/10.15185/izawol.308.
Kang, Han K., Bo Li, Clare M. Mahan, Seth A. Eisen, and Charles C. Engel. 2009. “Health of US Veterans of 1991 Gulf War: A Follow-up Survey in 10 Years.” Journal of Occupational and Environmental Medicine 51 (4): 401–10. https://doi.org/10.1097/JOM.0b013e3181a2feeb.
Lewis, Taylor, Naomi Freedner, and Charlotte Looby. 2022. “An Experiment Comparing Concurrent and Sequential Choice+ Mixed-Mode Data Collection Protocols in a Self-Administered Health Survey.” Survey Methods: Insights from the Field. https://doi.org/10.13094/SMIF-2022-00004.
Lynn, Peter, and Peter Lugtig. 2017. “Total Survey Error for Longitudinal Surveys.” In Total Survey Error in Practice, edited by Paul P. Biemer, Edith de Leeuw, Stephanie Eckman, Brad Edwards, Frauke Kreuter, Lars E. Lyberg, N. Clyde Tucker, and Brady T. West. Hoboken, NJ: John Wiley & Sons, Inc. https://doi.org/10.1002/9781119041702.
Medway, Rebecca L., and Jenna Fulton. 2012. “When More Gets You Less: A Meta-Analysis of the Effect of Concurrent Web Options on Mail Survey Response Rates.” Public Opinion Quarterly 76 (4): 733–46. https://doi.org/10.1093/poq/nfs047.
Medway, Rebecca L., Maahi W. Megra, Michael Jackson, Zoe Padgett, and Danielle Battle. 2022. “National Household Education Surveys Program of 2019: Methodological Experiments Report.” NCES 2022001. National Center for Education Statistics.
Millar, Morgan M., and Don A. Dillman. 2011. “Improving Response to Web and Mixed-Mode Surveys.” Public Opinion Quarterly 75 (2): 249–69. https://doi.org/10.1093/poq/nfr003.
Smyth, Jolene D., Don A. Dillman, Leah Melani Christian, and Allison C. O’Neill. 2010. “Using the Internet to Survey Small Towns and Communities: Limitations and Possibilities in the Early 21st Century.” American Behavioral Scientist 53 (9): 1423–48. https://doi.org/10.1177/0002764210361695.
