When conducting mail and mail-back surveys, market research firms need to estimate response rates in advance in order to predict the usable sample size expected from a given number of mailed questionnaires, and hence to budget their study. We attempted to find an efficient and objective way of rating the response burden of a given survey in advance, and to use this rating to predict its expected response rate.
Over the past decades, a vast body of literature has discussed response rates, the factors influencing them, and their various impacts on survey quality. While the literature on survey methods (see Richardson et al. 1995 or Dillman 2000 for relevant textbooks) discusses response burden, there seems to be very little literature on its ex-ante assessment or on predicting response rates from it.
Heberlein and Baumgartner (1978) present several factors found to influence response rates to mailed questionnaires. They find that response burden, approximated by the number of pages (questionnaire length), has a significant influence on the response rate (a similar approach is used in Bruvold and Comer 1988). However, they do not differentiate the response burden further by accounting for the complexity of the questions posed. They also find that other factors, such as the saliency of the survey content and incentives given to the respondents, influence the outcome (for a description of the so-called leverage-saliency theory, see Groves et al. 2000).
Leverage-saliency theory, which suggests that nonresponse biases study results if the decision to participate in a survey is influenced by the respondents’ interest in the survey topic, plays a major role in the assessment of such studies (Groves, Presser, and Dipko 2004). However, the degree of this effect, and its influence on the actual response rates that are the subject of discussion here, is difficult to quantify. As the studies used in our meta-analysis all stem from the same research field (transportation), we expect saliency to have influenced them all to the same degree, thus not biasing the results discussed here.
Other meta-analyses of survey response rates include:
- Fox et al. (1988), who explore ways of increasing response rates, among others by reducing questionnaire length and providing the respondents with incentives;
- Church (1993), who also attempts to estimate the effect of incentives;
- Asch et al. (1997), who examine response to mail surveys in the medical field and find differences across disciplines as well as a positive effect of mail and telephone reminders;
- Cook et al. (2000), who examine response rates of Internet-based surveys and find that survey length does not have a significant effect;
- Kaplowitz et al. (2004), who compare response rates of Web-based and mail-based surveys.
None of the abovementioned meta-analyses assesses response burden; at most, they consider questionnaire length as an aggregate variable, without taking the actual complexity of the survey into account. However, it seems clear that the specific effort demanded from the respondents will influence the outcome.
The studies that will be described in the subsequent sections were all conducted at the Institute for Transport Planning and Systems (IVT), ETH Zürich, most involving colleagues of the authors. Thus, we had the opportunity to examine and rate each questionnaire in detail, a benefit that meta-analyses resulting from mere literature reviews evidently do not possess.
A natural experiment
Using a point system for face-to-face interview budgeting of the Zurich-based Gesellschaft für Sozialforschung (Table 1), Ursula Raymann and later the authors rated a series of self-administered surveys (Table 2) of the Institute for Transport Planning and Systems (IVT). As will be shown in this article, the resulting response-burden indicator can be used to quickly infer expected response rates. However, the number of studies in the present meta-analysis is still quite small, and therefore does not yet allow a clear statement on the statistical significance of the relationship between response burden and response rate.
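The rating procedure amounts to an additive point system: each question contributes points according to its type, and the questionnaire's burden score is the sum. The sketch below illustrates this; the point values and question types are hypothetical placeholders, not the actual weights of Table 1:

```python
# Sketch of an additive response-burden rating, in the spirit of the
# GfS point system described above. The point values below are
# HYPOTHETICAL placeholders; the actual weights are those of Table 1.

HYPOTHETICAL_POINTS = {
    "yes_no": 1,           # simple binary question
    "multiple_choice": 2,  # pick one of several options
    "sp_task": 4,          # one stated-preference choice task
    "open_ended": 5,       # free-text answer
}

def burden_score(questionnaire):
    """Sum the point values of all questions in a questionnaire.

    `questionnaire` is a list of (question_type, count) pairs.
    """
    return sum(HYPOTHETICAL_POINTS[qtype] * n for qtype, n in questionnaire)

# A short SP survey versus a long diary-style survey (illustrative only):
short_sp = [("yes_no", 5), ("multiple_choice", 10), ("sp_task", 9)]
long_diary = [("yes_no", 20), ("multiple_choice", 40), ("open_ended", 15)]

print(burden_score(short_sp))    # 5*1 + 10*2 + 9*4 = 61
print(burden_score(long_diary))  # 20*1 + 40*2 + 15*5 = 175
```

An additive score of this kind can be computed from the survey form alone, before fieldwork begins, which is what makes the ex-ante assessment possible.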
The surveys (see Table 2) range from simple and short stated preference (SP)/conjoint surveys, through longer stated response (SR) surveys, to extensive surveys of the respondents’ social networks or moving behaviour. They form a natural experiment, as they were not designed as a survey-methods experiment but arose from the ongoing work of the IVT.
All surveys were sent with cover letters on ETH letterhead and included pre-stamped return envelopes to an ETH address. The name and contact details of the person in charge were given. The forms were printed on good-quality paper, either photocopied or, if customised for a specific respondent, laser-printed. The name of the client or sponsor of the study was given in the cover letter. The surveys are therefore comparable in their social context and benefit from the credibility of the institution as the most prominent academic institution in the country (see http://www.fc.ethz.ch/facts/ir/rankings for a collection of the relevant rankings).
The range of response burden is unusually large, as the sample contains both experimental work, especially that on social networks and mobility biographies, and quasi-commercial work, here the various SP experiments, for which the response rate, and therefore a focus on essential questions, is crucial. In a number of cases, the respondents were recruited as part of a computer-aided telephone survey undertaken for the Swiss Federal Railroads by a local market research firm. This prior recruitment should increase response for equal response burdens, as the respondents have already shown a willingness to participate. In three cases, subsamples of the respondents were reached by phone for a motivation call explaining the purpose of the survey, answering any questions, and stressing its importance to the research projects.

For this small sample of self-administered surveys, there is a very strong linear link between the independent ex-ante assessment of response burden and the response rate (Figure 1). Two trends are visible (in a regression of response rate on response burden, motivation call, and prior recruitment, not reported here due to the small sample size): the response rate declines with the ex-ante estimate of the response burden, and prior recruitment seems equivalent to a motivation call in gaining commitment from the respondents. The authors are not aware of any similar results in the literature.
If these results were confirmed with a much larger set of self-administered surveys, it would be a breakthrough in the planning and design of self-completion surveys. It would allow the designer to trade off detail against response and would substantially improve budgeting.
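If the linear relationship held up, the budgeting step could be as simple as the following sketch: fit a least-squares line to observed (burden, response rate) pairs, then invert it to size the mail-out for a planned survey. Both the fitted data points and the planned survey's score below are hypothetical illustrations, not the actual values of Table 2 or Figure 1:

```python
# Sketch of budgeting a mail-back survey from an ex-ante burden score.
# All numbers are HYPOTHETICAL illustrations; the actual scores and
# rates are those of Table 2 / Figure 1.
import math

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical (burden score, response rate) observations:
burden = [60, 100, 150, 250, 400]
rate = [0.65, 0.55, 0.45, 0.30, 0.15]

a, b = fit_line(burden, rate)  # b < 0: rate declines with burden

def required_mailout(target_returns, planned_burden):
    """Questionnaires to mail for a target number of usable returns,
    given the planned survey's ex-ante burden score."""
    expected_rate = a + b * planned_burden
    return math.ceil(target_returns / expected_rate)

print(required_mailout(500, 200))
```

The inversion makes the trade-off explicit: every additional burden point lowers the expected response rate and therefore raises the number of questionnaires, and hence the budget, needed for the same usable sample.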
In many other surveys, the response burden varies from respondent to respondent, as the number of units to be reported differs between them. Prominent examples are trips in travel-diary surveys, spells of unemployment in labour-market surveys, incidents of sickness, and moves between firms. In such settings, these results could be used to estimate the number of non-reported units, which is crucial. Still, one should not forget that the content of a survey itself has an impact, as shown for example by the differential response rates to different sets of stated preference experiments in a recent Swiss value of travel time savings study (Axhausen et al. 2007), where there were significant differences by task and by the preferred mode of the traveller. Confirmed results would nonetheless allow the designer to formulate expectations.
The results presented here call for replication across fields, countries, and survey organizations; otherwise it will be very difficult to obtain the range of response burdens necessary to see the effect in the first place. This will be more than a meta-analysis, as the ex-ante response burdens will need to be calculated for the first time. We would be happy to undertake this effort if the survey forms and the associated AAPOR response rates were made available to us.
The authors are very grateful for the rating of the first set of surveys by Ursula Raymann, GfS Zürich, and for making the system available to us.