Background
E-mail is the most common invitation mode for Web surveys, because it is cheap, easy to automate and personalize, and provides direct access to the survey (delivery of a unique clickable URL). But this invitation mode raises several concerns, including perceived low legitimacy, the risk of messages being treated as spam, and churn (maintaining e-mail addresses over time) [see Couper (2008) for a discussion of e-mail versus alternative invitation modes]. Moreover, many sample frames do not contain e-mail addresses, and soliciting such addresses in another mode may raise privacy concerns. For this reason, some surveys (e.g., the Health and Retirement Study) do not collect e-mail addresses and instead send mailed invitations to supplemental Web surveys.[1]
The literature on mail versus e-mail invitations to Web surveys is both limited and mixed. Birnholtz et al. (2004) found that a mailed invitation (with an incentive) to faculty yielded a higher response rate than an e-mailed invitation with incentive (40 percent vs. 32 percent), though the difference was not statistically significant. In a survey of faculty, staff, and students, Kaplowitz et al. (2012) tested a postcard versus an e-mail invitation, and found a significantly higher response rate among students for the postcard (22 percent vs. 19 percent), but a significantly lower response rate among faculty (33 percent vs. 40 percent) and no difference for staff (43 percent in each group). Millar and Dillman (2011) compared a mailed and an e-mailed invitation in a student survey; in the no-incentive condition, the letter did not significantly affect response rates (21.2 percent vs. 20.5 percent). In another faculty survey, Dykema et al. (2013) found that a mailed invitation significantly improved response rates over an e-mailed invitation (30.1 percent vs. 19.4 percent), with no incentive in either condition.
In our two previous studies, conducted in 2008 and 2010 among general population samples in Germany (Bandilla et al. 2012; Bosnjak et al. 2013), we found that a mailed letter (pre-notice or invitation) to a Web survey was more effective than e-mail alone (2008: 57 percent vs. 43 percent; 2010: 51 percent vs. 40 percent). Another finding from our 2010 study was that only 45 percent of in-person respondents who reported using the Internet provided their e-mail addresses. For cost reasons, invitations to the follow-up survey were sent by e-mail only, which meant that more than half (55 percent) of eligible Internet users were not invited to the Web survey. This raises the question of whether it is worth trying to solicit e-mail addresses for Web surveys when they do not already appear on the frame. The 2010 Web follow-up survey also excluded those without Internet access, potentially further increasing selection bias.
Given all this, in our 2012 study, we experimentally tested the effect of asking for e-mail addresses in the main survey. We also explored the effect on sample representation of including a mail survey for those without Internet access or as a follow-up to nonrespondents, relative to a Web-only survey. We thus have two research objectives:
- To test the effectiveness of asking (or not) for an e-mail address and then sending a mailed invitation to a Web survey
- To test the effectiveness and representativeness of a mixed-mode procedure comparing Internet users and non-users, using a mailed invitation to a Web survey and a mailed reminder with a paper questionnaire
Method
The study was designed as a follow-up to the 2012 German General Social Survey (ALLBUS).[2] The ALLBUS is based on a random sample of German-speaking adults, with persons randomly selected from the community registers. Data collection is by in-person interviews, using computer-assisted personal interviewing (CAPI). The 2012 ALLBUS achieved a response rate of 37.6 percent.
The Internet access status of all ALLBUS respondents was ascertained. All respondents were asked if they were willing to participate in a follow-up survey. (This is a requirement of the German data protection laws.) Among those with Internet access who reported willingness to do a follow-up survey, a random one-third were asked for their e-mail address, while the remaining two-thirds were not asked this question. This process (outlined in Figure 1) yields four groups of willing respondents:
- Those with Internet access who were not asked for an e-mail address
- Those with Internet access who were asked for an e-mail address and provided it
- Those with Internet access who were asked for an e-mail address and did not provide it
- Those without Internet access
For cost reasons, we could not follow up all willing ALLBUS respondents. We thus drew random subsamples from each of the four groups, in two stages. Our analysis focuses on a random subset of 250 cases from each of the four groups. All four groups were sent a mailed invitation to a Web survey; this was followed by a mailed reminder that included a paper questionnaire. The Web/mail survey included a subset of items from the ALLBUS to compare differences in key measures. For the combined analyses of the change in key variables with the addition of mail, we weighted cases by the inverse of the selection probabilities into these subsets.
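To illustrate this weighting step, the sketch below constructs inverse-probability weights in Python. The per-group totals of willing respondents are hypothetical placeholders (they are not reproduced here); only the subsample size of 250 cases per group is taken from the design.

```python
# Minimal sketch of the inverse-probability weighting described above.
# The group totals N_WILLING are hypothetical placeholders; only the
# subsample size of 250 per group is taken from the study design.
N_WILLING = {"A": 900, "B": 400, "C": 550, "D": 700}  # hypothetical counts
N_SUBSET = 250  # cases drawn per group

# Selection probability into the analysis subset, per group
pi = {g: N_SUBSET / n for g, n in N_WILLING.items()}

# Design weight = inverse of the selection probability
weights = {g: 1.0 / p for g, p in pi.items()}

for g in sorted(weights):
    print(f"group {g}: pi = {pi[g]:.3f}, weight = {weights[g]:.2f}")
```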
Results
Overall, 73 percent of the ALLBUS respondents reported having Internet access (see Figure 1). While 71 percent of ALLBUS respondents expressed willingness to do a follow-up survey, those with Internet access were more willing to do so (85 percent) than those without (62 percent). This may reflect socioeconomic and demographic differences between these two groups. While the overall level of Internet access has increased over prior waves of the ALLBUS (54 percent in 2008 and 66 percent in 2010), the level of willingness to do a follow-up survey has declined somewhat (80 percent of Internet users in 2010 were willing).
Among those with Internet access and randomly assigned to be asked for an e-mail address, 42.4 percent provided one. This is a similar level to that found in 2010, suggesting that fewer than half of willing Internet users in Germany are willing to provide an e-mail address to a survey interviewer. Given this, is it worth asking for e-mail addresses? Are those who decline to provide such information less willing to participate in a follow-up survey? This is the focus of our first research question.
Table 1 shows the online and overall response rates for each of the four groups. Focusing on the first column (online only), we see that those who were asked and provided an e-mail address (group B) responded at a slightly (but not significantly) higher rate than those who were asked and did not provide one (group C), when sent a mailed invitation to a Web survey (27.2 percent vs. 22.0 percent, χ2(1)=1.82, n.s.). However, those not asked for their e-mail address (group A) had a response rate of 19.2 percent to the online survey. This is significantly different from group B (19.2 percent vs. 27.2 percent, χ2(1)=4.49, p=0.034) but not from group C (19.2 percent vs. 22.0 percent, χ2(1)=0.59, n.s.). While the weighted response rate for groups B and C combined (24.2 percent) is higher than that for group A (19.2 percent), this difference is also not significant (χ2(1)=2.56, n.s.). This suggests, contrary to our initial concerns, that there appears to be no harm in asking for e-mail addresses, even if not everyone is willing to provide one. Also note from Table 1 that 3.2 percent of those who reported not having Internet access responded to the online survey (group D).
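The unweighted test statistics above can be reproduced from the reported rates, assuming 250 cases per group (as described in the Method section) and a Pearson chi-square test without continuity correction; a sketch in Python:

```python
from scipy.stats import chi2_contingency

N = 250  # cases per group, per the Method section

def compare_rates(rate1, rate2, n=N):
    """Pearson chi-square test (no continuity correction) comparing two
    response rates, each based on n cases."""
    r1, r2 = round(rate1 * n), round(rate2 * n)
    table = [[r1, n - r1], [r2, n - r2]]
    stat, p, dof, expected = chi2_contingency(table, correction=False)
    return stat, p

print(compare_rates(0.192, 0.272))  # A vs. B: chi2(1) = 4.49, p = 0.034
print(compare_rates(0.192, 0.220))  # A vs. C: chi2(1) = 0.59, n.s.
print(compare_rates(0.272, 0.220))  # B vs. C: chi2(1) = 1.82, n.s.
```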
Our second research question focuses on the value of including a mail option, in contrast to a Web-only survey. While doing so increases costs, it is likely
- to increase response rates, and
- to reduce coverage biases associated with only surveying those with Internet access.
The answer to the first of these is clear from Table 1: the overall weighted response rate increases from 17 percent to 56 percent. In terms of case counts, the number of completed surveys more than tripled, from 179 to 568. Further, offering the mail follow-up also served to reduce the disparities in response rates across the four groups. This is especially noticeable for group D (those without Internet access). Although they made up 27 percent of the ALLBUS population, they comprise, not surprisingly, only 4.1 percent of the total weighted number of online respondents. But once the mail follow-up is offered, this group comprises 21.7 percent of the final weighted set of respondents, more in line with the ALLBUS proportion.
Of course, it is not just the increased response (yielding more cases for analysis) that is important, but whether and to what extent these additional respondents bring the sample more in line with the population than an online-only survey, i.e., reduce the selection bias that may arise through noncoverage or nonresponse. To examine this, we repeated selected questions from the ALLBUS in our Web and mail follow-up survey, permitting us to compare both demographic and attitudinal measures to the ALLBUS “population.” The full set of comparisons is presented in Table 2. We highlight a few of the findings here for illustrative purposes. Note that we do not test for significant differences, given the overlapping nature of the samples. The estimates are weighted to reflect the differential selection probabilities into the four groups, and the standard errors reflect the design, using the pweight command in Stata (StataCorp LP, College Station, TX, USA).
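For readers without Stata, the sketch below gives a rough Python equivalent of this estimation step: a weighted proportion with a linearized (Taylor-series) standard error for a single-stage with-replacement design, which is the general approach behind pweight-based estimation. This is a simplified illustration under stated assumptions, not the authors' actual estimation code, and the data it runs on are hypothetical.

```python
import numpy as np

def weighted_proportion(y, w):
    """Weighted proportion with an approximate design-based standard
    error (Taylor linearization, single-stage with-replacement design).
    A simplified illustration, not the authors' actual Stata code."""
    y, w = np.asarray(y, dtype=float), np.asarray(w, dtype=float)
    p_hat = np.sum(w * y) / np.sum(w)
    # Linearized variate of the ratio estimator; its sum is zero by construction
    z = w * (y - p_hat) / np.sum(w)
    n = len(y)
    se = np.sqrt(n / (n - 1) * np.sum(z ** 2))
    return p_hat, se

# Hypothetical data: 0/1 agreement indicators and design weights
rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=1000)
w = rng.choice([1.5, 2.5, 4.0], size=1000)
print(weighted_proportion(y, w))
```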
In terms of demographic variables, the addition of the mail cases brings the follow-up sample more in line with the ALLBUS distributions than the Web-only group. For example, 43 percent of the Web-only respondents report a university entrance qualification, compared to 27 percent of the Web+mail respondents and 24 percent of the ALLBUS respondents. Similarly, the age and gender distributions of the follow-up sample including mail are closer to the ALLBUS distributions.
When comparing the responses to attitude and opinion measures, we note that mode effects could account for some of the differences, given that the follow-up survey was self-administered while ALLBUS was interviewer administered. Nonetheless, we see a clear gradient, with the addition of the mail responses bringing the distributions more in line with the ALLBUS responses. For instance, 23 percent of Web respondents agree with the statement that “with the future looking as it does, it’s almost irresponsible to bring children into the world,” compared with 31 percent of Web+mail respondents and 37 percent of ALLBUS respondents. Similarly, 53 percent of Web respondents endorse the statement “most people don’t care in the slightest what happens to others,” while 58 percent of Web+mail respondents and 72 percent of ALLBUS respondents do so. We similarly see increasing levels of distrust in others and decreasing levels of interest in politics as we move from the Web respondents only to Web+mail, and finally to ALLBUS respondents. In terms of abortion attitudes, ALLBUS respondents interviewed face-to-face show lower levels of support for abortion under a variety of circumstances, while Web-only respondents show the highest levels of support.
Discussion
We set out to address two research questions. The first focused on the efficacy of collecting e-mail addresses for a Web follow-up survey. Although fewer than half of the respondents were willing to provide an e-mail address, asking for one does not appear to harm subsequent response rates: those who were asked for an e-mail address but declined to provide one did not respond at a lower rate than those who were not asked at all. Overall, those asked (whether or not they provided an e-mail address) responded at a slightly higher rate to a mailed invitation, whether looking at Web-only responses or at Web+mail responses.
Our second research question addressed the utility of a sequential Web+mail design rather than a Web-only follow-up survey. We find evidence that doing so not only significantly increases response rates (from 17 percent to 56 percent overall) but also brings response distributions more in line with the ALLBUS responses. Not only does this bring in those without Internet access, but it also brings in substantial numbers of respondents who reported having Internet access but who did not respond to the Web survey invitation. While not suggesting that the ALLBUS is a “gold standard,” our results do suggest that restricting the follow-up survey to those with Internet access may produce distributions that deviate substantially from the core ALLBUS results. Again, this points to the value of including a mail component in a Web survey.
[1] In contrast, the U.K. Household Longitudinal Survey (or Understanding Society) sends mailed invitations to everyone and e-mail invitations to those who provided e-mail addresses.
[2] See http://www.gesis.org/en/allbus/study-profiles/2012/ for further information.