Survey Practice
Articles
Vol. 11, Issue 2, 2018 · April 9, 2018 EDT

Are Embedded Survey Items the Solution to Low Web Survey Response Rates? An Investigation of the Interaction Between Embedded Survey Items and Time of Survey Administration

Lebena Varghese, Raeal Moore, Benjamin Earnhart
survey research best practices
https://doi.org/10.29115/SP-2018-0012
Survey Practice
Varghese, Lebena, Raeal Moore, and Benjamin Earnhart. 2018. “Are Embedded Survey Items the Solution to Low Web Survey Response Rates? An Investigation of the Interaction Between Embedded Survey Items and Time of Survey Administration.” Survey Practice 11 (2). https://doi.org/10.29115/SP-2018-0012.

Abstract

Web surveys provide survey researchers an opportunity to survey and gather data in an inclusive manner and from a global audience. However, Web surveys typically suffer from low response rates. To combat this issue, numerous studies have investigated how invitation messaging can improve responses. To our knowledge, only one study has examined the impact on response rates of embedding the first survey item in the email invitation message. Furthermore, no studies, to our knowledge, have investigated whether such an email invitation would augment response rates during time periods that are less salient to the population under investigation. Our study tested a 2 (invitation message) x 6 (the day the invitation was sent) experimental design. The results show that individuals who received the email invitation with the embedded survey item were more than two times more likely to start the survey and almost one-and-a-half times more likely to complete the survey than those who received only the traditional email invitation. Providing the embedded invitation improved response rates more on test day than on days that were less salient to respondents. Practical implications and suggestions for future research are discussed.

Introduction

Survey researchers are particularly interested in investigating methods of administering surveys and optimizing the rate at which people respond to them. In the past, surveys were administered using different methods such as telephone, postal mail (paper form), and face-to-face administration sessions. Although these methods are still in use, research illustrates that Web surveys are the predominant mode of administering surveys (Couper, Traugott, and Lamias 2001; Dillman, Smyth, and Christian 2014). This trend is not surprising, as 40% of the world’s population and 88.1% of the population in the United States have access to the Internet today (internetlivestats.com). Web surveys provide researchers an opportunity to survey and gather data in an inclusive manner and from a global audience (Wyatt 2000).

Past research has shown that Web surveys tend to have several advantages over conventional surveying methods, such as faster survey delivery and simplified data collection and data entry. Researchers are able to explore a variety of survey design options by using Web survey software, such as Qualtrics or SurveyMonkey, and are no longer restricted to designs that are only conducive to paper-and-pencil formats (Couper 2000; Fricker and Schonlau 2002; Sheehan and McMillan 1999). Furthermore, recent research shows that Web surveys have an advantage over interviewer-administered surveys with regard to collecting sensitive data (e.g., sexual attitudes): participants were more likely to report sensitive behaviors when surveyed through a Web survey than through other survey methods (Burkill et al. 2016).

Despite these advantages, Web surveys typically suffer from low response rates. A meta-analysis on survey response rates found that Web survey response rates were 11% lower than those of other survey methods (Manfreda et al. 2008). Low response rates can be the Achilles’ heel of Web surveys, adversely impacting the quality of the data they gather. Specifically, low Web survey response rates tend to compromise the power of statistical tests because researchers may not have enough participants in each condition to detect an effect. Furthermore, low response rates may reduce our confidence in the generalizability of results, as respondents may differ in important ways from those who do not respond. Hence, the effects we observe could be statistical artifacts (e.g., range restriction) rather than true effects.

As ensuring an adequate response rate is an important element of maintaining the quality of data gathered through Web surveys, this paper examines the efficiency of a particular strategy for enhancing Web survey response rates. Specifically, this paper investigates whether embedding the first survey question in the email invitation message increases response rates of Web surveys. To our knowledge, only two experimental studies have examined this effect (Liu and Inchausti 2017; Vannette 2016). Both studies reported that having an embedded survey item in the email invitation increased the rates at which participants both started and completed the survey. Our study is a direct attempt to replicate and extend this finding.

Research shows that when an issue or event is salient to a population and when survey items remotely or directly tap into that issue or event, participants are more likely to respond to a survey (Sheehan and McMillan 1999). A salient event for one of our populations of interest, high school students, is taking the ACT test. A recent experimental study revealed that students who were surveyed either before or after ACT test day responded at lower rates (15.3%–22.5%) than those who were surveyed on the day they took the ACT test. In other words, students were more likely to respond to Web surveys related to the ACT on a day when the ACT was already salient to them than on days when it was not at the forefront of their minds (Moore 2016). Drawing from Liu and Inchausti’s (2017) findings, we postulated that embedding the first question of the survey within the email invitation would augment response rates during time periods that are less salient to our population. In other words, we postulated that inserting an embedded question in the email invitation would act as an intervention that alleviates the low response rates observed during less salient time periods.

Research questions

  1. Does embedding the first survey question in an email invitation increase response rates for Web surveys?

  2. Can having an embedded survey item in the email invitation boost response rates when students are surveyed during time periods that may be less salient?

Methods

An online survey was administered via Qualtrics to a disproportionate stratified random sample of students (N = 45,000) who had registered for the April 2016 ACT test. Students who self-reported as Asian, Hispanic, or African American were oversampled. White students, students who chose two or more races, and students who did not answer the race/ethnicity question were undersampled. All American Indian/Alaska Native and Native Hawaiian/Other Pacific Islander students were included. This sampling design was used to ensure a sufficient number of respondents in each racial/ethnic group for the analyses of an applied research study. Students’ self-reported race was obtained from ACT’s database of students who registered for the ACT test.
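To make the sampling procedure concrete, the sketch below shows one way a disproportionate stratified sample of this kind could be drawn in Python with pandas. The column name race_ethnicity, the per-stratum draw sizes, and the random seed are illustrative assumptions; the paper does not report the actual allocation beyond the oversampling and undersampling rules described above.

    import pandas as pd

    # Hypothetical per-stratum draw sizes; the study's actual allocation is not reported.
    TARGETS = {
        "African American": 9_000,
        "Asian": 9_000,
        "Hispanic": 9_000,
        "White": 7_000,
        "Two or more races": 3_000,
        "No response": 3_000,
    }
    # Strata in which every registrant is included (a census rather than a sample).
    CENSUS_STRATA = {"American Indian/Alaska Native", "Native Hawaiian/Other Pacific Islander"}

    def draw_sample(registrants: pd.DataFrame, seed: int = 2016) -> pd.DataFrame:
        """Draw a disproportionate stratified random sample from the registrant frame."""
        parts = []
        for stratum, group in registrants.groupby("race_ethnicity"):
            if stratum in CENSUS_STRATA:
                parts.append(group)                                # take every registrant
            elif stratum in TARGETS:
                n = min(TARGETS[stratum], len(group))              # over- or undersample
                parts.append(group.sample(n=n, random_state=seed))
        return pd.concat(parts, ignore_index=True)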

Students were randomly assigned to experimental groups using a 2 (invitation message) x 6 (the day the invitation was sent) research design. The invitation message factor had two conditions: an email invitation with an embedded item vs. a traditional email invitation. In Condition 1, the email acknowledged that the student had recently registered to take the ACT test and asked him or her to participate in a Web survey. The invitation then asked students to start the survey by clicking on their answer to the first survey question (Figure 1). This is a filter question we ask at the beginning of each survey administered to students: although most email addresses provided at the time of registration belong to students, occasionally a school counselor or parent will register a student for the test and inadvertently include their own contact information rather than the student’s. Once participants clicked on the option that best represented them, they were redirected to the online survey, which already contained their answer to the first question. All participants who did not choose “I am a student registered to take the ACT” were exited from the survey.

Participants who were randomly assigned to Condition 2 received the traditional email invitation. This message included the same short narrative acknowledging that participants had recently registered for the test and asked them to participate in a Web survey. However, the first question was not embedded in the invitation message. Instead, participants were asked to click on a URL customarily provided by Qualtrics (Figure 2). Once participants clicked on the link, they were redirected to the online survey.

Figure 1 Email invitation for Condition 1: first question embedded.
Figure 2 Email invitation for Condition 2: traditional email.
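The paper does not describe the mechanics Qualtrics uses to embed the first question, so the sketch below is only a schematic of the general idea: each answer option in the Condition 1 email is a hyperlink that carries the chosen answer as a URL parameter, while the Condition 2 email carries a single generic survey link. The base URL, parameter names, and all option wording beyond the first option are hypothetical.

    from urllib.parse import urlencode

    BASE_URL = "https://survey.example.org/act-registrant-survey"   # hypothetical survey URL

    FILTER_OPTIONS = [
        "I am a student registered to take the ACT",      # wording from the paper
        "I am a parent or guardian",                       # remaining options are illustrative
        "I am a school counselor or other school staff",
    ]

    def embedded_invitation(respondent_id: str) -> str:
        """Condition 1: the first (filter) question rendered as one link per answer option."""
        lines = ["You recently registered for the ACT test. Please start our short survey",
                 "by clicking on the answer that best describes you:"]
        for code, option in enumerate(FILTER_OPTIONS, start=1):
            query = urlencode({"rid": respondent_id, "q1": code})   # answer travels in the URL
            lines.append(f"  {option}: {BASE_URL}?{query}")
        return "\n".join(lines)

    def traditional_invitation(respondent_id: str) -> str:
        """Condition 2: the same narrative with a single survey link and no embedded item."""
        query = urlencode({"rid": respondent_id})
        return ("You recently registered for the ACT test. Please take our short survey:\n"
                f"  {BASE_URL}?{query}")

Under this scheme, clicking an answer link would land the respondent on the Web survey with question 1 already recorded, which matches the behavior described for Condition 1.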

Within each of these two conditions (embedded item invitation vs. traditional invitation), participants were randomly assigned to receive the invitation on different days (Table 1). Timing conditions were defined by the distance in time from the most salient experience students had with ACT as an institution: the day they took the ACT test. All invitations were sent on a Saturday at 3:00 p.m. to control for the day of the week and time of day the invitation message was sent. This design allowed us to determine whether embedding the first survey item in the invitation had a differential impact on response rates depending on when the invitation message was sent.

Table 1 Timing of the invitation message.
Timing of the invitation message | Date
Time 1: Five weeks before the test | March 5, 2016
Time 2: Four weeks before the test | March 12, 2016
Time 3: Three weeks before the test | March 19, 2016
Time 4: Test day | April 9, 2016
Time 5: One week after the test | April 16, 2016
Time 6: Three weeks after the test | April 30, 2016
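A minimal sketch of the 2 x 6 random assignment follows, assuming a list of sampled student IDs. Shuffling the IDs and dealing them round-robin across the 12 cells keeps cell sizes essentially equal, consistent with the note to Table 2; the cell labels and seed are assumptions.

    import random
    from datetime import datetime

    SEND_TIMES = {                                   # Saturdays at 3:00 p.m., from Table 1
        "T1": datetime(2016, 3, 5, 15, 0),
        "T2": datetime(2016, 3, 12, 15, 0),
        "T3": datetime(2016, 3, 19, 15, 0),
        "T4": datetime(2016, 4, 9, 15, 0),           # test day
        "T5": datetime(2016, 4, 16, 15, 0),
        "T6": datetime(2016, 4, 30, 15, 0),
    }
    CONDITIONS = ("embedded", "traditional")

    def assign_cells(student_ids, seed=2016):
        """Randomly assign each sampled student to one of the 2 x 6 experimental cells."""
        rng = random.Random(seed)
        ids = list(student_ids)
        rng.shuffle(ids)
        cells = [(cond, timing) for cond in CONDITIONS for timing in SEND_TIMES]
        return {sid: cells[i % len(cells)] for i, sid in enumerate(ids)}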

A reminder message was sent seven days after the initial invitation. It was identical to the initial invitation except for a short sentence acknowledging that it was a friendly reminder to participate in the survey. The survey closed seven days after the reminder message was sent. All experimental groups were directed to the same Web survey. There were, however, two skip-logic patterns. The second survey item asked students to identify their race/ethnicity; if a student chose Hispanic, Asian, or Pacific Islander, they were asked to indicate a more detailed racial category. Likewise, subsequent items asked them to rank the top three languages most commonly spoken in the home; if a language other than English was selected, five additional language questions were asked (the English language learner, or ELL, items). All participants were asked to answer seven items about the type of school they attended.
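As a rough illustration of the two skip-logic patterns, the helper below returns the extra question blocks a respondent would see; the block names are hypothetical, and only the branching rules come from the description above.

    def follow_up_blocks(race: str, home_languages: list) -> list:
        """Return the additional question blocks triggered by the two skip-logic patterns."""
        blocks = []
        if race in {"Hispanic", "Asian", "Pacific Islander"}:
            blocks.append("detailed_race_category")      # one more detailed race/ethnicity item
        if any(lang != "English" for lang in home_languages):
            blocks.append("ell_language_items")          # five additional language questions
        return blocks

    # Example: a Hispanic student who ranks Spanish first sees both follow-up blocks.
    print(follow_up_blocks("Hispanic", ["Spanish", "English"]))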

Results

Our analyses involved two dependent variables: a survey started/opened indicator and survey completion. The survey started/opened variable specifies whether a participant answered the first (embedded) question or clicked on the survey link. We define survey completion as responding to 80% or more of the survey. Table 2 summarizes the number of respondents for these two dependent variables.

Table 2 Number of participants who started the survey and completed 80% or more of the survey in each condition.
Timing of invitation | Embedded: clicked item (n) | Embedded: clicked item (%) | Embedded: completed ≥80% (n) | Embedded: completed ≥80% (%) | Traditional: clicked link (n) | Traditional: clicked link (%) | Traditional: completed ≥80% (n) | Traditional: completed ≥80% (%)
T1: Five weeks before the test | 1,293 | 62.43 | 681 | 55.01 | 778 | 37.57 | 557 | 44.99
T2: Four weeks before the test | 1,299 | 64.85 | 664 | 55.52 | 704 | 35.15 | 532 | 44.48
T3: Three weeks before the test | 1,297 | 63.55 | 667 | 53.70 | 744 | 36.45 | 575 | 46.30
T4: Test day | 1,490 | 64.61 | 829 | 57.61 | 816 | 35.39 | 610 | 42.39
T5: One week after the test | 1,425 | 64.80 | 756 | 57.93 | 774 | 35.20 | 549 | 42.07
T6: Three weeks after the test | 1,004 | 65.84 | 533 | 55.75 | 521 | 34.16 | 423 | 44.25
Total | 7,808 | | 4,130 | | 4,337 | | 3,246 |

Note: The number of participants assigned to each of the experimental groups was held constant, i.e., N = 7,500. Within each row, the percentages give the share of all starters (or of all completers) in that timing group accounted for by each invitation condition.
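A sketch of how the two dependent variables might be derived from a response file follows. The column names (clicked_link, q1, and the item columns) are assumptions, and whether the 80% threshold is taken over all items or only the items a respondent was routed to is not specified in the paper; the sketch uses all item columns.

    import pandas as pd

    def add_outcomes(responses: pd.DataFrame, item_cols: list) -> pd.DataFrame:
        """Add the started/opened indicator and the 80%-completion indicator."""
        out = responses.copy()
        # Started/opened: answered the embedded first question or clicked the survey link.
        clicked = out["clicked_link"].fillna(False).astype(bool)
        out["started"] = (clicked | out["q1"].notna()).astype(int)
        # Completed: answered 80% or more of the survey items.
        out["completed"] = (out[item_cols].notna().mean(axis=1) >= 0.80).astype(int)
        return out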

We also created two control variables indicating whether participants received additional survey items. Participants who reported that a language other than English was spoken at home (the ELL items) or who identified themselves as Asian, Hispanic, or Pacific Islander received additional survey items. We controlled for these variables because one could argue that receiving additional survey items could impede completion rates. Hence, it was imperative to determine whether having an embedded survey item within the email invitation had a positive effect on completion rates after controlling for the fact that some participants received additional survey items. Our independent variables were the invitation message condition participants were assigned to and the length of time between test day and the day they were surveyed (see Table 3).

Table 3 Logistic regression model predicting survey completion.
Predictors | b | SE | Odds ratio
Control variables | | |
English as second language items (0,1) | 2.63** | 0.05 | 13.88
Additional ethnicity items (0,1) | 2.49** | 0.04 | 12.11
Independent variables | | |
Embedded item (0,1) | 0.28** | 0.07 | 1.33
T1: Five weeks before the test | -0.02 | 0.07 | 0.97
T2: Four weeks before the test | -0.11 | 0.07 | 0.88
T3: Three weeks before the test | -0.06 | 0.07 | 0.93
T5: One week after the test | -0.09 | 0.07 | 0.90
T6: Three weeks after the test | -0.32** | 0.07 | 0.72
Interaction terms | | |
Embedded item X T1 | -0.25* | 0.10 | 0.77
Embedded item X T2 | -0.17 | 0.10 | 0.83
Embedded item X T3 | -0.22* | 0.10 | 0.80
Embedded item X T5 | -0.02 | 0.10 | 0.97
Embedded item X T6 | -0.11 | 0.10 | 0.89

Note: **p < .001; *p < .05. English as second language items (0 = did not receive this extra set of survey items, 1 = received the extra set). Additional ethnicity items (0 = did not receive this extra set of survey items, 1 = received the extra set). Embedded item (0 = did not receive the embedded item in the email invitation, 1 = received the embedded item in the email invitation). Survey completion (0 = completed less than 80% of the survey, 1 = completed 80% or more of the survey). T1, T2, T3, T5, and T6 are dummy coded; for each of these variables, 0 = surveyed on test day.

To address the research questions, we carried out a series of logistic regressions. To answer our first research question, we regressed the started/opened indicator variable on the invitation variable while including the control variables in the model. The analysis revealed that the main effect of receiving an embedded survey item in the email invitation on starting/opening the survey was significant, b = 0.83, p < .001. Specifically, individuals who received the email invitation with the embedded survey item were 2.31 times more likely to start the survey than those who received only the traditional email invitation.

Similarly, to investigate whether embedding the first item in the email invitation increased the survey completion rate, we regressed the survey completion variable on the embedded item variable while controlling for whether the additional ELL and/or detailed race/ethnicity items were presented. The logistic regression analysis revealed that this main effect was significant, b = 0.28, p < .001. Specifically, participants who received an embedded survey item in their email invitation were 1.33 times more likely to complete 80% or more of the survey than those who did not receive an embedded survey item in their email invitation. As survey completion is of greater consequence than survey start rates from both business and research perspectives, the following analyses were carried out using survey completion as the outcome variable.
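For readers who want to reproduce this kind of model, the sketch below fits the completion main-effect model with statsmodels on a small simulated data set. The column names and the simulation itself are stand-ins for the study's analysis file; only the coefficients borrowed from Table 3 come from the paper.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for the analysis file: one row per sampled student, 0/1 columns.
    rng = np.random.default_rng(2016)
    n = 5_000
    df = pd.DataFrame({
        "embedded":        rng.integers(0, 2, n),   # received the embedded item invitation
        "ell_items":       rng.integers(0, 2, n),   # received the extra ELL items
        "ethnicity_items": rng.integers(0, 2, n),   # received the extra ethnicity items
    })
    # Generate completion from coefficients roughly matching Table 3's main effects.
    logit_p = -1.0 + 0.28 * df["embedded"] + 2.63 * df["ell_items"] + 2.49 * df["ethnicity_items"]
    df["completed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    # Main-effect model: survey completion on the embedded-item indicator plus controls.
    main_effect = smf.logit("completed ~ embedded + ell_items + ethnicity_items", data=df).fit()
    print(np.exp(main_effect.params))   # odds ratios; exp(0.28) is approximately 1.33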

We were interested in examining whether being exposed to an embedded survey item in the email invitation would provide a boost in survey completion rate when participants are surveyed during time periods that are distant (a few weeks before or after) from the ACT test day. From past research within our organization (Moore 2016), we know that the response rate drops when we survey students before or after test day. Hence, we were interested in whether the presence of an embedded survey item would attenuate the negative relationship between less salient time periods and completion rate. To address this research question, we created five dummy-coded variables. In each of these dummy-coded variables, participants who were surveyed on the day of the ACT test served as the reference group (Table 3). We also created five interaction terms: each time variable was multiplied separately by the embedded survey item variable.
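Continuing the simulated data frame from the previous sketch, the dummy coding and interaction terms can be expressed directly in a model formula; treating T4 (test day) as the reference level reproduces the coding described above. The timing column added here is, again, simulated.

    # Add a timing cell (T1 through T6) to the simulated data and fit the interaction model.
    df["timing"] = rng.choice(["T1", "T2", "T3", "T4", "T5", "T6"], size=len(df))
    interaction = smf.logit(
        "completed ~ embedded * C(timing, Treatment(reference='T4'))"
        " + ell_items + ethnicity_items",
        data=df,
    ).fit()
    # The embedded-by-timing rows of this output are the interaction odds ratios.
    print(np.exp(interaction.params))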

The results revealed that only one of the main effects of the time variables was significant (Table 3); however, the results were in the expected direction: participants who were surveyed before or after test day were less likely to complete 80% or more of the survey. Among the interaction terms tested, only two were significant (Table 3). Specifically, the positive effect of having an embedded survey item in the email invitation on survey completion was 1.28 times higher for participants who were surveyed on test day than for participants who were surveyed five weeks before the test (Figure 3a). Similarly, the positive effect of having an embedded survey item on survey completion was 1.25 times higher for participants who were surveyed on test day than for those who were surveyed three weeks before the test (Figure 3b).

Figure 3a Interaction effect between test time periods and invitation message on survey completion rate.
Figure 3b Interaction effect between test time periods and invitation message on survey completion rate.
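As a quick arithmetic check on the ratios quoted above (an interpretation of the Table 3 coefficients, not an additional analysis), exponentiating the negated interaction coefficients recovers the reported figures.

    import numpy as np

    # exp(0.25) is approximately 1.28 and exp(0.22) is approximately 1.25: the test-day
    # advantage of the embedded item relative to five and three weeks before the test.
    for label, b in [("Embedded item X T1", -0.25), ("Embedded item X T3", -0.22)]:
        print(label, round(float(np.exp(-b)), 2))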

Discussion

Survey response rate is considered a survey quality indicator by many researchers, and identifying optimal methods to increase survey response rates was one of the main objectives of this study. Our results are consistent with past research (Liu and Inchausti 2017): having an embedded survey item within the invitation message did indeed increase the likelihood of participants starting the survey and completing the majority of it. Several mechanisms could account for the positive effect the embedded question has on survey response rates. For instance, the embedded question could be a researcher’s way of getting his or her foot in the door by asking participants to answer at least one survey item. Past research shows that when individuals comply with an initial request, they are more likely to comply with similar, larger requests (Petrova, Cialdini, and Sills 2007). In this instance, participants who answered the embedded item may have wanted to appear consistent and thereby decided to comply with a similar, larger request: completing the survey. Another plausible mechanism is that those who received an embedded item in their survey invitation encountered fewer hurdles to participation than those who had to click on an additional link to reach the survey. This proposition has received some empirical support: past research suggests that reducing barriers to survey participation results in higher completion rates (Vannette 2016).

In addition to replicating this finding, we examined whether having an embedded survey item within the invitation message would increase response rates during periods that are characterized by relatively low response rates. Our results suggest that the embedded item does not provide an additional boost when participants are surveyed during time periods that are not salient to them. Although we had hoped that an embedded survey item would mitigate the low response rates observed during less salient time periods, its positive effect on completion was no stronger during those periods than on test day. Nonetheless, the study results reveal that if researchers would like to increase response rates during time periods that are salient to participants, they should consider embedding the first survey item in their email invitations.

The results from this study suggest that although embedding a survey item within the invitation may increase response rates, such an increase may not be evident in circumstances where response rates are particularly low. For example, future research should examine whether embedding a survey item within the invitation can boost survey responses when the survey is long, rather than short, or when the survey is about a topic that is less interesting to participants, rather than more interesting.

References

Bachmann, D.P., J. Elfrink, and G. Vazzana. 1999. “Email and Snail Mail Face off in Rematch.” Marketing Research 11 (4): 10–15.
Burkill, S., A. Copas, M.P. Couper, S. Clifton, P. Prah, J. Datta, F. Conrad, K. Wellings, A.M. Johnson, and B. Erens. 2016. “Using the Web to Collect Data on Sensitive Behaviours: A Study Looking at Mode Effects on the British National Survey of Sexual Attitudes and Lifestyles.” PLoS One 11 (2): 1–2.
Couper, M.P. 2000. “Web Surveys: A Review of Issues and Approaches.” Public Opinion Quarterly 64 (4): 464–94.
Couper, M.P., M.W. Traugott, and M.J. Lamias. 2001. “Web Survey Design and Administration.” Public Opinion Quarterly 65 (2): 230–53. https://doi.org/10.1086/322199.
Dillman, D.A., J.D. Smyth, and L.M. Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys. Hoboken, NJ: Wiley.
Fricker, R.D., and M. Schonlau. 2002. “Advantages and Disadvantages of Internet Research Surveys: Evidence from the Literature.” Field Methods 14 (4): 347–67. https://doi.org/10.1177/152582202237725.
Liu, M., and N. Inchausti. 2017. “Improving Survey Response Rates: The Effect of Embedded Questions in Web Survey Email Invitations.” Survey Practice 10 (1): 1–6.
Manfreda, K.L., M. Bosnjak, J. Berzelak, I. Haas, and V. Vehovar. 2008. “Web Surveys versus Other Survey Modes: A Meta-Analysis Comparing Response Rates.” Journal of the Market Research Society 50 (1): 79–104.
Moore, R. 2016. “How the Timing of Survey Invitations Affects Student Response Rates.” http://www.act.org/content/dam/act/unsecured/documents/R1605_How-the-Timing-of-Survey-Invitations-Affects-Student-Response-Rates.pdf.
Petrova, P.K., R.B. Cialdini, and S.J. Sills. 2007. “Consistency-Based Compliance across Cultures.” Journal of Experimental Social Psychology 43 (1): 104–11.
Sheehan, K., and S.J. McMillan. 1999. “Response Variation in E-Mail Surveys: An Exploration.” Journal of Advertising Research 39 (4): 45–54.
Vannette, D.L. 2016. “Does Embedding a Survey Question in the Survey Invitation E-Mail Affect Response Rates? Evidence from a Randomized Experiment.” Paper presented at the 71st Annual Conference of the American Association for Public Opinion Research, May 12–15, Austin, TX.
Wyatt, J.C. 2000. “When to Use Web-Based Surveys.” Journal of the American Medical Informatics Association 7 (4): 426–30. https://doi.org/10.1136/jamia.2000.0070426.
