Online Surveys on Mobile Devices
“Survey respondents are increasingly attempting to take surveys on their mobile devices, whether researchers intend for this or not” (Cazes et al. 2011, p. 2). Approximately 50 percent of US adults own a smartphone (Nielsen 2012; Smith 2012), and approximately 20 percent of US adults own a tablet (Rainie 2012).
These trends have serious implications for online surveys, especially for online surveys that are designed specifically for a computer screen and not modified, or optimized, for the smaller screen typical of a mobile device. In this paper, we present results from tablet, computer, and smartphone administrations of a survey. For each, we examine three measures of survey taking behavior. Our main focus is on surveys taken with tablets and whether tablet survey administration is comparable to computer survey administration. Our results are preliminary, but instructive, since there is currently very little research on tablet administration of online surveys. However, with tablet ownership on the rise, understanding the effects of this survey mode will become increasingly important. Just as tablets have served to fill the void between the often difficult-to-read smartphone screen and the difficult-to-transport computer, tablets can also fill the void for mobile survey takers.
Online surveys taken on mobile devices can present problems. Perhaps the most serious is survey breakoff. Previous research on mobile web surveys (typically those not optimized for mobile devices) has reported breakoff rates in the range of 25–70 percent (Callegaro 2010; Callegaro and Macer 2011).
Similarly, Peterson (2012) reports that unintended mobile respondents break off twice as often and take 25–50 percent longer to complete online surveys relative to computer respondents. However, his research summary focuses on unintended mobile respondents taking surveys on smartphones, not tablets.
Currently, a very small percentage of respondents (about 1 percent) are taking online surveys on tablets (Callegaro and Macer 2011; Guidry 2012; McClain, Crawford, and Dugan 2012), and very little research exists on tablet administration of online surveys.
In one of the few studies to address this, Guidry (2012) analyzed data from the National Survey of Student Engagement (NSSE), an annual online survey of undergraduate students. In 2012, 3.8 percent of NSSE respondents took the online survey on a smartphone and 0.4 percent took it on an iPad. (No other types of tablets were used.) Guidry found that iPad respondents had similar abandonment rates as computer respondents (and much lower rates than smartphone respondents), similar rates of item-missing data, and similar rates of response non-differentiation (and much lower rates than smartphone respondents).
In this paper, we add to this nascent research by comparing tablet, computer, and smartphone administrations of a survey among a national sample of adults.
One of the original objectives of this study was to test mobile phone surveys versus surveys done on a computer. For the mobile survey, we utilized a smartphone survey app, the Survey on Demand App (SODA), developed by Techneos (a Confirmit company). The survey app has been programmed for all major types of smartphone operating systems, with a separate optimized visual design for each. See Buskirk and Andrus (2012) for a discussion of this app-based smartphone survey approach.
In this study, the same survey was administered to smartphone respondents and online respondents. The questionnaire contained 24 questions on consumer behavior, Internet usage, and TV viewing habits. The survey was designed primarily with mobile app respondents in mind. It featured short questions, short response lists, no grid items, minimal need for vertical scrolling, and was relatively short.
The survey was fielded to a large, national sample of online panelists from KnowledgePanel®, the probability-based online panel maintained by Knowledge Networks (a GfK company). To avoid confounding survey mode with respondent characteristics in the mode-effect analysis, the sample was restricted to smartphone users. Panelists were pre-screened 1 week prior to the survey. Of the 2,443 eligible smartphone users, 1,254 were randomly assigned to take the survey on their smartphone, via the mobile app, and 1,187 were randomly assigned to take the survey online, on a computer (as they usually do).
Those assigned to the mobile app mode were emailed instructions to download and install the survey app on their smartphone and were provided a survey code to start the survey. The survey code ensured that only those assigned to the mobile app survey could access it. Those assigned to the online mode were sent email invitations that contained a link to the survey and were instructed to complete the survey on a PC or laptop. A total of 732 panelists responded to the mobile app survey and 725 responded to the online survey, representing survey participation rates of 58 percent and 61 percent, respectively.
We received a total of 705 completed mobile app surveys and 711 completed online surveys. Tables 1 and 2 present the modes and platforms actually used to complete the survey. Among those randomly assigned to the online mode, 128 of the panelists completed the survey on a smartphone, rather than on a computer (as instructed). We also identified 33 unintended mobile respondents who completed the survey with a tablet, and more specifically, an iPad. No other types of tablets were used to take the survey.
These panelists accessed the survey by opening the email invitation on their mobile device. Fortunately, the paradata we collected included the user agent string, which identifies the type of browser and device used to access the survey.
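The device identification described above can be sketched with simple substring matching on the user agent string. The function below is an illustrative sketch only, not the actual detection logic used in the study; real user agent parsing is messier and is usually handled by a dedicated library.

```python
def classify_device(user_agent: str) -> str:
    """Classify the device type from a browser user agent string.

    Illustrative sketch: matches a few well-known substrings to
    distinguish computers, smartphones, and tablets.
    """
    ua = user_agent.lower()
    # Order matters: iPad user agents also mention Safari/Mobile.
    if "ipad" in ua:
        return "tablet (iPad)"
    if "iphone" in ua:
        return "smartphone (iPhone)"
    if "android" in ua:
        # Android tablets typically omit "Mobile" from the UA string.
        return "smartphone (Android)" if "mobile" in ua else "tablet (Android)"
    if "blackberry" in ua:
        return "smartphone (BlackBerry)"
    return "computer"

# Example: an iPad Safari user agent.
ipad_ua = ("Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X) "
           "AppleWebKit/534.46 (KHTML, like Gecko) Version/5.1 "
           "Mobile/9B176 Safari/7534.48.3")
print(classify_device(ipad_ua))  # tablet (iPad)
```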
As shown in Table 2, the majority of the unintended mobile respondents completed the survey with Android and iPhone smartphones. This occurred despite the fact that the survey was neither optimized nor intended for smartphone mobile web administration. Buskirk and Andrus (2012) describe this situation as the “passive-mobile browser survey approach,” and it entails many disadvantages.
In terms of demographic characteristics, iPad respondents were significantly more likely than other respondents to have at least a Bachelor’s degree, to have a household income of at least $75,000, to be married, and to be homeowners. Not surprisingly, 39 percent reported that they primarily use a tablet to access the Internet (compared to 15 percent of other respondents).
On the other hand, the 128 unintended smartphone respondents were significantly more likely than other respondents to be young, female, to reside in larger households, and to access the Internet primarily with their smartphone (61 percent vs. 26 percent of others).
Presented in Figures 1A–1D are screenshots taken from smartphone, tablet, and computer administrations of the survey.
In our analysis, we examine three measures of survey taking behavior – breakoff rates, survey completion times, and item-missing data – among tablet respondents, computer respondents, and smartphone respondents.
As shown in Table 3, breakoff rates for the survey were quite low, across all modes and platforms. However, the breakoff rates for the mobile web respondents were noticeably higher, consistent with findings reported by Peterson (2012). Within this group of unintended mobile respondents, the breakoff rate for iPad respondents was about half of that for Android and iPhone smartphone respondents, consistent with findings from Guidry (2012).
To test for differences in breakoffs by survey administration mode and platform, we estimated a logistic regression equation. This multivariate analysis allows us to predict the odds of breakoff by mode and platform while statistically controlling for the demographic characteristics of respondents. Consistent with the patterns displayed in Table 3, the regression results reveal that mobile app respondents and smartphone web respondents (both Android and iPhone) were significantly more likely to break off than computer respondents. On the other hand, there was no significant difference in the odds of breakoff between iPad respondents and computer respondents.
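The quantity being modeled can be illustrated with a small sketch. The counts below are hypothetical, chosen only to show how breakoff odds and an odds ratio are formed; the logistic regression estimates the same comparison while adjusting for demographic covariates.

```python
def breakoff_odds(breakoffs: int, completes: int) -> float:
    """Odds of breakoff: breakoffs relative to completed interviews."""
    return breakoffs / completes

# Hypothetical counts, for illustration only (not the study's data).
computer = breakoff_odds(breakoffs=14, completes=550)   # ~0.025
iphone_web = breakoff_odds(breakoffs=6, completes=60)   # 0.10

# The odds ratio compares a platform's breakoff odds to the computer
# baseline; values above 1 indicate a higher propensity to break off.
odds_ratio = iphone_web / computer
print(round(odds_ratio, 2))  # 3.93
```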
Survey Completion Time
Summary statistics for survey completion times are presented in Table 4. In general, respondents completed the survey in a median time of about 5.5 minutes. This was shorter than anticipated for a 24-question survey but can be explained, in part, by the use of short questions and short sets of response options. Extreme outliers inflate the means and standard deviations, but these statistics are presented for the interested reader.
Focusing on median completion time, mobile web respondents required much more time than others to complete the survey, consistent with findings from Peterson (2012). However, iPad respondents completed the survey in a median time of 5.1 minutes. In contrast, those completing the mobile web survey using Android and iPhone smartphones took much longer – 8.0 minutes and 10.2 minutes, respectively. This is not surprising given that the survey was not optimized for smartphone web administration. The questions and text appeared very small on a smartphone browser. Reading the small print, zooming, and selecting among small radio buttons and check boxes required more time (and increased respondent burden).
To test for differences in survey completion time by mode and platform, we estimated an ordinary least squares regression equation. Compared to computer respondents, survey completion time is significantly longer among Android and iPhone mobile app respondents (but not among BlackBerry respondents). Similarly, regression results reveal that Android and iPhone mobile web respondents take significantly more time to complete the survey than computer respondents. However, once again, significant differences were not uncovered between iPad respondents and computer respondents.
Finally, we consider item non-response across the different survey modes and platforms. Presented in Table 5 are percentages of respondents who skipped at least one question in the survey.
With the computer and mobile app administrations, approximately 10–13 percent of respondents did not respond to at least one item in the survey. The percentage is about double among BlackBerry respondents, despite the fact that the mobile app survey was also optimized for BlackBerry devices.
With the mobile web administration, approximately 8–10 percent of respondents did not respond to at least one question in the survey. Interestingly, Android mobile web respondents were much less likely to skip survey items than others, although it is not clear why.
Again, we estimated a logistic regression equation, in this case to predict the odds of skipping at least one question in the survey. Controlling for demographic factors, BlackBerry respondents were significantly more likely than computer respondents to not answer at least one survey question. However, no other significant differences in item non-response by mode or platform were uncovered, consistent with findings from Guidry (2012).
Based on the descriptive and multivariate analyses across the three measures examined, tablet survey administration appears to be comparable to computer survey administration. Across each measure, differences in survey taking behaviors were small and were not statistically significant, consistent with findings from Guidry (2012).
At the same time, with two of the measures – breakoff rates and survey completion time – we consistently uncovered differences between smartphone administration and computer administration. Not surprisingly, differences were more pronounced among smartphone web respondents.
These are intriguing but preliminary results and conclusions, as they are based on a small and self-selected group of tablet respondents. In addition, our results apply only to iPad respondents, since no other types of tablets were used to complete the survey. Still, this provides initial evidence that tablets can fill the void between traditional online surveys and those taken on a mobile device. While tablets are not as widely used as smartphones, they share some common characteristics, such as portability and a touchscreen design. These features can be leveraged when designing surveys for tablets.
However, more research needs to be done to understand fundamental behavioral differences of people when they use smartphones and tablets. For example, while tablets are no doubt more portable than most computers, they likely are not used in the same way that smartphones are used. Instead, tablet usage might take on more characteristics of traditional online surveys. That is, they might be more commonly used when the respondent is seated, focused, and single-tasking, rather than the on-the-go, multi-tasking behaviors of smartphone users. Understanding the key differences and similarities of smartphone behaviors and tablet behaviors will play a critical role in survey design for both modes.
Furthermore, tablets are not often considered to be cellular devices, like smartphones. Although some tablets have cellular capabilities, they are not used as a communication device in the same way a smartphone is. Voice calling and text messaging on tablets are not as common. This distinction further differentiates smartphones and tablets and contributes to the differences in how both are used.
Thus, we encourage additional research on tablet survey administration. Currently, large-scale online surveys (100,000+ respondents) may yield enough tablet respondents to support firmer conclusions, despite the low tablet penetration worldwide. Tablets occupy a unique niche between smartphones and personal computers, and this research is an early attempt to better define how tablets can be used as survey tools.
A total of 25,221 active panelists were sent the smartphone screener survey, and 10,156 responded over a 2-day period. Of those, 2,443 were identified as smartphone owners who were willing to complete a survey on their smartphone.
To account for the pronounced positive skew in survey completion time, we used the natural logarithm of completion time as the dependent variable. In addition, for the OLS analysis, we removed outliers – the 5 percent completing the survey in less than 3.0 minutes and the 5 percent completing it in more than 26.3 minutes.
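The outlier trimming and log transformation described in this note can be sketched as follows. The data here are synthetic, drawn from a lognormal distribution to mimic the positive skew of completion times; the study's actual times (and its 3.0- and 26.3-minute cutoffs) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical completion times (minutes) with a positive skew,
# standing in for the study's actual data.
times = rng.lognormal(mean=np.log(5.5), sigma=0.6, size=1000)

# Trim the fastest and slowest 5 percent before fitting the OLS model.
lo, hi = np.percentile(times, [5, 95])
trimmed = times[(times >= lo) & (times <= hi)]

# Model the natural log of completion time to tame the skew.
log_time = np.log(trimmed)
```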
Note that this is a respondent-level measure, not a question-level measure of item non-response. Item non-response across each of the 24 questions was 1 percent or less. This is true across all modes and platforms.