Introduction and Research Questions
The widespread use of Internet-enabled mobile devices, such as smartphones and tablets, is impressive: 58 percent of U.S. adults own a smartphone and 42 percent have a tablet (Pew Research Center 2014). Mobile devices are spreading widely across Europe, too: in 2012, 40 percent of the German population were using a smartphone, and 15 percent a tablet (IAB Europe 2012). Therefore, self-administered online mobile surveys have become a reality and seem a promising mode for data collection. Related opportunities include the possibility to reach subjects virtually anywhere and at any time, to further accelerate data collection, and to facilitate contact with hard-to-reach individuals (Fuchs 2008; Toepoel and Lugtig 2013). Typical devices also allow rich data and paradata – video, audio, position – to be collected (Buskirk and Andrus 2012).
Several related methodological challenges have also emerged. Coverage is currently a problem, as many individuals still do not use these devices. Other challenges already affect current Web surveys, as a non-negligible share of respondents spontaneously answers through mobile devices, even when this was not intended (Callegaro 2013). This might affect the quality of the data collected: using the small input and output interfaces of these devices may increase measurement error when the questionnaire, and its display, are not optimized for them (Mavletova 2013). Higher nonresponse rates, driven by break-offs, are also likely to emerge, because completing a Web questionnaire on a mobile device is more challenging and time-consuming (Callegaro 2010).
This study focuses on nonresponse issues and investigates the factors influencing persistence in the use of mobile devices across the waves of a panel survey. Within this perspective and using available data from the GESIS Online Panel Pilot (GOPP), we addressed three interrelated research questions. First, what is the prevalence rate of mobile devices in a probability-based Web survey designed for computer participation?
Second, we examined to what extent the use of mobile devices is related to break-offs. Although the available evidence appears partly equivocal (see Toepoel and Lugtig 2013 for an interesting exception), previous studies have found that the break-off rate is higher for mobile respondents than for those completing online questionnaires on a computer (Bosnjak et al. 2013; Buskirk and Andrus 2012; Callegaro 2010; de Bruijne and Wijnant 2013; Mavletova 2013; Peytchev and Hill 2010; SKOPOS 2013; Stapleton 2013). Wells, Justin, and Link (2013) have provided evidence that this effect is related to the use of smartphones but not of tablets. Do results from the GOPP corroborate or disconfirm previous findings?
Third, using the available longitudinal data, we explored the participation patterns and the propensity to continue using mobile devices across waves. Do mobile participants stick to this mode, even after they may have experienced the limitations of the small device input and output interfaces? Or do they switch to the mode of participation intended by the researcher? Can the propensity to persist in the use of mobile devices be predicted on the basis of the social characteristics of respondents? The latter question aims to provide some insight into the profile of non-occasional mobile online respondents and, to the best of our knowledge, is entirely new in the online survey methodology literature.
Data and Method
We carried out a secondary analysis of data collected in the GESIS Online Panel Pilot (GOPP), which is representative of the German-speaking adult population in Germany using the Internet for non-work-related purposes.[1] The GESIS Panel is a probability-based, mixed-mode panel recently developed in Germany as a research infrastructure for the social sciences.[2] In preparation for this project, the GOPP was carried out as a pilot study between February 2011 and May 2012. GOPP panelists were recruited via computer-assisted telephone interviews and then invited to eight subsequent Web survey waves. Overall, the 1,010 participants who started the first Web survey formed the basis for our analysis.
We used the user agent string, collected by the Web survey server during the interviews, to distinguish the type of device used by respondents, as suggested by Callegaro (2010).
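As an illustration of this step, the following Python sketch classifies a user agent string with simple keyword rules. The rules and the example string are hypothetical simplifications for exposition, not the exact procedure applied to the GOPP paradata.

```python
def classify_device(user_agent: str) -> str:
    """Classify a user agent string as 'tablet', 'smartphone', or 'computer'.

    A simplified, keyword-based heuristic for illustration only; a production
    classification would rely on a maintained user agent database.
    """
    ua = user_agent.lower()
    # Check tablets first: iPads and Android tablets usually lack the "mobile" token.
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    if "iphone" in ua or ("android" in ua and "mobile" in ua) or "windows phone" in ua:
        return "smartphone"
    return "computer"

# Hypothetical example string:
print(classify_device(
    "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) AppleWebKit/534.46"
))  # -> smartphone
```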
To estimate the prevalence rate of mobile devices, we carried out a pooled cross-sectional analysis of all interviews started by respondents who participated in at least the first online wave (6,070 interviews started by 1,010 respondents). We estimated the distribution of the devices used when starting the surveys, across the waves.
Our second research question, dealing with break-off rate and its association with the type of device, was addressed using the same dataset of interviews started. We estimated the distribution of break-offs across the three different types of device.
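The following Python sketch illustrates the kind of pooled tabulation underlying these two estimates, using a hypothetical data frame of started interviews; the column names and values are ours, for illustration only.

```python
import pandas as pd

# Hypothetical pooled dataset: one row per started interview (wave/person),
# with the device type derived from the user agent string and the interview
# outcome ('completed' or 'break-off').
interviews = pd.DataFrame({
    "wave": [1, 1, 2, 2, 3, 3],
    "device": ["computer", "smartphone", "computer", "tablet", "smartphone", "computer"],
    "outcome": ["completed", "break-off", "completed", "completed", "completed", "break-off"],
})

# Prevalence of each device type per wave (row proportions), cf. Table 1.
prevalence = pd.crosstab(interviews["wave"], interviews["device"], normalize="index")

# Break-off rate by device type (row proportions), cf. Table 2.
break_off = pd.crosstab(interviews["device"], interviews["outcome"], normalize="index")

print(prevalence.round(3))
print(break_off.round(3))
```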
The third research question, on the pattern of use of mobile devices and on the propensity to use these devices across the GOPP waves, was based on the subsample of respondents who participated in all eight panel survey waves (n = 587). Smartphone and tablet users were jointly classified as mobile respondents in this step of the analysis because of their small number.
We approached this third research question as follows. First, we identified usage patterns by examining the sequence of device types used across survey waves. This yielded four main patterns: (a) panelists who never used mobile devices to participate in survey waves and (b) panelists who almost always used mobile devices (at least 7 waves out of 8). Among the remaining panelists, we distinguished (c) innovation-led participants, i.e., those who used these devices more often in the late waves, and, conversely, (d) those who used mobile devices more often in the initial waves than in the late ones, suggesting that they returned to the more comfortable computer interfaces after having tried completion via mobile device. A residual fifth category was used where no consistent pattern was observed.
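A minimal sketch of this classification logic, assuming a per-respondent sequence of eight device codes, is given below; the labels and cut-offs follow the definitions above, but the implementation details are ours.

```python
def classify_pattern(devices: list[str]) -> str:
    """Assign one of the five usage patterns to a sequence of per-wave
    device codes ('mobile' or 'computer'); a simplified sketch of the
    classification described in the text."""
    n_waves = len(devices)
    mobile_waves = [i for i, d in enumerate(devices) if d == "mobile"]
    n_mobile = len(mobile_waves)

    if n_mobile == 0:
        return "never mobile"                      # pattern (a)
    if n_mobile >= n_waves - 1:
        return "almost always mobile"              # pattern (b): at least 7 of 8 waves
    # Compare mobile use in the first and the second half of the panel.
    first_half = sum(1 for i in mobile_waves if i < n_waves // 2)
    second_half = n_mobile - first_half
    if second_half > first_half:
        return "innovation-led"                    # pattern (c): more mobile use later
    if first_half > second_half:
        return "early mobile, back to computer"    # pattern (d)
    return "no consistent pattern"                 # residual fifth category

# Hypothetical example: mobile only in the last three waves -> innovation-led
print(classify_pattern(["computer"] * 5 + ["mobile"] * 3))
```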
Second, we modelled the propensity to use a mobile device over the course of the pilot study, i.e., the number of times respondents used a mobile device in the eight waves, with a zero-inflated Poisson (ZIP) regression model (see Long 1997 for a discussion). This allowed us to estimate both whether respondents own and use a mobile device in everyday life and their propensity to use it across the waves. The independent variables included in the model are gender, age, level of education, civil status, occupational status, and level of urbanization of the area of residence. A variable measuring attitude toward technology was also included.
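The following sketch shows how such a ZIP model could be estimated with the statsmodels library in Python, using simulated stand-in data; the variable names are ours and do not correspond to the GOPP dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Simulated stand-in data: one row per respondent who took part in all
# eight waves (n = 587). Variable names are illustrative only.
rng = np.random.default_rng(42)
n = 587
owns_mobile = rng.integers(0, 2, n)  # 0 = structural zero (no mobile device)
df = pd.DataFrame({
    "n_mobile": np.clip(rng.poisson(1.0, n) * owns_mobile, 0, 8),
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "tech_attitude": rng.normal(0.0, 1.0, n),
})

y = df["n_mobile"]                                   # waves answered on a mobile device (0-8)
X = sm.add_constant(df[["age", "female", "tech_attitude"]])

# The logit inflation part models the probability of being a structural zero
# (not owning/using a mobile device at all); the Poisson part models the
# number of waves answered on a mobile device.
model = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit")
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```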
Findings
The mobile device prevalence rate ranges between 2.8 percent and 4.3 percent per wave (Table 1). The use of smartphones is more common than the use of tablets. A slight tendency toward increasing use of mobile devices is also apparent, albeit not statistically significant.
We consider these results to be consistent with previous studies from commercial panels (Bosnjak et al. 2013; SKOPOS 2013). The latter showed a higher rate (6 percent) of unintended mobile participants. It should be considered, however, that the GOPP probably provides a better representation of the general Internet population in Germany: it appears reasonable to assume that more technologically oriented people are overrepresented in commercial opt-in panels.
Moving on to our second research question, mapping break-off rates across devices, Table 2 shows the outcome of the interview by type of device used. The unit of analysis is the wave/person, as we pooled cross-sectional data from all eight waves.
A higher break-off rate for respondents using a smartphone is apparent: on average, their likelihood of breaking off almost triples compared with that of respondents answering on a PC or laptop. No significant differences were found when considering the use of a tablet. The differences discussed maintain their statistical significance after controlling for the questionnaire administered (not shown). These results are consistent with previous findings from the aforementioned study by Wells, Justin, and Link (2013), highlighting clear differences between smartphone and tablet use.
Our third research question focuses on the propensity to repeatedly use mobile devices across the different waves of the GOPP.
The patterns of use of mobile devices are shown in Table 3. Not surprisingly, the first group, respondents who took part in all eight waves using their computers, represents the absolute majority (about 92 percent) of respondents.
About 8 percent of the interviewees, however, used a mobile device at least once. Those who used almost only mobile devices make up about 2 percent of the interviewees. Innovation-led users account for a similar share of respondents. The group of respondents who used mobile devices more in the early waves accounts for slightly more than 1 percent of respondents. Finally, no clear pattern of use can be identified for the remaining 2.6 percent of respondents who used a mobile device at least once.
As a second step, we addressed the propensity to use a mobile device by modelling, with a ZIP regression model, the number of times respondents used a mobile device in the eight waves. We did not find any systematic effects, however, beyond the well-known age effect: the share of respondents estimated to use mobile devices in everyday life falls within a confidence interval of 6 to 16 percent, and the older the respondent, the less likely he or she is to use mobile devices. No clear effects were found on the propensity to repeatedly use mobile devices when answering the GOPP questionnaires (full results from this part of the analysis are available at http://dx.doi.org/10.6084/m9.figshare.1249723).
Summary and Discussion
Mobile device prevalence in online panels is not negligible. Our estimates for the last waves of the GOPP are around 4 percent of respondents per wave. Higher rates were estimated for about the same period in German commercial opt-in panels. It is reasonable to assume that this type of response will soon increase further, mirroring the continuing spread of mobile devices. In our explorative study of respondents' behavior across the eight waves, we also found evidence that the potential for occasional unintended mobile response is even higher: 8 percent of the subjects used a mobile device at least once in the panel under examination.
Second, our study finds that answering the survey with a smartphone is clearly associated with a higher risk of breaking off. These findings are consistent with previous research in the field and provide further evidence that this problem is associated with smartphones only.
Finally, we carried out an explorative analysis of the patterns of response and of the propensity to use a mobile device across the eight waves. Our results highlight the relevance of mobile-only online respondents (about 2 percent). However, no clear differences based on personal characteristics could be detected in the propensity to repeatedly use mobile devices across waves, once one of these devices was owned. This certainly deserves further investigation.
Acknowledgment
The authors wish to thank Bella Struminskaya for her useful comments on a previous draft of this paper. The authors alone are responsible for the analysis presented.
Notes
Please see Struminskaya et al. (2014) for an assessment. The GOPP study is archived at: https://dbk.gesis.org/dbksearch/gdesc2.asp?no=0058.