Survey Practice
Vol. 8, Issue 1, 2015 · April 30, 2015 EDT

Survey Participation via Mobile Devices in a Probability-based Online-Panel: Prevalence, Determinants, and Implications for Nonresponse

Teresio Poggio, Michael Bosnjak, Kai Weyandt
Keywords: mobile web survey, online mobile surveys, smartphone, nonresponse
https://doi.org/10.29115/SP-2015-0002
Poggio, Teresio, Michael Bosnjak, and Kai Weyandt. 2015. “Survey Participation via Mobile Devices in a Probability-Based Online-Panel: Prevalence, Determinants, and Implications for Nonresponse.” Survey Practice 8 (1). https://doi.org/10.29115/SP-2015-0002.

Abstract

Based on (para)data from the probabilistic GESIS Online Panel Pilot (February 2011–May 2012), we investigated the use of mobile devices: their prevalence, the determinants of their use, and the implications for nonresponse. We found that prevalence lies in the range of 2.8–4.3 percent within individual survey waves. However, the share of panelists who used mobile devices at least occasionally across the eight waves considered is higher (about 8 percent). Apart from the well-known age effect (a higher propensity to use mobile devices among younger than among older panelists), no significant determinants of mobile device use were found. Finally, we replicate previous findings that the risk of dropping out is associated with the use of smartphones but not with the use of tablets.

Introduction and Research Questions

The widespread use of Internet-enabled mobile devices, such as smartphones and tablets, is impressive: 58 percent of U.S. adults own a smartphone and 42 percent have a tablet (Pew Research Center 2014). Mobile devices are spreading widely across Europe, too: in 2012, 40 percent of the German population were using a smartphone, and 15 percent a tablet (IAB Europe 2012). Therefore, self-administered online mobile surveys have become a reality and seem a promising mode for data collection. Related opportunities include the possibility to reach subjects virtually anywhere and at any time, to further accelerate data collection, and to facilitate contact with hard-to-reach individuals (Fuchs 2008; Toepoel and Lugtig 2013). Typical devices also allow rich data and paradata – video, audio, position – to be collected (Buskirk and Andrus 2012).

Several related methodological challenges have also emerged. Coverage is currently a problem, as many individuals still do not use these devices. Other challenges already affect current Web surveys, as a non-negligible share of respondents spontaneously answers through mobile devices even when this is not intended (Callegaro 2013). This might affect the quality of the data collected: the small input and output interfaces of these devices may increase measurement error when the questionnaire and its display are not optimized for them (Mavletova 2013). Higher nonresponse rates due to break-offs are also likely, because completing a Web questionnaire on a mobile device is more challenging and time-consuming (Callegaro 2010).

This study focuses on nonresponse issues and investigates factors influencing persistence in the use of mobile devices across the waves of a panel survey. Within this perspective, and using available data from the GESIS Online Panel Pilot (GOPP), we addressed three interrelated research questions. First, what is the prevalence rate of mobile devices in Web-based surveys that are designed for computer participation and administered to a probability-based sample?

Second, we examined to what extent the use of mobile devices is related to break-offs. Although the available evidence is partly equivocal (see Toepoel and Lugtig 2013 for an interesting exception), previous studies have found that the break-off rate is higher for mobile respondents than for those completing online questionnaires on a computer (Bosnjak et al. 2013; Buskirk and Andrus 2012; Callegaro 2010; de Bruijne and Wijnant 2013; Mavletova 2013; Peytchev and Hill 2010; SKOPOS 2013; Stapleton 2013). Wells, Justin, and Link (2013) have provided evidence that this effect is related to the use of smartphones but not of tablets. Do results from the GOPP corroborate or disconfirm these previous findings?

Third, using available longitudinal data, we explored the participation patterns and the propensity to continue using mobile devices across waves. Do mobile participants stick to this mode, even when they might have experienced the limitations of the small device input and output interfaces? Or do they switch to the mode of participation intended by the researcher? Can the propensity for persisting in the use of mobile devices be predicted on the basis of the social characteristics of respondents? The latter question aims to gain some insights into the profile of non-occasional mobile online respondents and, to the best of our knowledge, is completely new in the online survey methodology literature.

Data and Method

We carried out a secondary analysis of data collected in the GESIS Online Panel Pilot (GOPP), which is representative of the German-speaking adult population in Germany that uses the Internet for non-work-related purposes.[1] The GESIS Panel is a probability-based, mixed-mode panel recently developed in Germany as a research infrastructure for the social sciences.[2] As a pilot study for this project, the GOPP was carried out between February 2011 and May 2012. GOPP panelists were recruited via computer-assisted telephone interviews and invited to eight subsequent Web survey waves. The 1,010 participants who started the first Web survey formed the basis for our analysis.

We used the user agent string – collected by the Web survey server during the interviews – in order to distinguish the type of device used by respondents, as suggested by Callegaro (2010).
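As an illustration of this classification step, the minimal Python sketch below assigns a device type from a user agent string. The substring rules are simplified assumptions for illustration only; the actual coding followed Callegaro's (2010) recommendations applied to the user agent strings recorded by the GOPP survey server.

```python
def classify_device(user_agent: str) -> str:
    """Assign a coarse device type from a browser user agent string.

    The matching rules below are simplified, illustrative heuristics;
    they are not the exact coding scheme used in the GOPP analysis.
    """
    ua = user_agent.lower()
    # iPads and Android devices without the "mobile" token are usually tablets.
    if "ipad" in ua or ("android" in ua and "mobile" not in ua):
        return "tablet"
    # iPhones and other user agents carrying "mobile" are treated as smartphones.
    if "iphone" in ua or "mobile" in ua:
        return "smartphone"
    # Everything else is counted as a desktop or laptop computer.
    return "computer"


# Example with a hypothetical user agent string:
print(classify_device("Mozilla/5.0 (iPhone; CPU iPhone OS 5_1 like Mac OS X) Mobile/9B176"))
# -> smartphone
```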

In order to estimate the prevalence rate of mobile devices, we carried out a pooled cross-sectional analysis on all the respondents who participated in at least the first online wave (6,070 interviews started by 1,010 respondents). We estimated the distribution of the devices used when starting the surveys, across the waves.
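A minimal sketch of this pooled tabulation, assuming a long-format dataset with one row per started interview and illustrative column names (`wave`, `device`), might look as follows.

```python
import pandas as pd

# One row per started interview (person x wave); toy rows with illustrative
# column names, standing in for the 6,070 pooled GOPP interviews.
interviews = pd.DataFrame({
    "wave":   [1, 1, 1, 2, 2, 3],
    "device": ["computer", "smartphone", "computer", "tablet", "computer", "computer"],
})

# Percentage distribution of the devices used when starting the survey,
# by wave (the quantity reported in Table 1).
prevalence = (
    pd.crosstab(interviews["wave"], interviews["device"], normalize="index")
      .mul(100)
      .round(1)
)
print(prevalence)
```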

Our second research question, dealing with break-off rate and its association with the type of device, was addressed using the same dataset of interviews started. We estimated the distribution of break-offs across the three different types of device.
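The sketch below reproduces this cross-tabulation with counts approximately reconstructed from the percentages and bases in Table 2. Note that the plain Pearson chi-square computed here ignores the clustering of interviews within panelists; the paper instead reports a design-corrected F statistic.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Interview outcome by device, pooled over waves; counts are approximate
# reconstructions from the row percentages and bases reported in Table 2.
tab = pd.DataFrame(
    {"completed": [5671, 111, 83], "dropped_out": [193, 11, 1]},
    index=["computer", "smartphone", "tablet"],
)

# Row percentages (cf. Table 2).
print(tab.div(tab.sum(axis=1), axis=0).mul(100).round(1))

# Simple Pearson chi-square; the published test corrects for the survey design.
chi2, p, dof, _ = chi2_contingency(tab)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```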

The third research question, on the pattern of use of mobile devices and on the propensity to use these devices across the GOPP waves, was based on the subsample of respondents who participated in all eight panel survey waves (n=587). Because of their small number, smartphone and tablet users were jointly classified as mobile respondents in this step of the analysis.

We approached this third research question as follows: First, we identified usage patterns by examining the sequence of the type of device used across survey waves. This yielded four main patterns: (a) panelists who never used mobile devices to participate in survey waves and (b) panelists who almost always used mobile devices (minimum of 7 waves out of 8). Among the remaining panelists, we distinguished (c) the innovation-led participants, i.e., the ones who used these devices more often in the late waves and, conversely, (d) the ones who used mobile devices more often in the initial waves than in the late ones, suggesting that they came back to the more comfortable computer interfaces after having tried completion via mobile device. A residual fifth category was used where no consistent pattern was observed.
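A sketch of such a sequence-based classification is shown below. The specific cut-offs (at least 7 mobile waves out of 8; comparing mobile use in the first and second half of the panel to separate the "innovation-led" and "tried and back" groups) are illustrative assumptions, not the paper's exact operational rules.

```python
def classify_pattern(devices):
    """Classify one panelist's sequence of per-wave device labels.

    `devices` is a list of 8 strings ("computer", "smartphone", "tablet").
    The thresholds used here are illustrative assumptions only.
    """
    mobile = [d in ("smartphone", "tablet") for d in devices]
    n_mobile = sum(mobile)

    if n_mobile == 0:
        return "no use of mobile devices"
    if n_mobile >= 7:
        return "almost only mobile"

    # Compare mobile use in the first and second half of the eight waves.
    early, late = sum(mobile[:4]), sum(mobile[4:])
    if late > early:
        return "innovation-led"
    if early > late:
        return "tried mobile and back"
    return "other (no clear pattern)"


# Example: a panelist switching to a smartphone in the last two waves.
print(classify_pattern(["computer"] * 6 + ["smartphone"] * 2))  # -> innovation-led
```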

Second, we modelled the propensity to use a mobile device over the course of the pilot study, i.e., the number of times respondents used a mobile device in the eight waves, with a zero-inflated Poisson (ZIP) regression model (see Long 1997 for a discussion). This allowed us to statistically estimate both whether respondents own and use a mobile device in everyday life and their propensity to use it across the waves. The independent variables included in the model are gender, age, level of education, civil status, occupational status, and level of urbanization of the area of residence. A variable measuring attitude toward technology was also included.
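A minimal sketch of such a zero-inflated Poisson model, using the statsmodels implementation and synthetic placeholder data (the variable names below are illustrative, not the actual GOPP covariates), might look as follows.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Synthetic placeholder data: one row per respondent who completed all eight
# waves; the outcome is how often (0-8) a mobile device was used.
rng = np.random.default_rng(42)
n = 587
data = pd.DataFrame({
    "n_mobile_waves": rng.integers(0, 9, size=n),
    "age":            rng.integers(18, 80, size=n),
    "female":         rng.integers(0, 2, size=n),
    "tech_attitude":  rng.normal(0.0, 1.0, size=n),
})

# Design matrix with a constant; here the same covariates enter both parts.
exog = pd.DataFrame({
    "const": 1.0,
    "age": data["age"],
    "female": data["female"],
    "tech_attitude": data["tech_attitude"],
})

# The logit inflation part models never using a mobile device at all;
# the Poisson count part models how often one is used across the waves.
model = ZeroInflatedPoisson(data["n_mobile_waves"], exog,
                            exog_infl=exog, inflation="logit")
result = model.fit(maxiter=500, disp=False)
print(result.summary())
```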

Findings

The mobile device prevalence rate ranges between 2.8 percent and 4.3 percent in each wave (Table 1). The use of smartphones is more common than the use of tablets. A slight tendency toward an increasing use of mobile devices is also apparent, albeit not statistically significant.

Table 1  Type of device used when starting the survey, by wave.
Computer Smartphone Tablet
Wave 1 97.2% 1.8% 1.0% (n=1,010)
Wave 2 96.9% 1.8% 1.3% (n=795)
Wave 3 96.9% 1.8% 1.3% (n=762)
Wave 4 96.6% 1.9% 1.5% (n=746)
Wave 5 96.1% 2.4% 1.5% (n=712)
Wave 6 96.6% 1.8% 1.6% (n=685)
Wave 7 96.4% 2.3% 1.3% (n=689)
Wave 8 95.7% 2.5% 1.8% (n=671)
All waves 96.6% 2.0% 1.4% (n=6,070)

χ2 corrected for research design: F (9.81, 9896.02)=0.8350; p=0.59.

We consider these results to be consistent with previous studies from commercial panels (Bosnjak et al. 2013; SKOPOS 2013), which reported a higher rate (6 percent) of unintended mobile participants. It should be considered, however, that the GOPP probably provides a better representation of the general Internet population in Germany, and it appears reasonable to assume that more technologically oriented people are overrepresented in commercial opt-in panels.

Moving on to our second research question, mapping drop-out rates across devices, Table 2 shows the outcome of the interview by type of device used. The unit of analysis is the wave/person, as we pooled cross-sectional data from all the eight waves.

Table 2  Outcome of the interview, by device used when starting the survey (all waves).
Completed Dropped out
Computer 96.7% 3.3% (n=5,864)
Smartphone 91.0% 9.0% (n=122)
Tablet 98.8% 1.2% (n=84)
All devices 96.6% 3.4% (n=6,070)

χ2 corrected for research design: F (1.99, 2007.30)=5.7581; p=0.00.

A higher drop-out rate for respondents using a smartphone is apparent. On average, their likelihood of dropping out is almost three times that of respondents participating via PC or laptop. No significant differences were found for the use of a tablet. These differences remain statistically significant after controlling for the questionnaire submitted (not shown). The results are consistent with the afore-mentioned findings of Wells, Justin, and Link (2013), highlighting clear differences between smartphone and tablet use.

Our third research question focuses on the propensity to repeatedly use mobile devices along the different waves of the GOPP.

The pattern of use of mobile devices is shown in Table 3. Not surprisingly, the largest group, comprising the absolute majority (about 92 percent) of respondents, consists of panelists who took part in all eight waves using their computers.

Table 3  Pattern of use of mobile devices – Respondents to all the eight waves only.
Pattern of use %
No use of mobile devices 91.7
Almost only mobile users 2.0
Innovation-led users 2.2
Tried mobile and back 1.4
Other (no clear pattern) 2.7
All (n=587) 100.0

About 8 percent of the interviewees, however, used a mobile device at least once. Those who used almost only mobile devices make up about 2 percent of the interviewees, and innovation-led users account for a similar share. The group of respondents who used mobile devices more in the early waves accounts for slightly more than 1 percent of respondents. Finally, no clear pattern of use could be discerned for the remaining 2.6 percent of respondents who used a mobile device at least once.

As a second step, we addressed the propensity to use a mobile device by modelling, with a ZIP regression model, the number of times respondents used a mobile device in the eight waves. Beyond the well-known age effect, however, we did not find any systematic effects: everyday use of mobile devices is statistically estimated to lie within a confidence interval of 6 percent to 16 percent, and the older the respondent, the less likely he or she is to use mobile devices. No clear effects were found on the propensity to repeatedly use mobile devices when answering the GOPP questionnaires (full results from this part of the analysis are available at http://dx.doi.org/10.6084/m9.figshare.1249723).

Summary and Discussion

Mobile device prevalence in online panels is not negligible. Our estimates for the last waves of the GOPP are around 4 percent of respondents per wave. Higher rates were estimated for roughly the same period in German commercial opt-in panels. It is reasonable to assume that this type of response will soon increase further, mirroring the continuing spread of mobile devices. In our explorative study of respondents’ behavior across the eight waves, we also found evidence that potential, if occasional, unintended mobile response is even higher: 8 percent of the subjects used a mobile device at least once in the panel under examination.

Second, our study finds that answering the survey with a smartphone is clearly associated with a higher risk of breaking off. These findings are consistent with previous research in the field and provide further evidence that this problem is associated with smartphones only.

Finally, we carried out an explorative analysis of the pattern of response and of the propensity to use a mobile device across the eight waves. Our results show the relevance of mobile-only online respondents (2 percent). However, no clear differences based on personal characteristics could be detected in the propensity to repeatedly use mobile devices across waves, once one of these devices was owned. This certainly deserves further investigation.

Acknowledgment

The authors wish to thank Bella Struminskaya for her useful comments on a previous draft of this paper. The authors alone are responsible for the analysis presented.


  1. Please see Struminskaya et al. (2014) for an assessment. The GOPP study is archived under the following link: https://dbk.gesis.org/dbksearch/gdesc2.asp?no=0058.

  2. http://www.gesis-panel.org.

References

Bosnjak, M., T. Poggio, K.R. Becker, F. Funke, A. Wachenfeld, and B. Fischer. 2013. “Online Survey Participation via Mobile Devices.” In The 68th Annual Conference of the American Association for Public Opinion Research. Boston, MA.
Bruijne, M. de, and A. Wijnant. 2013. “Comparing Survey Results Obtained via Mobile Devices and Computers: An Experiment with a Mobile Web Survey on a Heterogeneous Group of Mobile Devices versus a Computer-Assisted Web Survey.” Social Science Computer Review 31 (4): 482–504.
Buskirk, T.D., and C. Andrus. 2012. “Smart Surveys for Smartphones: Exploring Various Approaches for Conducting Online Mobile Surveys via Smartphones.” Survey Practice 5 (1). http://www.surveypractice.org/index.php/SurveyPractice/article/view/63/.
Callegaro, M. 2010. “Do You Know Which Device Your Respondent Has Used to Take Your Online Survey?” Survey Practice 3 (6). http://www.surveypractice.org/index.php/SurveyPractice/article/view/250/.
———. 2013. “From Mixed-Mode to Multiple Devices.” International Journal of Market Research 55 (2): 317–20.
Fuchs, M. 2008. “Mobile Web Surveys: A Preliminary Discussion of Methodological Implications.” In Envisioning the Survey Interview of the Future, edited by F.G. Conrad and M.F. Schober. Hoboken, NJ: Wiley.
IAB Europe. 2012. “Consumer Barometer 2012.” http://www.consumerbarometer.com.
Long, J.S. 1997. Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage.
Mavletova, A. 2013. “Data Quality in PC and Mobile Web Surveys.” Social Science Computer Review 31 (6): 725–43.
Pew Research Center. 2014. “Key Indicators: Device Ownership.” http://www.pewresearch.org/data-trend/media-and-technology/device-ownership/.
Peytchev, A., and C.A. Hill. 2010. “Experiments in Mobile Web Survey Design: Similarities to Other Modes and Unique Considerations.” Social Science Computer Review 28 (3): 319–35.
SKOPOS. 2013. “Mobile Research Performance: How Mobile Respondents Differ from PC Users Concerning Interview Quality, Drop-out Rates and Sample Structure.” In The 15th General Online Research (GOR) Conference. Mannheim, Germany.
Stapleton, C.E. 2013. “The Smartphone Way to Collect Survey Data.” Survey Practice 6 (2). http://www.surveypractice.org/index.php/SurveyPractice/article/view/75.
Struminskaya, B., L. Kaczmirek, I. Schaurer, and W. Bandilla. 2014. “Assessing Representativeness of a Probability-Based Online Panel in Germany.” In Online Panel Research: A Data Quality Perspective, edited by M. Callegaro, R.P. Baker, J. Bethlehem, A.S. Göritz, J.A. Krosnick, and P.J. Lavrakas. New York: Wiley.
Toepoel, V., and P. Lugtig. 2013. “Mobile Devices a Way to Recruit Hard-to-Reach Groups? Results from a Pilot Study Comparing Desktop and Mobile Device Surveys.” In The 5th Conference of the European Survey Research Association. Ljubljana, Slovenia.
Wells, T., B. Justin, and M. Link. 2013. “Filling the Void: Gaining a Better Understanding of Tablet-Based Surveys.” Survey Practice 6 (1). http://www.surveypractice.org/index.php/SurveyPractice/article/view/25/.
