Research Background
A few years ago, some respondents started answering web surveys through mobile devices, in particular tablets and smartphones, even though this was not planned by researchers and fieldwork companies. This phenomenon has been called the “unintended mobile response” (de Bruijne and Wijnant 2014; Peterson 2012; Wells, Bailey, and Link 2013). It has grown so quickly in the last couple of years, in many different countries (Callegaro 2010; Revilla et al. 2016), that it has become non-negligible. For example, in Spain, the percentage of smartphone respondents across all surveys in the online access panel Netquest increased in just one year from 7.7 percent (1 January–31 March 2013) to 12.0 percent (1 January–31 March 2014), and has now reached 21.2 percent (1 January–15 March 2016).
However, personal computers (PCs) and mobile devices have different characteristics. In particular, mobile devices have different kinds of screens (mainly touch screens and, especially for smartphones, much smaller screens) and keyboards (virtual most of the time) than PCs. This may lead to lower visibility. It may also require more effort from respondents both to read and to answer the questions (zooming, scrolling, etc.). As a consequence, it can increase satisficing (i.e., the tendency not to put maximum effort into answering the questions) and measurement errors (see e.g., McClain and Crawford 2013 or Stapleton 2013).
In addition, mobile devices offer greater portability. Thus, we expected respondents to use them to complete surveys from any place (bus, metro, streets, bars, etc.). This may more frequently lead to the presence of bystanders (see e.g., Mavletova and Couper 2013), and therefore to a higher social desirability bias than when using PCs, even if past evidence does not always support this hypothesis (see e.g., Mavletova 2013). Besides, the greater portability could also increase multitasking, interruptions, and respondent distraction, which could in turn lead to larger measurement errors.
Overall, these differences in the devices’ characteristics can affect both the comparability of PC and mobile-device answers and the quality of web survey results. Thus, many researchers have started focusing on this topic. Many studies compared results when the survey is answered on PCs or on mobile devices: for example, Peytchev and Hill (2010) found no effect of the orientation of the scale on the answers’ distributions, but they did find some context effects. Toepoel and Lugtig (2014) studied break-offs, item nonresponse, completion time, characters typed in open-ended questions, and the number of responses in a check-all-that-apply question, and found no differences between mobile and PC respondents. Some studies also compared different layouts of the survey on both devices: for instance, de Bruijne and Wijnant (2013) compared a regular and a mobile web layout. They found similar response rates, almost no break-offs, and similar substantive answers, but also slightly longer completion times and lower respondent satisfaction for the mobile layout.
A Cross-over Experiment Comparing PCs and Smartphones
In February–March 2015, we implemented a two-wave cross-over experiment in Spain, inspired by the one developed by Mavletova and Couper (2013) in Russia. It focuses on sensitive topics (e.g., alcohol consumption, deviant behaviours). In this experiment, panelists from the Netquest opt-in panel (www.netquest.com) were invited to participate twice in the same survey. Only panelists who had access to both a PC and a smartphone were eligible. In each wave, these panelists were randomly assigned to one of the following conditions: PC, smartphone optimized (SO; the layout is automatically adapted to the screen size), or smartphone nonoptimized (SNO; the layout is similar to the PC one; horizontal scrolling and zooming are necessary most of the time). In this way, we obtained nine groups: three control groups (i.e., panelists assigned to the same condition in both waves) and six treatment groups (i.e., panelists assigned to a different condition in each wave). In total, 1,800 respondents completed the first wave’s questionnaire (200 respondents per group), and 1,608 of them finished the second wave (between 165 and 188 respondents per group). Panelists were required to complete the survey using the assigned device.
This experimental design allows us to study the effects of the device and of the optimization on different indicators, both between subjects (across split-ballot groups) and within subjects (across waves).
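To make the design concrete, the following minimal Python sketch shows one possible way of allocating eligible panelists to the nine wave-1 × wave-2 condition pairs. The function name and the simple cyclic allocation are purely illustrative assumptions; they do not reproduce the actual assignment procedure used in the panel.

```python
import random

# The three survey conditions used in each wave.
CONDITIONS = ["PC", "SO", "SNO"]  # PC, smartphone optimized, smartphone nonoptimized

def assign_crossover_groups(panelist_ids, seed=2015):
    """Randomly allocate eligible panelists to the nine (wave 1, wave 2)
    condition pairs: three control groups (same condition twice) and six
    treatment groups (different conditions across waves)."""
    pairs = [(w1, w2) for w1 in CONDITIONS for w2 in CONDITIONS]  # 9 pairs
    ids = list(panelist_ids)
    random.Random(seed).shuffle(ids)
    # Cycle through the nine pairs so that the groups end up equal in size.
    return {pid: pairs[i % len(pairs)] for i, pid in enumerate(ids)}

# Example: 1,800 eligible panelists -> 200 panelists per group.
assignment = assign_crossover_groups(range(1800))
print(assignment[0])  # e.g., ('PC', 'SNO'): PC in wave 1, nonoptimized smartphone in wave 2
```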
Cross quotas for age and gender were used to guarantee that the sample distribution for these variables was similar to the one observed in the panel.
The questionnaires proposed to the respondents in each group are available at the following links: http://goo.gl/g9gAE4 (for PC); http://goo.gl/5jF2vr (for SO); and http://goo.gl/4c9d1C (for SNO).
The 10 Main Findings
In this section, we synthesize very concisely the 10 main findings of this experiment. For further details about the experiment and/or the results, we refer to Revilla and Ochoa (2015), Toninelli and Revilla (2016), and Revilla, Toninelli, and Ochoa (2017). The main findings are the following:
- Even though smartphones are highly portable, the large majority of smartphone respondents participated in the survey from home (77.1 percent in wave 1 and 81.7 percent in wave 2).
- The presence of third parties is significantly higher for smartphone participants than for PC ones (27.0 percent vs. 19.8 percent in wave 1, p=0.00; 29.4 percent vs. 16.8 percent in wave 2, p=0.00). However, the perceived privacy and the perceived sensitivity of the questions are similar for smartphone and PC respondents.
- No significant effect of the device was found on the reporting of sensitive information for the four sensitive indexes tested (using linear mixed models; a sketch of this type of model follows the list). This does not support the idea of a higher social desirability bias for smartphone respondents.
- When quality is measured by an Instructional Manipulation Check (IMC),[1] it is significantly lower in the SNO condition than in the SO and PC conditions: in wave 1, 81.6 percent of respondents properly followed the instruction in the SNO condition vs. 88.8 percent in the SO condition (p=0.00) and 89.0 percent in the PC condition (p=0.00). In wave 2, these proportions are, respectively, 76.7 percent, 89.2 percent (p=0.00), and 84.5 percent (p=0.00).
- In one grid, nondifferentiation (measured by the average variance of the answers; a computational sketch follows the list) is higher for smartphones, but this depends on the questions studied.
- In open questions, there are no differences in the percentages of item nonresponse, nonsense answers, and “don’t know” answers. However, the number of characters typed is significantly lower for smartphones. Applying a linear mixed model to explain the number of characters typed, we found, depending on the open question, significant coefficients (in all cases p=0.00) between 10.9 and 22.4 for the PC condition vs. the SNO condition.
- For order-by-click questions, the option ranked in the first position does not change across conditions, but the following positions vary slightly. Among smartphone respondents, there are also fewer who selected the number of options required in the instructions (between 6.9 percent and 28.2 percent fewer, in wave 1).
- Significantly longer median completion times are observed for smartphone respondents for different types of question formats (grids, open questions, and order-by-click questions). In some cases, there is a significant difference in completion times between the SO and SNO groups (e.g., longer completion times for open questions when the survey is not optimized). However, this pattern is not systematic and does not always go in the same direction.
- Significantly more respondents in the SNO group (compared to the SO group) use the smartphone in landscape view: 34.6 percent vs. 9.9 percent in wave 1 (p=0.00) and 28.0 percent vs. 11.6 percent in wave 2 (p=0.00). This suggests that the optimization efficiently reduces the need to switch the smartphone orientation.
- The way the questionnaire is optimized for smaller screens is not always optimal in terms of data quality. For instance, if we measure data quality through the primacy effect in order-by-click questions, the nonoptimized version performs better than the optimized one. It seems that, sometimes, improving usability on the device does not increase data quality; in some cases, quality can even be reduced (e.g., to avoid horizontal scrolling, longer lists are displayed vertically, which can generate a stronger primacy effect).
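As a rough illustration of the linear mixed models mentioned in the findings (here applied to the number of characters typed), the following sketch assumes the data are stacked in long format with one row per respondent per wave. The file name and the column names (chars_typed, condition, wave, respondent_id) are hypothetical and only stand in for the actual variables of the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per respondent per wave (file and column names are assumed).
df = pd.read_csv("crossover_long.csv")

# Fixed effects for the experimental condition (SNO as reference category) and the wave,
# with a random intercept per respondent to account for the repeated measurements
# of the cross-over design.
model = smf.mixedlm(
    "chars_typed ~ C(condition, Treatment(reference='SNO')) + C(wave)",
    data=df,
    groups=df["respondent_id"],
)
result = model.fit()
print(result.summary())  # coefficients for PC and SO relative to the SNO baseline
```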
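Similarly, the nondifferentiation measure for a grid can be sketched as the variance of each respondent’s answers across the grid items, averaged over respondents (identical answers to every item yield a variance of zero). The function and the example answers below are purely illustrative.

```python
import numpy as np

def average_answer_variance(grid_answers):
    """grid_answers: one row per respondent, one column per grid item."""
    answers = np.asarray(grid_answers, dtype=float)
    # Variance across the grid items for each respondent, averaged over respondents.
    return answers.var(axis=1).mean()

# Purely illustrative answers to a five-item grid on a 1-5 scale.
group_a = [[1, 3, 5, 2, 4], [2, 2, 4, 5, 1], [1, 5, 3, 3, 2]]
group_b = [[3, 3, 3, 3, 3], [2, 3, 2, 3, 2], [4, 4, 4, 4, 4]]
print(average_answer_variance(group_a), average_answer_variance(group_b))
```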
Conclusions
We expected discrepancies between PC and smartphone respondents on some indicators (e.g., quality indicators, completion times) and for various question formats (grids, open questions, order-by-click, and so on), mainly because of the differences in levels of portability and kinds of screens and keyboards. In order to test this, we implemented a two-wave cross-over experiment in Spain, using the Netquest panel. We did not find support for all the hypothesized differences across devices. For instance, even though smartphones are more portable, most respondents still answer from home when using them. Nevertheless, we observed significant differences across devices for several quality indicators (e.g., significantly more characters typed in open questions in the PC condition, or significantly lower percentages of respondents correctly following the IMC in the SNO condition).
More research is needed to test the robustness of the results and to further study the mechanisms behind some of the observed differences. However, at this point, our recommendations for web survey designers are the following:
- Always take into account the devices used by respondents to complete the survey: the device can affect the answering process and thus the collected data.
- Always carefully check how your survey looks on the different devices that may be used by respondents.
- Be careful about the survey optimization for smartphones: for some questions, the optimized layout negatively affects the quality of the collected data. Besides, the optimized version may look different on different smartphones (e.g., on iOS vs. Android devices). Thus, it is crucial to test the survey on different kinds of smartphones too.
- In order to obtain higher comparability of data across devices, we recommend adapting the PC version as well as the mobile version. Keeping the layout for PCs as it used to be before the appearance of mobile devices is not optimal. For instance, it is better not only for smartphones but also for PCs to avoid grids with many items and many option categories. This does not mean that the PC and smartphone layouts need to be exactly the same: we recommend looking for a balance between quality on each device and comparability across devices.
[1] An IMC “consists of a question embedded within the experimental materials that asks participants (…) to provide a confirmation that they have read the instruction” (Oppenheimer, Meyvis, and Davidenko 2009, 867). Note that in this study, the IMC was included within a set of questions presented in a grid in the PC and SNO versions, but as item-by-item questions in the SO version. The instruction was the following: “To confirm that you are reading this text, please do not select any answer but click here”.