This study presents the results of an investigation into the characteristics of respondents who use different platforms to complete web-based surveys. To date, limited research has evaluated differences between surveys completed on different devices (Couper 2013). This work therefore presents an analysis of two nationally representative panel data sets compiled from online surveys administered between December 2012 and February 2013.
The increased use of smartphones and tablets to go online has propelled web-based surveys to the forefront of public opinion research. Scholars initially drew comparisons between web-based surveys and other survey modes (e.g., face-to-face, telephone) to understand the implications of adopting online survey tools. Given the panoply of existing studies with often diverging conclusions, further research is clearly needed to better understand the effect of survey mode on completion rates and data quality.
Given the rapidly growing penetration and usage of mobile devices, such as smartphones, netbooks, and tablets, research investigating surveys administered on mobile devices is of increasing interest in the field (Buskirk and Andres 2013). Data indicate that a growing share of survey participants now completes surveys using mobile devices (e.g., smartphones and tablets). Almost 60 percent of adult Americans choose cellphones or laptops to go online (Smith 2010). This creates both opportunities and challenges for conducting self-administered surveys more efficiently (Peytchev and Hill 2010).
Empirical evidence indicates that surveys completed on mobile devices and personal computers elicit similar response quality, although mobile completions exhibit lower response rates and longer completion times (de Bruijne and Wijnant 2013; Guidry 2012). Additionally, less than 1 percent of respondents complete web-based surveys on a tablet (Guidry 2012; McClain, Crawford, and Dugan 2012). Surveys completed on smartphones have a higher proportion of breakoffs than those completed on tablets (Guidry 2012), and both smartphones and tablets show higher breakoff rates than personal computers.
To contribute to the existing discussion on such a pressing issue in survey research and to provide a clearer picture of potential differences among respondents who utilize different devices to complete surveys, the present study focused on three overarching research questions (RQs):
- RQ1: Do participants vary by age in their choice of device used to complete web-based surveys?
- RQ2: Are there significant differences by gender in the device used to complete web-based surveys?
- RQ3: Are there significant differences by education level in the device used to complete web-based surveys?
Two web-based surveys, partially funded by university grants, were administered to nationally representative panels in the United States between December 2012 and February 2013. Each survey focused on different outcome variables: Study 1 pertained directly to the processing of political information, while Study 2 focused on the use of social media to voice an opinion.
Sample and Procedure
Study 1. Supported by a university grant, a national panel was recruited by The Sample Network (Cherry Hill, NJ, USA), a private sample company. Each participant received nominal compensation ($3.00) for completing the survey. An initial total of 550 questionnaires were completed; eliminating incomplete and partial responses reduced the sample size to 487. The pool of respondents, representative of the US population, was 48.5 percent female and 51.3 percent male (0.2 percent preferred not to answer), with a mean age of 48 (SD=14.07). The survey was fielded for one week, from December 5 to December 12, 2012.
Study 2. Participants were recruited by Toluna (Wilton, CT, USA), a professional survey company contracted to collect a sample of US adults. Potential respondents were contacted by the company and asked to participate voluntarily in exchange for credit in the company's internal reward system. A total of 1,871 people responded to the survey solicitation; after eliminating incomplete questionnaires, the final data set comprised 1,046 responses. The pool of respondents was 50.8 percent female and 49.2 percent male, with a mean age of 44 (SD=15.81). The survey was fielded for one week, from February 19 to February 26, 2013.
Measures. Participants in both panels were asked to report how they responded to the survey, selecting among desktop computer, laptop, tablet, and smartphone. Age was collected through an open-ended question, and education level required participants to select the highest degree obtained (some high school, high school/GED, some college, a 2-year degree, a 4-year degree, some graduate school, a graduate degree). Participants also indicated their gender by selecting male, female, or prefer not to answer (see Table 1).
RQ1 focused on potential age differences in the device used to complete a survey. Study 1 indicated a significant main effect of age on device, F(3, 483)=4.13, p<0.01. A Student-Newman-Keuls (SNK) post-hoc analysis showed that respondents who completed the survey on a smartphone (M=37.55, SD=12.40) were significantly younger than those who used a desktop computer (M=49.50, SD=14.30). No significant age differences emerged among respondents using the other devices.
Study 2 also revealed a significant main effect of age on device used, F(3, 1042)=13.10, p<0.001. An SNK post-hoc analysis showed that respondents who completed the survey using a smartphone (M=37.68, SD=12.59) were significantly younger than those who completed the survey on a desktop (M=47.70, SD=15.91). No significant age differences emerged among respondents using the other devices.
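The age-by-device analysis reported in both studies can be reproduced in outline with standard one-way ANOVA tooling. The sketch below uses SciPy on simulated ages; all group sizes and distributions are hypothetical, not the study's data, and Tukey's HSD stands in for the SNK procedure, which is not available in SciPy.

```python
# One-way ANOVA of age across device groups, with a pairwise post-hoc test.
# Data are simulated for illustration; Tukey's HSD substitutes for SNK.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical respondent ages by device used to complete the survey.
desktop    = rng.normal(48, 15, 300)
laptop     = rng.normal(45, 15, 120)
tablet     = rng.normal(43, 14, 30)
smartphone = rng.normal(38, 12, 37)

# Main effect: does mean age differ across the four device groups?
f_stat, p_value = stats.f_oneway(desktop, laptop, tablet, smartphone)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post-hoc comparisons (Tukey HSD).
posthoc = stats.tukey_hsd(desktop, laptop, tablet, smartphone)
print(posthoc)
```

With group means set as above, the smartphone-versus-desktop contrast dominates the post-hoc table, mirroring the pattern both studies report.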
RQ2 pertained directly to differences in device selection by gender. Using a crosstabs analysis, Study 1 found no significant differences in the selection of a device by gender, X2(3, N=486)=3.87, p=0.27. Conversely, Study 2 found that device use did significantly differ depending on gender, X2(3, N=1,045)=24.44, p<0.001. Males were more likely to use a desktop (57.4 percent), whereas females were more likely to use laptops (55.3 percent), smartphones (66.7 percent), and tablets (72.7 percent).
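A crosstabs test of this kind reduces to a chi-square test of independence on a 2x4 gender-by-device contingency table. The sketch below runs it with scipy.stats.chi2_contingency on hypothetical cell counts, not the study's actual data.

```python
# Chi-square test of independence between gender and device used.
# Cell counts are illustrative only.
from scipy.stats import chi2_contingency

# Rows: male, female; columns: desktop, laptop, tablet, smartphone.
observed = [
    [310, 130, 15, 20],   # male (hypothetical counts)
    [230, 160, 40, 40],   # female (hypothetical counts)
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```

The degrees of freedom, (rows-1)x(columns-1)=3, match the X2(3, ...) statistics reported above; the `expected` array gives the counts implied by independence for inspecting which cells drive the result.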
RQ3 focused on differences in device selection by education level. Study 1 showed no significant difference, F(3, 483)=0.50, p=0.70. Study 2 revealed a similar pattern, with no significant main effect of device used on level of education, F(3, 1041)=0.56, p=0.64. (See Table 2 for a summary of age differences.)
In this analysis of two national panels, differences surfaced between survey respondents who participate using different devices. Such results provide valuable insights for practitioners eager to advance web-based surveys and understand specific patterns of participation.
Recent reports reveal that younger populations are twice as likely to use a smartphone to access the Internet (Brenner 2013). Moreover, the majority of tablet owners are 18 to 49 years of age (Pew 2012), a trend discerned in both panels reviewed here. These results suggest that when younger participants are of interest to researchers, smartphone compatibility should be a primary concern during the survey design phase. This finding is also relevant for public opinion scholars eager to elucidate questions associated with screen sizes, visual features, and item-missing data. Closer inquiry into the survey-methodological issues associated with smartphones may, in fact, require an experimental design.
Although both studies uncovered a similar pattern of device usage by gender, the absence of significant differences in Study 1 points to the plausible influence of sample size. The contrast between Study 1, with nearly 500 respondents, and Study 2, with more than 1,000, underscores the importance of larger samples when examining the relationship between device selection and survey completion. The discussion over sample sizes is hardly new in public opinion scholarship; nonetheless, this finding reinforces the need to consider this facet carefully when reviewing survey behaviors on multiple devices.
Finally, the present analysis found no significant differences by level of education, as all participants reported at least some college experience. This information opens the discussion of the relationship between socioeconomic level and device usage. Importantly, if this finding were replicated in additional studies, it would give public opinion scholars a better sense of which outcome variables may be more pertinent to test in web-based surveys.
Limitations and Further Research
The present analysis examined the plausible influences and differences that may surface from taking surveys on a smartphone, tablet, laptop, or desktop computer.
Several limitations must be documented. The first stems from the difference in sample sizes: while both studies relied on national panels, the disparity in sample sizes may be a point of contention. The conclusions presented here also need to be placed in the overall context of low tablet usage. As noted, less than 1 percent of survey respondents completed surveys on such devices, a pattern equally observed in this study (Guidry 2012; McClain, Crawford, and Dugan 2012). Further inquiries consequently need to delineate more accurately the survey behaviors associated with each device. In summary, additional examinations must continue building a research agenda on differences between surveys taken on different devices to advance scholarship.