Survey Practice February 2011
The February issue of Survey Practice covers topics such as web survey design, bias in robo-polls, incentives, multiple-mode surveys, and cell phone survey procedures. All are topics that survey and public opinion researchers are currently discussing. We hope this issue contributes to the discussion.
In the April issue, we will have the annual list of recent books in public opinion, survey research, and survey statistics. Mario Callegaro has agreed to compile the list again this year. Please send Mario (firstname.lastname@example.org) information on any books on these topics published in the past year and any books that will be published soon. (Click 2009 and 2010 to see the previous lists.)
Mick Couper and his colleagues conducted research on the placement and design of navigation buttons and found that the design can affect the time required to complete a questionnaire. The good news is that the various designs did not appear to affect abandonments.
The increased use of cell phones has changed survey research and will continue to do so for some time. One evolving issue is the use of incentives to compensate respondents for the charges they incur by participating. Bob Oldendick and Dennis Lambries conducted two surveys that randomly assigned cell phone respondents to incentive and non-incentive conditions. They found few differences in survey outcomes between the conditions.
Tami Aritomi and Jason Hill describe research conducted with teachers to determine the long-term mode effects of switching from a paper-only survey to a web survey. They found that trends in a long-term longitudinal study were not affected by the transition.
Jan van Lohuizen and Robert Samohyl look at the differences in responses between live-operator surveys, Internet surveys, and robo-polls. They found that on presidential approval questions, the live-operator and Internet surveys produced similar results, while the robo-polls produced higher disapproval ratings and fewer “no opinion” responses. They attribute the difference to non-response bias resulting from the low participation rates in robo-polls.
Luciano Viera and his colleagues examined the differences between a list-assisted postal survey and a random digit dial survey of young adults. They found that the mail administration appeared to be more accurate, cost-effective, and efficient than the RDD survey.
Steven Coughlin and his colleagues report on an experiment using incentives as part of a survey of recent veterans. They found that incentives increased responses. However, the incentive groups often differed from the non-incentive groups on selected demographic characteristics.
As always, we welcome your comments on Survey Practice.
- John Kennedy
- Andy Peytchev
- David Moore
- Diane O’Rourke