Vol. 10, Issue 2, 2017 - February 28, 2017

Using Tablet Computers to Implement Surveys in Challenging Environments

Lindsay J Benstead, Kristen Kao, Pierre F Landry, Ellen M Lust, Dhafer Malouche

Keywords: local governance, sub-Saharan Africa, Middle East and North Africa, tablet computers, challenging survey environments, computer-assisted personal interviewing (CAPI)
https://doi.org/10.29115/SP-2017-0009
Survey Practice
Benstead, Lindsay J, Kristen Kao, Pierre F Landry, Ellen M Lust, and Dhafer Malouche. 2017. “Using Tablet Computers to Implement Surveys in Challenging Environments.” Survey Practice 10 (2). https://doi.org/10.29115/SP-2017-0009.

Abstract

Computer-assisted personal interviewing (CAPI) has increasingly been used in developing countries, but literature and training on best practices have not kept pace. Drawing on our experiences using CAPI to implement the Local Governance Performance Index (LGPI) in Tunisia and Malawi and an election study in Jordan, this paper makes practical recommendations for mitigating challenges and leveraging CAPI’s benefits to obtain high-quality data. CAPI offers several advantages. Tablets facilitate complex skip patterns and randomization of long question batteries and survey experiments, which helps to reduce measurement error. Tablets’ global positioning system (GPS) technology reduces sampling error by locating sampling units and facilitating analysis of neighborhood effects. Immediate data uploading, time-stamps for individual questions, and interview duration capture allow real-time data quality checks and interviewer monitoring. Yet CAPI entails challenges, including the costs of learning new software, programming the questionnaire, and piloting to resolve coding bugs, as well as ethical and logistical considerations such as electricity and Internet connectivity.

Introduction

Computer-assisted personal interviewing (CAPI) using smartphones, laptops, and tablets has long been standard in western countries and, since the 1990s, has increasingly been used in developing countries as well (Bethlehem 2009, 156–60).

Unlike standard paper and pencil interviewing, where the interviewer records responses on paper and manually codes them into a computer, CAPI allows the interviewer to record answers directly onto a digital device. When used skillfully, CAPI reduces coding errors, particularly those arising from skip patterns, and decreases the time needed to produce results. Yet, challenges must be navigated to leverage CAPI’s benefits and avoid mistakes.

The Program on Governance and Local Development (GLD) used tablets to administer the Local Governance Performance Index (LGPI) in Tunisia and Malawi and an electoral study in Jordan - three challenging contexts with poor infrastructure.[1] CAPI offered advantages vis-à-vis earlier surveys conducted by GLD team members in Morocco, Algeria, Tunisia, Libya, Egypt, and China. CAPI facilitated a long, complicated questionnaire and random assignment of modules, greatly reducing implementation errors. Randomized response options eliminated primacy and recency effects, while CAPI enforced rules, such as forcing a response entry before continuing, which decreased item nonresponse. The ability to program the Kish table to automatically select a participant greatly eased respondent selection and possibly reduced sampling error (Kish 1965). Daily data uploading permitted quick identification of implementation, sampling, randomization, and interviewer compliance problems. It also eliminated data entry, which is costly and error-prone. However, to take advantage of CAPI’s benefits, researchers must recognize a number of pitfalls and implement practices to avoid them.
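
To make the automated respondent selection concrete, the sketch below is a minimal toy version of Kish-style selection in Python. The selection tables and field names are our own illustrative stand-ins; this is neither STG’s implementation nor the full set of published Kish (1965) tables.

```python
def kish_select(household_members, table_number):
    """Select one adult from a household roster using a simplified Kish
    procedure: order eligible members consistently (men before women,
    oldest first within sex), then apply a pre-assigned selection table
    mapping household size to the position of the chosen member.
    The tables below are illustrative stand-ins only."""
    selection_tables = [
        {1: 1, 2: 1, 3: 2, 4: 3, 5: 4, 6: 5},
        {1: 1, 2: 2, 3: 1, 4: 2, 5: 3, 6: 6},
        {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 1},
    ]
    # Men first, then women; oldest first within each sex.
    roster = sorted(household_members, key=lambda m: (m["sex"] != "male", -m["age"]))
    table = selection_tables[table_number % len(selection_tables)]
    size = min(len(roster), max(table))  # tables cover up to 6 members
    return roster[table[size] - 1]

# Example: a three-adult household assigned selection table 0.
household = [
    {"name": "A", "sex": "female", "age": 34},
    {"name": "B", "sex": "male", "age": 61},
    {"name": "C", "sex": "male", "age": 28},
]
print(kish_select(household, 0)["name"])  # size 3 -> position 2 -> "C"
```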

Literature Review

Existing literature offers mixed evidence with regard to CAPI’s impact on survey estimates and data quality. Caeyers, Chalmers, and De Weerdt (2012) find, for instance, that CAPI eliminates mistakes arising from skip patterns and impacts estimates of key variables. While respondents and interviewers favor CAPI (Couper and Burt 1994), respondent participation and attrition are not affected by it. At the same time, CAPI’s impact on missing data is unclear and context dependent. Watson and Wilkins (2015) find that CAPI increases interview length.

Existing literature examining how CAPI affects survey quality has several limitations. First, the literature is small and should be expanded to account for how CAPI’s effects on survey quality vary across survey types, hardware and software, and other factors. Second, little research focuses on how CAPI affects data quality or the respondent-interviewer experience and interactions in developing countries, which may differ from developed countries in ways that have not yet been fully explored (e.g., Seligson and Moreno Morales 2016). For example, rates of item-missing data for some items in developing countries (Benstead 2017) might be reduced by using CAPI. Third, with few exceptions, textbooks do not offer concrete tips on how to implement CAPI (Caviglia-Harris et al. 2011; Groves et al. 2009).

This article provides guidance on choices and considerations made while implementing surveys in Tunisia, Malawi, and Jordan. It proceeds by detailing the survey process, assessing tablets’ effects on implementation and data quality, and offering recommendations for using CAPI effectively. Finally, it considers ethical issues raised by CAPI.

The Tablets and Software

Using CAPI begins with selecting software based on script capabilities and cost. A website such as this one - http://www.capterra.com/survey-software/ - lists software options and can be a helpful place to begin. We selected SurveyToGo (STG 2016) because it has a UTF/Unicode compliant interface, allowing for multilingual and multiscript questionnaires, including Arabic. STG charges by the number of observations recorded (i.e., responses to any given question).

STG has other advantages as well. It allows branching, skipping, looping, validation, piping, and randomizing questions, answers, and chapters. Its desktop emulator (Figure 1) lets researchers see how the questionnaire will appear, although some features, such as randomization, do not work in it. Since the emulator is hosted online, the team can test the questionnaire from anywhere, but only one person can edit it at a time.

Figure 1   The survey is designed and monitored using the STG interface.

Once the survey is developed in STG, it is downloaded onto tablets. If changes are needed later (e.g., errors discovered during interviewer training), researchers must download the updated version onto the tablets. This takes time and is difficult if the tablets are already in the field, underscoring the need for quality pretesting and piloting.

Considerations when choosing tablet hardware include cost, screen size, and battery capacity. Based on overall value (quality and a cost of $250 per tablet) and the fact that STG runs on the Android platform and does not work with the iPad, we selected the ASUS Memo Pad 8, which has the Android 4.4 operating system, an 8-inch 1280 × 800 display, a 1.86 GHz processor, and 1 GB of RAM (Figure 2). STG works on a smartphone, but researchers should consider the amount of space each question (including its response options) requires to be legible and choose a larger device as needed. Purchase costs can be spread across studies, making the investment in a set of tablets worthwhile.

Figure 2   ASUS Memo Pad 8 and fielding in Malawi. Photos: Kristen Kao.

General Infrastructure

We provided a tablet to each interviewer, with one backup per supervisor. To reduce the risk of interviewers breaking or not returning tablets, we recruited interviewers through trusted networks and included a statement in contracts about returning the tablet in good condition. Despite precautions, some tablets were broken.

Unless tablets are purchased locally, researchers must consider importation procedures. In Jordan and Malawi, we temporarily imported tablets for the duration of the survey without problems. In Tunisia, tablets were not allowed into the country without paying import duties, and the survey had to be delayed. Thus, when possible, purchasing tablets in country is best. Tablet costs can be included in the survey research organization’s quote, or grant applications can include a line item for inexpensive tablets purchased in country and left with the survey organization as part of payment.

Questionnaire Development and Programming

CAPI requires some changes in questionnaire design relative to standard paper-and-pencil surveys. In Tunisia, we finalized the questionnaire on paper and later programmed it into STG. Doing so, however, required reconfiguring several questionnaire sections; for instance, coding proper randomization of batteries in STG requires sections to be ordered very differently than they appear on paper. We carefully considered module order in light of randomized survey experiments.

Programming mistakes are easily made. Coding randomized experiments is more difficult, especially if an experiment is embedded within another randomized question module, as was the case in Tunisia’s long and Malawi’s multilingual questionnaires. Test your questionnaire in the emulator for each skip pattern to ensure it is working correctly.
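
Because randomization does not run in the emulator, one practical workaround is to reproduce the intended randomization logic outside the survey software and simulate it at scale before fielding. A sketch of the idea in Python; the function and seed are hypothetical stand-ins, not part of STG:

```python
import random
from collections import Counter

def assign_module_order(respondent_id, modules, seed="LGPI-2017"):
    """Reproduce an intended module randomization outside the survey
    software so its balance can be verified before fielding. Seeding
    on the respondent ID makes every assignment reproducible for
    later forensics. A pre-field check of our own, not an STG feature."""
    rng = random.Random(f"{seed}:{respondent_id}")
    order = list(modules)
    rng.shuffle(order)
    return order

# Simulate 10,000 respondents and confirm each module comes first
# roughly one-third of the time.
first = Counter(assign_module_order(i, ["A", "B", "C"])[0] for i in range(10_000))
print(first)  # expect counts near 3,333 each
```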

The learning curve in implementing CAPI surveys is also steep. Implementers must master the relevant coding language - STG requires knowledge of the C# programming language - and the mobile survey software interface. Budget time for learning the software’s capabilities and limitations. For instance, STG saves response code templates so they can be copied for subsequent questions (e.g., no = 0, yes = 1).

Recommendations

We recommend that research teams prepare the survey by programming the CAPI interface directly and regularly downloading sample data files into Excel to verify the output. The more of the survey that is programmed directly, rather than transcribed from a paper instrument, the lower the error rate will be.

In STG, question variable names and response categories are automatically ordered numerically, which can lead to confusing variable names and reordered answer categories. After programming the survey into STG, manually check all question coding or use STG’s automatic recoding feature. Be strategic about where interviewers can go back in the survey: they should not be able to change answers at important points of randomization (e.g., the Kish table, experimental group assignment), and can instead be given a section in which to note mistakes.

For complex surveys, think about how earlier randomization procedures affect later ones and map out the process before fielding the survey. Double-check all coding, making sure each branch is set to filter to the proper question.
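
One way to map out the process and double-check branches mechanically is to express the questionnaire’s branching as a simple data structure and validate it before fielding. The sketch below is our own pre-field check, assuming the skips have been exported into a dictionary of question-to-target mappings; it is not an STG feature.

```python
def validate_skip_map(skip_map, start="Q1"):
    """Sanity-check questionnaire branching before fielding.
    `skip_map` maps each question to {answer_code: next_question};
    None marks the end of the interview. Returns branches pointing
    at nonexistent questions and questions no branch can reach."""
    missing = set()
    for question, branches in skip_map.items():
        for target in branches.values():
            if target is not None and target not in skip_map:
                missing.add((question, target))
    # Walk every branch from the start to find unreachable questions.
    seen, stack = set(), [start]
    while stack:
        q = stack.pop()
        if q in seen or q not in skip_map:
            continue
        seen.add(q)
        stack.extend(t for t in skip_map[q].values() if t is not None)
    unreachable = set(skip_map) - seen
    return missing, unreachable

# Example: Q2 is skipped when Q1 is answered "no" (code 0).
skips = {"Q1": {0: "Q3", 1: "Q2"}, "Q2": {0: "Q3", 1: "Q3"}, "Q3": {0: None, 1: None}}
print(validate_skip_map(skips))  # -> (set(), set()) when all branches resolve
```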

Ensure that it is safe and legal to operate GPS-equipped tablets. In countries that restrict survey activities, it may be impossible to georeference interview locations.

Sampling

When sampling is done using GPS maps, tablets reduce sampling error by ensuring the household is in the sampled area. In Malawi, electronic maps demarcating enumeration areas were obtained from the National Statistical Office and programmed into tablets.

While tablets reduce human error in implementing the Kish table, they do not prevent interviewer manipulation of sampling procedures. A dishonest interviewer may exit the survey and rerun the Kish table until it selects a person who happens to be available. Or, the interviewer may report a refusal rather than making a return appointment, terminating the interview and moving on to an easier household.

Recommendations

STG automatically captures the GPS location, which can be viewed on maps to ensure that the selected household falls within the enumeration area. However, tablets failed to capture the GPS location for 30% of Tunisian dwellings because buildings blocked the GPS signal or the software failed. In Malawi, downloading an additional application, “MapMe,” allowed enumerators to locate themselves on a map; they exited the survey, copied their coordinates, and returned to the cover page to paste them into a question created for this purpose.
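
For teams that export boundary files and coordinates, the core check (does a GPS fix fall inside the sampled enumeration area?) reduces to a point-in-polygon test. A minimal sketch, assuming the area boundary has already been parsed from its shapefile into a list of (lon, lat) vertices:

```python
def inside_enumeration_area(lon, lat, polygon):
    """Ray-casting point-in-polygon test: does a captured GPS fix fall
    within the sampled enumeration area? `polygon` is a list of
    (lon, lat) vertices; real boundary files (e.g., from a national
    statistical office) would first be parsed into such lists."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edges a horizontal ray from the point crosses.
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Example: a unit-square "enumeration area".
area = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(inside_enumeration_area(0.5, 0.5, area))  # True
print(inside_enumeration_area(2.0, 0.5, area))  # False
```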

Checking the GPS coordinates of interviews is important. Instances of the wrong person being interviewed can be detected by checking the GPS location and by comparing recorded gender with gender-specific indicators. Tablets allow very quick data access while teams are still in the field, enabling researchers to watch for oddities in variable distributions. Alerting interviewers that their work is being monitored encourages good work; identifying one or two dishonest interviewers early can greatly reduce measurement and sampling error. Include question time-stamps and record interview duration to improve monitoring.
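
A minimal sketch of the gender-consistency check described above, assuming uploaded records have been exported as dictionaries; field names such as q_pregnancy are hypothetical, not actual STG export columns:

```python
def flag_gender_mismatches(records):
    """Detect likely wrong-respondent interviews by comparing the
    respondent's recorded gender against answers to gender-specific
    items. Field names are illustrative stand-ins."""
    female_only_items = ["q_pregnancy", "q_maternal_care"]
    flags = []
    for r in records:
        if r["gender"] == "male":
            answered = [q for q in female_only_items if r.get(q) is not None]
            if answered:
                flags.append((r["interview_id"], answered))
    return flags

# Example record that should be flagged for supervisor review.
records = [{"interview_id": 101, "gender": "male", "q_pregnancy": 1}]
print(flag_gender_mismatches(records))  # [(101, ['q_pregnancy'])]
```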

Recruiting and Training

CAPI implementation requires adjusting field materials and training, including additional time to learn tablet use. The questionnaire must be downloaded onto tablets before training to allow interviewers to practice administering the survey. Because of the tablets’ cost, extra attention must be paid to concerns that enumerators could be targets for theft. As with all surveys, interviewers should never be sent to insecure areas.

Recommendations

Write supervisor and interviewer instructions on complicated aspects of the survey process for reference in the field. For example, provide general guides on tablet charging and questionnaire downloading in the field. Do not over-disclose monitoring methods. While interviewers should know the correct procedures, they should not know the precise details of data forensics. As with any survey, hold a separate supervisor training to go over supervision and monitoring practices.

Mask tablets with covers while enumerators are on the street and instruct supervisors to keep tablets overnight. Remind interviewers that they are responsible for the tablets, and ask them not to leave tablets in view in cars.

Conducting Interviews, Quality Control, and Data Forensics

In Tunisia, Malawi, and Jordan, four interviewers were assigned to each supervisor to fit into one car. Supervisors charged tablets daily, as loss of battery power was a major problem in the field. Each evening, supervisors ensured team members connected their tablets to the Internet and uploaded data. Supervisors checked surveys for accuracy and completeness and communicated with interviewers. The researchers remotely monitored the data, which required that specific checks be prepared in advance and that data be downloaded regularly (Table 1; two of the daily checks are sketched in code after the table).

Table 1  Monitoring checklist.
Before fielding
 1) Be sure to conduct the correct survey
 2) Ensure that GPS location is logging at the start of the survey
 3) Verify that skip patterns work and no questions are missing
Daily checks of interviewers’ work
 4) Check for inconsistent distributions across enumerators on some questions
 5) Check enumerators’ refusal rates and that they are returning to households where no one answered or the selected person is unavailable
 6) Compare average interview length and question duration by interviewer to pinpoint interviewers who read too quickly
Daily checks of variable distributions, sampling and randomization
 7) Review response distributions and question duration for all variables
 8) Check age, gender, ethnicity, and marital status distributions against national statistics
 9) Use GPS locations to check that interviews are conducted inside sampled units
 10) Verify experiment randomization
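
Checks 5 and 6 lend themselves to a small daily script once data are downloaded. A sketch with illustrative record fields rather than actual STG export columns, assuming several interviewers with at least one completed interview each:

```python
from collections import defaultdict
from statistics import mean, stdev

def daily_interviewer_report(interviews):
    """Sketch of checklist items 5 and 6: per-interviewer refusal
    rates and mean completed-interview duration, flagging anyone
    whose mean falls well below the team's."""
    by_interviewer = defaultdict(list)
    for iv in interviews:
        by_interviewer[iv["interviewer_id"]].append(iv)
    mean_duration = {}
    for k, v in by_interviewer.items():
        completed = [iv["duration_min"] for iv in v if iv["outcome"] == "complete"]
        mean_duration[k] = mean(completed) if completed else float("nan")
    team_mean = mean(mean_duration.values())
    team_sd = stdev(mean_duration.values())
    report = {}
    for k, v in by_interviewer.items():
        refusals = sum(iv["outcome"] == "refusal" for iv in v)
        report[k] = {
            "n": len(v),
            "refusal_rate": round(refusals / len(v), 2),
            "mean_duration": mean_duration[k],
            # Reading too quickly shows up as an unusually short mean.
            "too_fast": mean_duration[k] < team_mean - 2 * team_sd,
        }
    return report
```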

Recommendations

Charge multiple tablets using universal electrical outlet strips and set passcodes to prevent enumerators from downloading extraneous programs that drain batteries. The survey should be programmed to stop the timer when interviews are suspended and to note when cases are closed and reopened. Set STG to require supervisors to look through a random batch of surveys before uploading; to facilitate this, STG displays different colored dots next to complete and incomplete surveys.
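
A toy illustration of the intended timer behavior (exclude suspended time, log close/reopen events), sketched in Python rather than STG’s own settings:

```python
import time

class InterviewTimer:
    """Duration capture that excludes suspended time and logs
    close/reopen events -- the behavior described above, sketched
    without assuming anything about STG's internals."""
    def __init__(self):
        self.active_seconds = 0.0
        self.started_at = None
        self.events = []

    def resume(self):
        self.started_at = time.monotonic()
        self.events.append(("reopened", time.time()))

    def suspend(self):
        if self.started_at is not None:
            self.active_seconds += time.monotonic() - self.started_at
            self.started_at = None
        self.events.append(("closed", time.time()))
```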

Ethical Considerations and Conclusions

CAPI also requires attention to ethical considerations. Tablets record interviews’ geolocation and can take photographs for coding neighborhoods’ socioeconomic characteristics, both of which raise confidentiality concerns. Through their sound-recording capability, tablets are useful for monitoring, training, and questionnaire development (Benstead 2017). However, recording may limit respondents’ willingness to speak freely, and consent must be obtained.

Best practices also include ensuring respondent rights are covered in introductory scripts and specifying that government agencies cannot access data until identifying information for respondents and neighborhoods has been removed. Tablets offer unprecedented opportunities for survey experiments and seamless administration of showcards and audio or video prompts. With careful implementation, CAPI can leverage these benefits while minimizing errors and improving data quality.

Acknowledgements

We gratefully acknowledge the support of the Moulay Hicham Foundation, Yale University, the World Bank, and the Swedish National Research Council. We thank MAZAM interviewers, who implemented the survey in Tunisia, and Professor Boniface Dulani and the Institute for Public Opinion and Research team, who did so in Malawi. Thanks to Petter Holmgren and Wendy Wei for research assistance. Any remaining errors are the authors’ own.


  1. The LGPI was designed by Lindsay Benstead, Pierre Landry, Ellen Lust, and Dhafer Malouche based on the Public Administration Performance Index (PAPI) conducted in Vietnam by a team that included Pierre Landry. Representative at the municipal level, the LGPI benchmarks local service delivery in education, health, security, welfare, citizen-state linkages, and corruption. The LGPI was implemented in Tunisia and Malawi by the Program on Governance and Local Development (GLD) at the University of Gothenburg and Yale University (http://gld.gu.se/). Kristen Kao and Adam Harris played an integral role in the implementation of the Malawi LGPI. The Jordanian Post-Election Survey was implemented by the Program on Governance and Local Development (GLD) at the University of Gothenburg and Yale University and conducted by Lindsay Benstead, Kristen Kao, and Ellen Lust.

References

Benstead, L.J. 2017. “Survey Research in the Arab World.” In The Oxford Handbook of Polling and Survey Methods, edited by L.R. Atkeson and R.M. Alvarez. New York: Oxford University Press.
Bethlehem, J. 2009. Applied Survey Methods: A Statistical Perspective. Hoboken, NJ: John Wiley & Sons.
Caeyers, Bet, Neil Chalmers, and Joachim De Weerdt. 2012. “Improving Consumption Measurement and Other Survey Data through CAPI: Evidence from a Randomized Experiment.” Journal of Development Economics 98 (1): 19–33. https://doi.org/10.1016/j.jdeveco.2011.12.001.
Caviglia-Harris, Jill, Simon Hall, Katrina Mullan, Charlie Macintyre, Simone Carolina Bauch, Daniel Harris, Erin Sills, Dar Roberts, Michael Toomey, and Hoon Cha. 2011. “Improving Household Surveys Through Computer-Assisted Data Collection.” Field Methods 24 (1): 74–94. https://doi.org/10.1177/1525822x11399704.
Couper, Mick P., and Geraldine Burt. 1994. “Interviewer Attitudes Toward Computer-Assisted Personal Interviewing (CAPI).” Social Science Computer Review 12 (1): 38–54. https://doi.org/10.1177/089443939401200103.
Groves, R.M., F.J. Fowler, M.P. Couper, J.M. Lepkowski, E. Singer, and R. Tourangeau. 2009. Survey Methodology. 2nd ed. Hoboken, NJ: John Wiley & Sons.
Kish, L. 1965. Survey Sampling. New York: John Wiley & Sons.
Seligson, M., and D.E. Moreno Morales. 2016. “Improving the Quality of Survey Data Using CAPI Systems in Developing Countries.” In The Oxford Handbook of Polling and Survey Methods, edited by L.R. Atkeson and R.M. Alvarez. New York: Oxford University Press.
SurveyToGo. 2016. http://www.dooblo.net/stgi/surveytogo.aspx.
Watson, Nicole, and Roger Wilkins. 2015. “Design Matters.” Field Methods 27 (3): 244–64. https://doi.org/10.1177/1525822x15584538.
