Using Tablet Computers to Implement Surveys in Challenging Environments

Lindsay J. Benstead Associate Professor of Political Science, Portland State University
Kuwait Visiting Professor, Sciences Po, Paris; Affiliated Scholar
Program on Governance and Local Development, University of Gothenburg
Contributing Scholar, Program on Women’s Rights in the Middle East, Baker School of Public Policy, Rice University

Kristen Kao Post-Doctoral Research Fellow, Program on Governance and Local Development, University of Gothenburg

Pierre F. Landry Professor of Political Science, Director of Global China Studies, New York University
Shanghai and Research Fellow, Research Center for Contemporary China, Peking University

Ellen M. Lust Professor of Political Science, University of Gothenburg
Director, Program on Governance and Local Development, University of Gothenburg and Yale University
Non-Resident Senior Fellow, Project on Middle East Democracy, Washington, DC

Dhafer Malouche Associate Professor of Statistics, Ecole Supérieure de la Statistique et de l’Analyse de l’Information, University of Carthage, Tunisia

Abstract

Computer-assisted personal interviewing (CAPI) has increasingly been used in developing countries, but literature and training on best practices have not kept pace. Drawing on our experiences using CAPI to implement the Local Governance Performance Index (LGPI) in Tunisia and Malawi and an election study in Jordan, this paper makes practical recommendations for mitigating challenges and leveraging CAPI’s benefits to obtain high-quality data. CAPI offers several advantages. Tablets facilitate complex skip patterns and randomization of long question batteries and survey experiments, which helps to reduce measurement error. Tablets’ global positioning system (GPS) technology reduces sampling error by locating sampling units and facilitating analysis of neighborhood effects. Immediate data uploading, time-stamps for individual questions, and interview duration capture allow real-time data quality checks and interviewer monitoring. Yet, CAPI entails challenges, including the costs of learning new software, programming the questionnaire, and piloting to resolve coding bugs, as well as ethical and logistical considerations, such as electricity and Internet connectivity.

Introduction

Computer-assisted personal interviewing (CAPI) using smartphones, laptops, and tablets has long been standard in western countries. Since the 1990s, CAPI has also been used in developing countries (Bethlehem 2009, 156–160), and its use there continues to grow.

Unlike standard paper and pencil interviewing, where the interviewer records responses on paper and manually codes them into a computer, CAPI allows the interviewer to record answers directly onto a digital device. When used skillfully, CAPI reduces coding errors, particularly those arising from skip patterns, and decreases the time needed to produce results. Yet, challenges must be navigated to leverage CAPI’s benefits and avoid mistakes.

The Program on Governance and Local Development (GLD) used tablets to administer the Local Governance Performance Index (LGPI) in Tunisia and Malawi and an electoral study in Jordan – three challenging contexts with poor infrastructure.1 CAPI offered advantages vis-à-vis earlier surveys conducted by GLD team members in Morocco, Algeria, Tunisia, Libya, Egypt, and China. CAPI facilitated a long, complicated questionnaire and random assignment of modules, greatly reducing implementation errors. Randomized response options eliminated primacy and recency effects, while CAPI enforced rules, such as forcing a response entry before continuing, which decreased item nonresponse. The ability to program the Kish table to automatically select a participant greatly eased respondent selection and possibly reduced sampling error (Kish 1965). Daily data uploading permitted quick identification of implementation, sampling, randomization, and interviewer compliance problems. It also eliminated data entry, which is costly and error-prone. However, to take advantage of CAPI’s benefits, researchers must recognize a number of pitfalls and implement practices to avoid them.
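The Kish-table selection mentioned above can be sketched in code. The following Python function is an illustrative simplification, not the STG implementation used in the field; the selection tables shown are hypothetical stand-ins for the published Kish (1965) tables.

```python
import random

# Simplified selection tables: for each table letter, the number of
# eligible adults in the household maps to the rank of the adult to
# interview (adults listed, e.g., oldest to youngest). These values
# are illustrative, not the published Kish tables.
KISH_TABLES = {
    "A":  {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1},
    "B1": {1: 1, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2},
    "C1": {1: 1, 2: 1, 3: 2, 4: 2, 5: 3, 6: 3},
    "D1": {1: 1, 2: 2, 3: 2, 4: 3, 5: 4, 6: 4},
    "E1": {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 5},
    "F":  {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6},
}

def kish_select(n_adults, rng=random):
    """Draw a Kish table at random and return the rank (1..n_adults)
    of the household member to interview."""
    table = KISH_TABLES[rng.choice(sorted(KISH_TABLES))]
    # Households with more than six eligible adults are capped at six.
    return table[min(n_adults, 6)]
```

Programming this step into the tablet removes the interviewer's discretion from within-household selection, which is the source of the error reduction noted above.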

Literature Review

Existing literature offers mixed evidence regarding CAPI’s impact on survey estimates and data quality. Caeyers et al. (2012) find, for instance, that CAPI eliminates mistakes arising from skip patterns and affects estimates of key variables. While respondents and interviewers favor CAPI (Couper and Burt 1994), respondent participation and attrition are not affected by it. At the same time, CAPI’s impact on missing data is unclear and context dependent. Watson and Wilkins (2015) find that CAPI increases interview length.

Existing literature examining how CAPI affects survey quality has several limitations. First, it is small and should be expanded to take into account how CAPI’s effects on survey quality vary across survey type, hardware/software, or other factors. Second, little research focuses on how CAPI affects data quality or respondent-interviewer experience and interactions in developing countries, which may differ from developed countries in ways that have not yet been fully explored (e.g., Seligson and Moreno Morales 2016). For example, rates of item-missing data for some items in developing countries (Benstead 2017) might be reduced by using CAPI. Third, with few exceptions, textbooks do not offer concrete tips on how to implement CAPI (Caviglia-Harris et al. 2012; Groves et al. 2009).

This article provides guidance on choices and considerations made while implementing surveys in Tunisia, Malawi, and Jordan. It proceeds by detailing the survey process, assessing tablets’ effects on implementation and data quality, and offering recommendations for using CAPI effectively. Finally, it considers ethical issues raised by CAPI.

The Tablets and Software

Using CAPI begins with selecting software based on cost and capabilities, including support for the required languages and scripts. A website such as this one – http://www.capterra.com/survey-software/ – lists software options and can be a helpful place to begin. We selected SurveyToGo (STG 2016) because it has a UTF/Unicode-compliant interface, allowing for multilingual and multiscript questionnaires, including Arabic. STG charges by the number of observations recorded (i.e., responses to any given question).

STG has other advantages as well. It allows branching, skipping, looping, validation, piping, and randomizing questions, answers, and chapters. Its desktop emulator (Figure 1) allows researchers working online to see how the questionnaire will appear, although some features, such as randomization, do not work in it. Since the emulator is hosted online, the team can test the questionnaire from anywhere, but only one person can edit it at any given time.

Figure 1  The survey is designed and monitored using the STG interface.




Once the survey is developed in STG, it is downloaded onto tablets. If changes are needed later (e.g., errors discovered during interviewer training), researchers must download the updated version onto the tablets. This takes time and is difficult if the tablets are already in the field, underscoring the need for quality pretesting and piloting.

Considerations when choosing tablet hardware include cost, screen size, and battery capacity. Based on overall value (quality relative to a cost of $250 per tablet) and the fact that STG runs on the Android platform and does not work with the iPad, we selected the ASUS Memo Pad 8, which has the Android 4.4 operating system, an 8.00-inch, 1280 × 800 display, a 1.86 GHz processor, and 1 GB of RAM (Figure 2). STG works on a smartphone, but researchers should consider the amount of space each question (including its response options) requires to be legible and choose a larger device as needed. Purchase costs can be spread across studies, making the investment in a set of tablets worthwhile.

Figure 2  ASUS Memo Pad 8 and fielding in Malawi. Photos: Kristen Kao.




General Infrastructure

We provided a tablet to each interviewer, with one backup per supervisor. To reduce the risk of interviewers breaking or not returning tablets, we recruited interviewers through trusted networks and included a statement in contracts about returning the tablet in good condition. Despite precautions, some tablets were broken.

Unless tablets are purchased locally, researchers must consider importation procedures. In Jordan and Malawi, we temporarily imported tablets for the duration of the survey without problems. In Tunisia, tablets were not allowed into the country without paying import duties, and the survey had to be delayed. Thus, when possible, purchasing tablets in country is best. Tablet costs can be included in the survey research organization’s quote, or grant applications can include a line item for inexpensive tablets purchased in country and left with the survey organization as part of payment.

Questionnaire Development and Programming

CAPI requires some changes in questionnaire design relative to standard paper-and-pencil surveys. In Tunisia, we finalized the questionnaire on paper and later programmed it into STG. Doing so, however, required reconfiguring several questionnaire sections. For instance, coding proper randomization of batteries in STG requires sections to be ordered very differently from how they appear on paper. We carefully considered module order in light of randomized survey experiments.

Programming mistakes are easily made. Coding randomized experiments is more difficult, especially if an experiment is embedded within another randomized question module, as was the case in Tunisia’s long and Malawi’s multilingual questionnaires. Test your questionnaire in the emulator for each skip pattern to ensure it is working correctly.

The learning curve in implementing CAPI surveys is also steep. Implementers must master the relevant coding language – STG requires knowledge of C# programming language – and the mobile survey software interface. Budget time for learning software capabilities and limitations. For instance, STG saves response code templates, so they can be copied for subsequent questions (e.g., no = 0, yes = 1).

Recommendations

We recommend that research teams program the survey directly in the CAPI interface and regularly download sample data files into Excel for checking. The more of the survey that is programmed directly, the lower the error rate will be.

In STG, question variable names and response categories are automatically ordered numerically. This can lead to confusing variable names and reordering of the answer categories. After programming the survey into STG, manually check all question coding or use STG’s automatic recoding feature. Be strategic about where interviewers can go back in the survey. Interviewers should not be able to change answers at important points of randomization (e.g., the Kish table, experimental group assignment) and can instead be given a section in which to note mistakes.

For complex surveys, think about how earlier randomization procedures affect later ones and map out the process before fielding the survey. Double-check all coding, making sure each branch is set to filter to the proper question.
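Randomization can also be verified on pilot data exported from the tablets before fielding. The following is a minimal sketch, assuming records with a hypothetical `arm` field holding each respondent's experimental assignment; a badly mis-coded branch typically shows up as a lopsided allocation.

```python
from collections import Counter

def check_balance(records, field="arm", tolerance=0.1):
    """Flag experimental arms whose share of cases deviates from equal
    allocation by more than `tolerance` (a sign of a mis-coded branch)."""
    counts = Counter(r[field] for r in records)
    expected = len(records) / len(counts)
    return [arm for arm, n in counts.items()
            if abs(n - expected) / len(records) > tolerance]

# A 48/52 pilot split is within tolerance, so nothing is flagged.
pilot = [{"arm": "control"}] * 48 + [{"arm": "treatment"}] * 52
assert check_balance(pilot) == []
```

A chi-squared test would be more rigorous; this tolerance rule is only meant for a quick daily screen during piloting.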

Ensure that it is safe and legal to operate GPS-equipped tablets. In countries that restrict survey activities, it may be impossible to georeference interview locations.

Sampling

When sampling is done using GPS maps, tablets reduce sampling error by ensuring the household is in the sampled area. In Malawi, electronic maps demarcating enumeration areas were obtained from the National Statistical Office and programmed into tablets.

While tablets reduce human error in implementing the Kish table, they do not prevent interviewer manipulation of sampling procedures. A dishonest interviewer may exit the survey and rerun the Kish table in order to redraw an available person. Or, the interviewer may report a refusal rather than making a return appointment, terminating the interview and moving on to an easier household.

Recommendations

STG automatically captures the GPS location, which can be viewed on maps to ensure that the selected household falls within the enumeration area. However, tablets failed to capture the GPS location for 30% of Tunisian dwellings because buildings blocked the GPS signal or the software failed. In Malawi, downloading an additional application, “MapMe,” allowed enumerators to locate themselves on a map; they could then exit MapMe, return to the survey cover page, and paste the coordinates into a question created for this purpose.
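Verifying that an interview falls within its enumeration area amounts to a point-in-polygon test of the captured coordinates against the area boundary. A minimal ray-casting sketch follows; in practice a GIS library (e.g., shapely) would be used on the boundary files obtained from the statistical office.

```python
def inside(point, polygon):
    """Ray-casting point-in-polygon test.
    `polygon` is a list of (lon, lat) vertices; `point` is (lon, lat)."""
    x, y = point
    n = len(polygon)
    hit = False
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from `point` cross edge (i, i+1)?
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

# A square enumeration area around the origin (illustrative coordinates).
ea = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
assert inside((0.5, 0.5), ea)       # interview inside the EA
assert not inside((2.0, 0.0), ea)   # outside the EA: flag for review
```

Interviews whose coordinates fall outside the boundary, or that lack coordinates entirely (as for the 30% of Tunisian dwellings noted above), should be queued for supervisor follow-up rather than discarded automatically.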

Checking the GPS coordinates of interviews is important. Instances of the wrong person being interviewed can be detected by checking the GPS location and by comparing reported gender with gender-specific indicators. Tablets allow very quick data access while teams are still in the field, enabling researchers to watch for oddities in variable distributions. Alerting interviewers that work is being monitored encourages good work; identifying one or two dishonest interviewers early can greatly reduce measurement and sampling error. Include question time-stamps and record interview duration to improve monitoring.

Recruiting and Training

CAPI implementation requires adjusting field materials and training, including additional time to learn tablet use. The questionnaire must be downloaded onto tablets before training to allow interviewers to practice administering the survey. Because of the tablets’ cost, extra attention must be paid to concerns that enumerators could be targets for theft. As with all surveys, interviewers should never be sent to insecure areas.

Recommendations

Write supervisor and interviewer instructions on complicated aspects of the survey process for reference in the field. For example, provide general guides on tablet charging and questionnaire downloading in the field. Do not over-disclose monitoring methods. While interviewers should know the correct procedures, they should not know the precise details of data forensics. As with any survey, hold a separate supervisor training to go over supervision and monitoring practices.

Mask tablets with covers while enumerators are on the street and instruct supervisors to keep tablets overnight. Remind interviewers that they are responsible for the tablets, and ask them not to leave tablets in view in cars.

Conducting Interviews, Quality Control and Data Forensics

In Tunisia, Malawi, and Jordan, four interviewers were assigned to each supervisor to fit into one car. Supervisors charged tablets daily, as loss of battery power was a major problem in the field. Each evening, supervisors ensured team members connected their tablets to the Internet and uploaded data. Supervisors checked surveys for accuracy and completeness and communicated with interviewers. The researchers remotely monitored the data. This required that specific checks be prepared in advance and that data be downloaded regularly (Table 1).

Table 1 Monitoring checklist.

Before fielding
 1) Be sure to conduct the correct survey
 2) Ensure that GPS location is logging at the start of the survey
 3) Verify that skip patterns work and no questions are missing
Daily checks of interviewers’ work
 4) Check for inconsistent distributions across enumerators on some questions
 5) Check enumerators’ refusal rates and that they are returning to households where no one answered or the selected person is unavailable
 6) Compare average interview length and question duration by interviewer to pinpoint interviewers who read too quickly
Daily checks of variable distributions, sampling and randomization
 7) Review response distributions and question duration for all variables
 8) Check age, gender, ethnicity, and marital status distributions against national statistics
 9) Through GPS locating, check interviews are conducted inside sampled units
 10) Verify experiment randomization
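Several of the daily checks above are simple aggregations over the uploaded data. As one example, check (6) can be sketched as flagging interviewers whose mean interview duration is far below the overall mean; the field names and threshold here are illustrative assumptions, not part of STG.

```python
from statistics import mean

def flag_speeders(interviews, threshold=0.7):
    """Return interviewer IDs whose mean interview duration (minutes)
    is below `threshold` times the overall mean -- candidates for
    closer review, since rushing suggests questions read too quickly."""
    by_id = {}
    for iv in interviews:
        by_id.setdefault(iv["interviewer"], []).append(iv["duration"])
    overall = mean(iv["duration"] for iv in interviews)
    return sorted(i for i, durations in by_id.items()
                  if mean(durations) < threshold * overall)

data = [{"interviewer": "A", "duration": 40},
        {"interviewer": "A", "duration": 45},
        {"interviewer": "B", "duration": 15},
        {"interviewer": "B", "duration": 20}]
assert flag_speeders(data) == ["B"]
```

The same pattern extends to per-question time-stamps, refusal rates by interviewer, and demographic distributions compared against national statistics.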

Recommendations

Charge multiple tablets using universal electrical outlet strips and set passcodes to prevent enumerators from downloading extraneous programs that drain batteries. The survey should be programmed to stop the timer when interviews are suspended and to note when cases are closed and reopened. Set STG to require supervisors to look through a random batch of surveys before uploading; to facilitate this, STG displays different colored dots next to complete and incomplete surveys.

Ethical Considerations and Conclusions

CAPI also requires attention to ethical considerations. Tablets identify interviews’ geolocation and can take photographs for coding neighborhoods’ socioeconomic characteristics, which raise confidentiality concerns. Through their sound-recording capability, tablets are useful for monitoring, training, and questionnaire development (Benstead 2017). However, this may limit respondents’ willingness to speak freely, and consent must be obtained.

Best practices also include ensuring respondent rights are covered in introductory scripts and specifying that government agencies cannot access data until identifying information for respondents or neighborhoods has been removed. Tablets offer unprecedented opportunities for survey experiments and seamless administration of showcards and audio or video prompts. With careful implementation, CAPI can be used to leverage these benefits while minimizing errors and improving data quality.

Acknowledgements

We gratefully acknowledge the support of the Moulay Hicham Foundation, Yale University, the World Bank, and the Swedish National Research Council. We thank MAZAM interviewers, who implemented the survey in Tunisia, and Professor Boniface Dulani and the Institute for Public Opinion and Research team, who did so in Malawi. Thanks to Petter Holmgren and Wendy Wei for research assistance. Any remaining errors are the authors’.

References

Benstead 2017
Benstead, L.J. 2017. Survey research in the Arab world. In: (L.R. Atkeson and R.M. Alvarez, eds.) Oxford University Press handbook on polling and polling methods. Oxford University Press, New York.
Bethlehem 2009
Bethlehem, J. 2009. Applied survey methods: a statistical perspective. John Wiley & Sons, Hoboken, NJ.
Caeyers et al. 2012
Caeyers, B., N. Chalmers and J. De Weerdt. 2012. Improving consumption measurement and other survey data through CAPI: evidence from a randomized experiment. Journal of Development Economics 98(1): 19–33. http://doi.org/10.1016/j.jdeveco.2011.12.001.
Caviglia-Harris et al. 2012
Caviglia-Harris, J., S. Hall, K. Mulllan, C. Macintyre, S.C. Bauch, D. Harris, E. Sills, D. Roberts, M. Toomey and H. Cha. 2012. Improving household surveys through computer-assisted data collection use of touch-screen laptops in challenging environments. Field Methods 24(1): 74–94. http://doi.org/10.1177/1525822X11399704.
Couper and Burt 1994
Couper, M.P. and G. Burt. 1994. Interviewer attitudes toward computer-assisted personal interviewing (CAPI). Social Science Computer Review 12(1): 38–54. http://doi.org/10.1177/089443939401200103.
Groves et al. 2009
Groves, R. M., F.J. Fowler Jr, M.P. Couper, J.M. Lepkowski, E. Singer and R. Tourangeau. 2009. Survey methodology (2nd ed.). John Wiley & Sons, Hoboken, NJ.
Kish 1965
Kish, L. 1965. Survey sampling. John Wiley & Sons, New York.
Seligson and Moreno Morales 2016
Seligson, M. and D.E. Moreno Morales. 2016. Improving the quality of survey data using CAPI systems in developing countries. In: (L.R. Atkeson and R.M. Alvarez, eds.) Oxford University Press handbook on polling and polling methods. Oxford University Press, New York.
SurveyToGo 2016
SurveyToGo. 2016. Computer software. Dooblo Ltd.
Watson and Wilkins 2015
Watson, N. and R. Wilkins. 2015. Design matters: the impact of CAPI on interview length. Field Methods 27(3): 244–264. http://doi.org/10.1177/1525822X15584538.
Footnote
1 The LGPI was designed by Lindsay Benstead, Pierre Landry, Ellen Lust, and Dhafer Malouche based on the Public Administration Performance Index (PAPI) conducted in Vietnam by a team that included Pierre Landry. Representative at the municipal level, the LGPI benchmarks local service delivery in education, health, security, welfare, citizen-state linkages, and corruption. The LGPI was implemented in Tunisia and Malawi by the Program on Governance and Local Development (GLD) at the University of Gothenburg and Yale University (http://gld.gu.se/). Kristen Kao and Adam Harris played an integral role in the implementation of the Malawi LGPI. The Jordanian Post-Election Survey was implemented by the Program on Governance and Local Development (GLD) at the University of Gothenburg and Yale University and conducted by Lindsay Benstead, Kristen Kao, and Ellen Lust.

