Introduction
About 20 years ago, scholars foresaw a rapid shift from traditional paper-and-pencil questionnaires to computer-assisted surveys conducted on mobile phones, Personal Digital Assistants (PDAs), computers, or tablets. This shift was argued to be inevitable given the substantial advantages computer-assisted surveys hold over paper-based questionnaires, including fewer data collection and entry errors, more flexible questionnaire routing, and the ability to include a range of consistency checks (Byass et al. 2008; Fletcher et al. 2003).
While computer-assisted surveys have become the norm in developed settings, paper-and-pencil questionnaires remain common in developing countries (Caeyers, Chalmers, and Weerdt 2012). The shift to computer-assisted surveys in developing countries has been much slower because of climatological and logistical challenges, as well as respondents’ potential lack of exposure to technology (Hewett, Erulkar, and Mensch 2004). In the field of health-related surveys, however, computer-assisted personal interviewing (CAPI) has recently gained considerable ground in developing countries (e.g., Engelbrecht et al. 2016; Simbayi et al. 2007). So far, relatively few studies have examined the impact of this switch in these settings, and most have focused on CAPI surveys; only a few have analyzed the impact of computer-assisted self-interviewing (CASI) (for exceptions, see Hewett, Erulkar, and Mensch 2004; Jaspan et al. 2007; Van de Wijgert et al. 2000).
The current paper aims to contribute to the literature on CASI surveying in developing countries by comparing a paper-and-pencil self-administered (PAP-SA) survey on teachers’ perceptions of, and attitudes toward, dealing with their country’s violent past in the classroom with a tablet-based version of the same survey. The PAP-SA survey was conducted among secondary school teachers in Côte d’Ivoire; the CASI survey among secondary school teachers in Kenya. A distinctive feature of both surveys is that they were administered in a group setting: teachers gathered in the staff room to respond (individually) to the survey. Group-administered surveys are a useful strategy for collecting data within institutional contexts, such as schools, because they can achieve high response rates while generally requiring only a small number of survey administrators and supervisors (Dörnyei and Taguchi 2009).
In the next section, we briefly review the advantages of computer-assisted surveys. After providing background information on both our surveys, the paper continues by comparing the paper-based to the tablet-based survey. We then conclude.
Advantages of Computer-Assisted Surveys
The advantages of computer-assisted surveys appear to be the same in developing and developed countries and include higher item response rates and more complete datasets overall, better adherence to sampling protocols, automatic recording of location and time, and the reduction or even elimination of data recording and entry errors (e.g., Caviglia-Harris et al. 2012; Gwaltney, Shields, and Shiffman 2008; Marcano Belisario et al. 2015; Zhang et al. 2012). Moreover, survey software can detect out-of-range and ambiguous responses and makes it possible to include complex skip patterns as well as response requests to avoid missing data (Caeyers, Chalmers, and Weerdt 2012). The immediate uploading of data also improves data security and allows for better monitoring and supervision of interviewers (Leisher 2014). Other advantages include increased reporting on sensitive topics (e.g., Gnambs and Kaspar 2015; Hewett, Erulkar, and Mensch 2004; Jaspan et al. 2007). Although acquiring the necessary electronic devices for a CAPI survey is expensive, these expenses tend to be recovered through the savings usually made on data entry and cleaning (King et al. 2013; Zhang et al. 2012). Similarly, a CASI survey tends to be more cost-effective than a PAP-SA survey when the electronic devices are used for multiple surveys or for large-scale surveys (Brown, Vanable, and Eriksen 2008). Research on the impact of computer-assisted surveys on interview duration shows a mixed picture: while a small number of studies have reported significantly shorter interviews (e.g., Caeyers, Chalmers, and Weerdt 2012; Leisher 2014), other studies report no significant change in interview duration (King et al. 2013; Zhang et al. 2012). The effect of CASI surveying on interview duration has so far not been systematically analyzed.
Research has shown that survey respondents in developing countries tend to react favorably to computer-assisted surveys, including CASI surveying. Not only do respondents in developing countries consider a CASI survey more user-friendly, they also prefer it to a PAP-SA survey (Hewett, Erulkar, and Mensch 2004; Jaspan et al. 2007; Van de Wijgert et al. 2000). Unsurprisingly, the correct use of electronic devices is strongly correlated with level of education: less educated respondents make more invalid entries than highly educated respondents (Hewett, Erulkar, and Mensch 2004; Van de Wijgert et al. 2000). Moreover, a CASI survey may at times draw suspicion from people unfamiliar with this type of data collection (Cheng et al. 2011). In rural Kenya, for example, survey researchers using laptops were accused of espionage (Mensch, Hewett, and Erulkar 2003).
The Current Study
The current paper is based on two group-administered surveys, which the authors conducted among secondary school teachers in Kenya and Côte d’Ivoire. The objective of the surveys was to determine teachers’ perceptions of their countries’ violent past, as well as their attitudes and practices toward dealing with that past in their classrooms. The Ivorian survey was paper-based and was conducted in Abidjan from February to April 2015. The Kenyan survey took place in Nairobi from May to June 2016 and was conducted on tablets (Samsung S5), using the well-known survey application Qualtrics (Qualtrics LLC, Provo, UT), which allowed for the offline collection of data. The surveys were largely similar, but where necessary the questionnaires were amended to reflect country-specific circumstances.
The surveys clustered teachers within schools. The sampling frame was based on a list of official secondary schools in Abidjan (429) and Nairobi (258). After stratifying by municipality, we sorted the list by population size of the municipality and by the number of pupils and teachers per school. From this frame, we systematically selected 80 schools in Abidjan and 64 in Nairobi. The research team visited each selected school prior to the survey to request permission to survey the school’s teachers; permission was obtained from all selected schools. A second visit to each school was then scheduled to conduct the actual survey. At this stage, three schools in Abidjan dropped out. Within each school, all teachers were invited to participate (2,412 in Abidjan; 1,344 in Nairobi), but only the teachers who were present on the agreed day actually participated. If fewer than one third (Abidjan) or fewer than half (Nairobi) of the teachers were present, we organized a second survey day. In total, 984 Ivorian and 925 Kenyan teachers participated in our surveys (response rates of 0.40 and 0.69, respectively).[1] We agreed that teachers would gather in the staff room during their break, where each of them was given a paper questionnaire in the case of Côte d’Ivoire and a tablet in the case of Kenya. Only a handful of teachers refused to participate, citing a heavy workload; hence, our cooperation rates were nearly 100% in both cases. One of the researchers was always present to provide assistance if necessary.
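To make the selection procedure and the response-rate arithmetic concrete, the following sketch illustrates systematic sampling from a sorted, stratified frame. It is a minimal Python illustration with hypothetical variable names, not the scripts we actually used.

```python
import random

def systematic_sample(frame, n, seed=42):
    """Select n units from an ordered sampling frame at a fixed interval.

    The frame is assumed to be sorted by stratum (municipality) and,
    within strata, by school size, so that fixed-interval selection
    spreads the sample across strata (implicit stratification).
    """
    interval = len(frame) / n
    start = random.Random(seed).uniform(0, interval)  # random starting point
    return [frame[int(start + i * interval)] for i in range(n)]

# Abidjan: 80 schools out of 429; Nairobi: 64 out of 258 (placeholder frame).
abidjan_frame = [f"school_{i:03d}" for i in range(429)]
abidjan_sample = systematic_sample(abidjan_frame, 80)

# Raw completion ratios; the rates reported in the text follow AAPOR RR1,
# whose denominator may differ slightly from these simple ratios.
print(round(984 / 2412, 2), round(925 / 1344, 2))
```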
The paper-based survey presented several questions per page (see questionnaire in the Appendix). To enhance readability, the questions were printed in bold, and response options were printed on alternating white and gray backgrounds. To respond, teachers had to circle the number corresponding to their response. Writing was limited to one open question and to providing a written response if the “Other” option was selected.
The tablet survey showed one question at a time, and only the important aspects of a question were put in bold (see screenshot in the Appendix). Teachers could select their response by touching the screen or by typing it in via a pop-up keyboard. Both automatic routing and constructive error messages were integrated into the survey design, and the survey automatically recorded response times.
Because it allowed for more complex designs, the tablet survey also included two vignette experiments and one list experiment. Each vignette described a hypothetical situation at school involving pupils and/or teachers (e.g., a fight between pupils or the marking of an essay). The names of the pupils mentioned in the vignettes were intended to signal specific ethnic (4 variations) or religious (2 variations) group identities, and the vignette versions were randomly allocated across teachers in order to assess the extent to which teachers discriminate between pupils from different backgrounds. Our list experiment aimed to determine the extent to which teachers give preferential treatment to pupils from their own ethnic group (see the sketch below). Although the same experiments could in principle have been conducted in our survey in Côte d’Ivoire, preparing and correctly allocating the paper versions would have been much more cumbersome, particularly since the questionnaires were printed and assembled locally.
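To illustrate, the sketch below shows the two design elements the tablet made straightforward: random assignment of a vignette version to each respondent, and the standard difference-in-means estimator for a list experiment, in which the prevalence of the sensitive behavior is estimated as the mean item count in the treatment group minus that in the control group. This is a hedged Python illustration with hypothetical names and toy data, not our actual instrument.

```python
import random
import statistics

# Each teacher sees one of the four ethnic-name variants of a vignette
# (analogously, one of the two religious variants).
ETHNIC_VARIANTS = ["variant_a", "variant_b", "variant_c", "variant_d"]  # placeholders

def assign_vignette(respondent_id: int) -> str:
    rng = random.Random(respondent_id)  # seeded per respondent for reproducibility
    return rng.choice(ETHNIC_VARIANTS)

# List experiment: the treatment list adds one sensitive item (preferential
# treatment of co-ethnic pupils) to the control list; respondents report
# only *how many* items apply to them, never which ones.
def list_experiment_estimate(control_counts, treated_counts):
    """Difference in mean item counts estimates the share endorsing the sensitive item."""
    return statistics.mean(treated_counts) - statistics.mean(control_counts)

# Toy data, purely illustrative:
control = [1, 2, 0, 3, 2, 1]
treated = [2, 2, 1, 3, 3, 2]
print(round(list_experiment_estimate(control, treated), 2))  # 0.67
```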
Comparing a Paper- to a Tablet-Based Self-Interview in a Group Setting
In this section, we compare the two surveys. We do not apply statistical tests, given the differences between the two sample populations and the contextualization of the questionnaires.
Survey Practice
In order to avoid discussions among teachers and to increase participation, teachers had to respond to the survey on the spot. Teachers were more inclined to do so with the tablet survey than with the paper version; in the latter case, many teachers asked whether they could take the questionnaire home and complete it at a later stage. Teachers also appeared to enjoy the tablet survey considerably more. Illustratively, one teacher spontaneously remarked during a follow-up interview: “The survey was good, it felt like my colleagues were happy. They were saying that it’s actually something that they have never seen [i.e., tablet-based surveys]” (female teacher, 33 years).[2] This enthusiasm may also partly explain why more teachers in Kenya (795) than in Côte d’Ivoire (566) were willing to participate in our follow-up research. While teachers who were not familiar with tablets initially required some time to get used to the devices, they caught up easily after some help from the research team. Because the data were uploaded to the cloud daily, data security was also much better in the Kenyan case.
We rented 30 tablets from a local research institute at a rate of 300 KES ($2.95) a day.[3] The tablets were rented for 5 weeks, yielding a total cost of $3,657. The license to use Qualtrics was university-subsidized. The costs in Côte d’Ivoire were much lower: $257 was spent on printing the questionnaires, and shipping the completed questionnaires back to our home university in Belgium cost $254. While renting the tablets was relatively expensive, their use significantly reduced the time and effort spent on data entry and cleaning. Data entry and checking for the Ivorian survey took a total of about 8 weeks (divided among three people to save time); in Kenya, the data were uploaded daily, and cleaning took only two days. The tablet survey did, however, require one week to program and test the questionnaire and to prepare the tablets.[4]
Data Quality
Item nonresponse was low in the tablet-based survey because of the integrated automatic routing and constructive error messages. Most questions even included a forced-response notification (a neutral response option was provided).[5] Item nonresponse was highest (79 respondents; 8.5%) on the open-ended question; by comparison, 177 Ivorian teachers (18%) did not respond to that question. Overall, the Ivorian data were less complete: only 18.4% of teachers responded to all items,[6] most teachers (58.9%) had no more than 5 missing values, and 70.8% had no more than 10. Item nonresponse was largely due to wrong entries, such as responding on the wrong line or circling several responses instead of one; some teachers did, however, skip entire pages.
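Counts of the kind reported above are straightforward to compute once the data are in digital form. The following minimal Python sketch (with a hypothetical toy data layout, not our actual cleaning script) shows the idea.

```python
# Each row holds one respondent's answers; None marks a missing item.
# Toy data with 4 items per respondent, purely illustrative.
responses = [
    [1, 3, None, 2],
    [2, 2, 1, 4],
    [None, None, 3, 1],
]

missing_per_respondent = [row.count(None) for row in responses]

n = len(responses)
complete = sum(1 for m in missing_per_respondent if m == 0)
at_most_5 = sum(1 for m in missing_per_respondent if m <= 5)

print(f"complete cases: {complete / n:.1%}")   # share with no missing values
print(f"<= 5 missing:   {at_most_5 / n:.1%}")  # share with at most 5 missing
```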
Another major advantage of the tablet survey was the improved readability of the answers to the open questions. This is especially relevant for follow-up research: 566 Ivorian teachers were willing to participate in a follow-up study and hence provided their e-mail addresses; unfortunately, 120 of these were unusable because of poor handwriting.
Interestingly, CASI also prevents respondents from going back and changing earlier responses. This is particularly useful when the survey includes questions or experiments that are susceptible to social desirability bias.
Research Opportunity
The direct uploading of data collected via CASI provides immediate access to the data, which in turn allows the researcher to conduct exploratory analyses on the spot. This is particularly valuable for multi- or mixed-method fieldwork: we used such preliminary results, for instance, in our follow-up interviews to gain a better understanding of the observed response patterns.
Discussion and Conclusion
While computer-assisted surveys are standard practice in developed contexts, they have only recently been gaining ground in developing countries. This delayed change is largely due to researchers’ reluctance to use technological devices given the particular challenges of these settings. Yet, as in developed countries, computer-assisted surveys also have important advantages over paper-based surveys in developing countries. The current paper contributes to this literature by comparing a paper-based to a tablet-based survey, conducted in Abidjan, Côte d’Ivoire, and Nairobi, Kenya, respectively. The paper differentiates itself from earlier studies through the way the data were collected (i.e., instead of administering the survey individually, we conducted it in a group setting) and through the topic under investigation (i.e., instead of a health-related topic, we offer an example from the social sciences).
Although the findings of this paper are limited by its nonexperimental nature and by potential differences between the two teacher populations, the results are promising and in line with previous studies. The design of the tablet survey practically eliminated item nonresponse, while the typing of responses to open questions eliminated issues of poor handwriting; this is particularly important when collecting e-mail addresses for follow-up communication and research. Further, the automatic routing prevented teachers from altering earlier responses, which matters when a survey includes questions likely to be subject to social desirability effects. The tablet survey also proved more practical: it significantly reduced data entry and cleaning efforts, and teachers appeared more enthusiastic about participating. Moreover, teachers who were not familiar with tablets mastered them very quickly. Finally, data security was improved by the daily uploading of data to the cloud, and the immediate availability of the collected data allowed for preliminary analyses, which in turn could inform follow-up interviews.
While there are certain impracticalities associated with using tablets, including the charging of batteries or the risk of theft, the advantages clearly seem to outweigh these relatively minor inconveniences. Although this study was concerned with a group of relatively highly educated respondents, exposure to technology is rapidly increasing in the developing world, which means that the prospects for digital self-administered surveys look promising.
Acknowledgment
The authors would like to thank the Belgian Development Cooperation, and in particular VLIR-UOS, for their support for this research (grant reference 2014-001-147). VLIR-UOS supports partnerships between universities and university colleges in Flanders (Belgium) and the South looking for innovative responses to global and local challenges.
Appendix
Extract from the Ivorian questionnaire.
Extract from the Kenyan questionnaire.
Notes
1. Response rate 1 of the American Association for Public Opinion Research guidelines.
2. The follow-up interviews (18) focused on content rather than on survey methodology.
3. Historic currency conversion.
4. Because of the substantial differences in price levels between Abidjan and Nairobi, we do not compare the overall expenses of the two surveys.
5. Requesting a response is, however, a better way to prevent teachers from proceeding to the next question too quickly by mistakenly pushing the ‘next’ button.
6. Excluding the specification of the response option ‘Other’, as well as the open-ended question.