Although survey research has witnessed a surge in the use of self-administered web and mail surveys over the past decades, interviewer-administered surveys continue to be an important part of the data collection landscape, particularly when high response rates are critical, samples are complex, and measurement is demanding. Thus, there is a continued need for research concerning the interviewer’s role in both measurement and recruitment. This special issue of Survey Practice features several articles that highlight current issues and recent findings about interviewer-respondent interaction coding (often referred to as “behavior coding”) and that yield immediate, practical recommendations for writing and evaluating survey questions, training interviewers, and recruiting sample members.
A primary motivation for examining interviewer-respondent interaction is provided by a model of the relationship among the characteristics of survey questions, the behavior and cognitive processing of interviewers and respondents, and the validity and reliability of survey responses (Schaeffer and Dykema 2011b). This model is based on a variety of sources, including evidence of interviewer variance, which prompts the practice of training interviewers to be standardized in their behavior (see review in Schaeffer, Dykema, and Maynard 2010), and evidence that non-standardized behaviors by interviewers (such as misreading questions) and problem-indicating behaviors by respondents (such as pausing before answering, expressing uncertainty, and seeking clarification) are associated with cognitive processing or reflect conversational practices that have consequences for data quality (Schaeffer and Dykema 2011a). We study interaction in the survey interview to uncover the problems participants encounter in performing their tasks, how they attempt to surmount those obstacles, and whether, when, and which of their actions affect the data. As the articles in this issue illustrate, advances in technology make it increasingly easy and efficient to record, transcribe, code, and analyze the interaction between interviewers and respondents.
Three articles in the issue demonstrate how an analysis of interviewer-respondent interaction can be used to evaluate survey questions and inform questionnaire design. Pascale examines an enhanced computer audio-recorded interviewing (CARI) system that facilitates recording both telephone and in-person interviews. The system allows research staff to listen to recordings during data collection in order to develop codes for evaluating interviewer-respondent interaction. During the coding phase, coders can view the computerized questionnaire while listening to the administration of the questions. The CARI system was used to test alternative versions of questions for the American Community Survey. Results indicate that decomposing complex questions into simpler concepts promotes more accurate question-reading by interviewers. Pascale’s findings also have implications for training and monitoring interviewers: In-person interviewers, who traditionally have received less regular monitoring and feedback than telephone interviewers, were more likely to depart from standardized interviewing.
The papers by Dykema et al. and Holbrook et al. belong to an emerging body of questionnaire-design research that analyzes question characteristics through nonexperimental or “observational” approaches. In both papers, the researchers identify and code individual item characteristics (e.g., response format, question length, sensitivity) and examine their relationship with interviewer-respondent interactional outcomes, such as question-reading accuracy for interviewers and comprehension problems for respondents, which serve as proxy indicators of data quality. Dykema et al. are particularly interested in the impact of parenthetical phrases – phrases repeated from an earlier question that are enclosed in parentheses to signal to interviewers that they have the option of reading or omitting the phrase. They find that while respondents are less likely to exhibit a problem when parenthetical phrases are read, interviewers are more likely to misread a question when it includes a parenthetical phrase that is read. Findings from Holbrook et al. indicate that interviewers are more likely to read longer and harder-to-read questions inaccurately, while respondents are more likely to display comprehension problems when confronted with harder-to-read questions and specific response formats. Interestingly, interviewer reading errors do not affect comprehension or mapping problems among respondents.
Interviewers are among the most important tools survey researchers have for increasing participation. Interviewers track and locate sample members and persuade them to participate by explaining the purpose of the study, answering questions, and addressing concerns. Much can be learned about how to recruit sample members effectively by studying interviewer-respondent interaction during recruitment. For example, Ongena and Haan evaluate the effectiveness of a “personal” style of recruiting respondents (e.g., persuasive techniques that appeal to liking) versus a “formal” style (e.g., persuasive techniques that appeal to authority or social validation). Contrary to expectations, their initial findings indicate that neither style is more effective than the other. However, by coding and examining the actual behavior of their interviewers, they are able to dig deeper into the interactional substrate. They find that interviewers who try to convert refusals are more effective if they use any appeals than if they use none, and that interviewers are likely to be most effective when they are trained to use several appeals but allowed to be natural and spontaneous in how they administer them.
The results of Ongena and Haan remind us that the skills associated with getting sample members to participate in a survey – including flexibility and responsiveness – are in tension with the skills standardization requires during the interview – including following a prescribed set of rules, such as reading questions exactly as worded. Olson, Kirchner, and Smyth investigate how well interviewers meet these dual role requirements by examining the link between interviewers’ cooperation rates in recruiting sample members to participate in a telephone survey and their behavior during the administration of questions in the interview. Overall, the results indicate few differences between interviewers with low and high cooperation rates in how they observe the rules of standardization, although interviewers with high cooperation rates appear to be less disfluent during question administration. The results of Olson et al.’s analysis have theoretical implications for the mechanisms that may link success in recruiting with adherence to standardization, and they offer practitioners recommendations for hiring and training new interviewers.