Introduction and Background
A good deal of survey research shows that interviewer characteristics can generate meaningful differences in responses. Two elements are common to the interviewer-effects literature. First, studies conclude that characteristics of interviewers (e.g., race, gender) are most likely to affect survey questions related to that characteristic (Hatchett and Schuman 1975; Marsden and Wright 2010; Sudman and Bradburn 1974). For example, the race of the interviewer (ROI) influences racial attitude items (D. Davis 1997a, 1997b; Dawson 2001).
Second, the preponderance of the race-of-interviewer literature excludes from analysis respondents who say they “don’t know” the race of their interviewer (item non-response). This is problematic because extant research finds that upward of 30 percent of White respondents do not answer the question “What do you think is my racial background?” (D. W. Davis and Silver 2003). The absence from the literature of analyses involving up to one-third of White respondents is surprising, given that many scholars have shown that item non-response can bias survey results (Bradburn and Sudman 1988; Gilljam and Granberg 1993; J. A. Krosnick 1991; J. Krosnick et al. 2002).
In this research note, we build on the ROI literature by examining whether “don’t know” responses to the interviewer’s question “What is my race?” affect White respondents’ answers to racial attitude questions about African Americans. Previous work has tended to exclude telephone respondents who answer ROI queries with “don’t know,” and we argue that this exclusion may introduce omitted-variable bias when analyzing racial attitude items. We find that White respondents who “don’t know” the ROI generally express more racially liberal views than respondents who perceive their interviewer to be White; indeed, their responses are statistically indistinguishable from those of respondents who perceive their interviewer to be Black. Additional analysis reveals that ROI non-response among Whites is associated with being interviewed by a Black caller and with refusal to answer questions about income. In summary, our findings suggest that including “don’t know” responses to perceived ROI queries offers a minor but nontrivial improvement to statistical models of racial attitudes.
In the next section we describe our data and methods, then present our results. We conclude with the study’s implications for minimizing type I and type II errors in hypothesis testing on racial attitudes, along with some thoughts on best practices for surveying racial attitude items.
Data and Methods
We use telephone survey data of Washington State registered voters from October 2008, collected with a racially diverse set of interviewers. We completed 872 interviews, including 615 with White, non-Hispanic respondents, who are the focus of this analysis.[1] We use only the sample of White respondents because Whites are 90 percent of our Washington State sample and because, following Berinsky (2004), we assume that Blacks and Whites are subject to different types of social pressures in the survey interview experience. We further subset the data to respondents who had either a White or a Black interviewer and who perceived their interviewer to be White or Black or chose the “Don’t Know/Refused” option. This results in a dataset of 357 interviews.
Consistent with previous findings, 31 percent of our respondents said “don’t know” to the ROI question, and about 6 percent refused to answer. Grouping these two choices together, fully 37 percent of our respondents fail to answer the ROI query. Eight percent of respondents said the interviewer was “Black,” and 56 percent said “White.” Among White respondents interviewed by a White interviewer, 67.3 percent (152/226) correctly indicated that their interviewer was White, less than 1 percent indicated that their interviewer was African American, and 31.8 percent said “don’t know” or refused to answer. Among Whites interviewed by an African American, only 19.8 percent (26/131) correctly indicated that their interviewer was Black; interestingly, 35.1 percent incorrectly indicated that their interviewer was White, and 45 percent said “don’t know” or refused. Put another way, 76.7 percent of White respondents who perceived their interviewer to be White were correct in that perception, whereas 92.8 percent of Whites who said their interviewer was Black were correct. In summary, many respondents express doubt about the race of their interviewer and, at times, misperceive it.
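The perception rates above can be reconstructed from the reported cell counts. A minimal sketch (the “White interviewer, perceived Black” cell is inferred as 2 from the “less than 1 percent” figure; all other counts are taken directly from the text):

```python
# Perception cross-tab for White respondents, reconstructed from the
# counts reported in the text.
counts = {
    ("white_int", "perc_white"): 152,
    ("white_int", "perc_black"): 2,   # inferred: <1 percent of 226
    ("white_int", "dk_refused"): 72,
    ("black_int", "perc_white"): 46,
    ("black_int", "perc_black"): 26,
    ("black_int", "dk_refused"): 59,
}

def share(cell, interviewer):
    """Share of respondents with a given interviewer who gave this answer."""
    total = sum(v for (i, _), v in counts.items() if i == interviewer)
    return counts[(interviewer, cell)] / total

# Accuracy conditional on the perception, not the interviewer:
perc_white_total = counts[("white_int", "perc_white")] + counts[("black_int", "perc_white")]
perc_black_total = counts[("white_int", "perc_black")] + counts[("black_int", "perc_black")]
acc_perc_white = counts[("white_int", "perc_white")] / perc_white_total  # ~0.768
acc_perc_black = counts[("black_int", "perc_black")] / perc_black_total  # ~0.929
```

The two conditional views differ because the cross-tab is asymmetric: “perceived White” answers come from both interviewer groups, while “perceived Black” answers are nearly all correct.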
To unpack this, we conduct two analyses. First, we assess the impact of interviewer effects across three different racial attitude items. Second, we model ROI non-response to help explain our initial findings. The dependent variables measuring racial attitudes are:
- If Blacks would only try harder, they would be just as well off as Whites.
- Blacks are more violent than Whites.
- Most Blacks who are on welfare programs could get a job if they really tried.
Each variable is coded from 0 to 3, where 0 = strongly disagree and 3 = strongly agree. We anticipate significant ROI differences on the racial attitude questions because perceived ROI affects how respondents evaluate questions clearly connected to race. We also include standard control variables: age, education level, household income, gender, party, and ideology (see Appendix). We present three ordered logit regression models for each racial attitude item, along with a logistic regression model examining the predictors of ROI item non-response. In the latter model we also include an indicator for an actual Black interviewer, as we expect the actual race of the interviewer to influence a respondent’s willingness to answer the ROI question.
The Findings
The first model, which we call the traditional model, includes no ROI items. The second model, the traditional ROI model, includes a dummy indicator for ROI perceived White (ROI perceived Black is the comparison category) but drops respondents who do not answer the ROI query. The final model includes dummy indicators for ROI perceived White and ROI perceived Black, treating ROI “don’t know” as the comparison category; interpretation of these coefficients is thus relative to “don’t know” ROI respondents.
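The only difference among the three specifications is how perceived ROI enters the design matrix. A sketch of the coding logic (function and variable names are ours, not from the original analysis):

```python
def roi_dummies(perceived, model):
    """Return the ROI regressors for one respondent under each specification.

    model 1 (traditional): no ROI terms.
    model 2 (traditional ROI): one dummy, perceived Black as reference;
        "don't know" respondents are dropped (returns None).
    model 3 (alternative): two dummies, "don't know" as reference.
    """
    if model == 1:
        return {}
    if model == 2:
        if perceived == "dk":
            return None  # listwise deletion of "don't know" respondents
        return {"perc_white": int(perceived == "white")}
    if model == 3:
        return {"perc_white": int(perceived == "white"),
                "perc_black": int(perceived == "black")}
    raise ValueError("model must be 1, 2, or 3")
```

Because model 2 deletes “don’t know” cases, it is estimated on a smaller sample than models 1 and 3, which matters for the fit comparisons below.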
Table 1 reports ordered logit results for these three models, where the dependent variable is “If Blacks would only try harder, they would be just as well off as Whites.” First, ideology, gender, age, education, low income, and Republican are all statistically significant and in the expected direction. Moving beyond these traditional predictors, two findings are relevant to the ROI discussion. First, the AIC for the ROI “don’t know” model (model 3) is lower than that for the traditional model (model 1), indicating that the former fits slightly better. Second, while we cannot compare AIC scores for models 2 and 3 because they are estimated on different samples (model 2 drops the “don’t know” respondents), model 2 shows no ROI effect: the coefficient for perceived White is statistically insignificant, suggesting no relationship between ROI and the racial attitude. Model 3, by contrast, indicates that perceived White is statistically significant, whereas perceived Black is not. That is, White respondents who perceived their interviewer to be White give more racially conservative responses relative to their “don’t know” counterparts, while “don’t know” and perceived-Black respondents do not differ. While we cannot say for sure, Whites who say “don’t know” may well suspect that the interviewer is Black, which would explain why ROI perceived Black is statistically indistinguishable from ROI “don’t know.”
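Model fit here is compared with the Akaike Information Criterion, AIC = 2k − 2 ln L, where k is the number of parameters and lower values indicate better fit. The penalty term 2k is why adding the two ROI dummies only improves fit when the likelihood gain outweighs the extra parameters. A sketch with purely hypothetical numbers (not the estimates in Table 1):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2*lnL (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical log-likelihoods for a model without and with two ROI dummies:
base = aic(log_likelihood=-420.0, n_params=9)   # 858.0
roi = aic(log_likelihood=-416.5, n_params=11)   # 855.0
# The ROI model "wins" only because its likelihood gain (3.5 log-likelihood
# units) exceeds the penalty for its two extra parameters (2.0 units).
```

Note that AIC comparisons require the models to be fit to the same observations, which is why the “don’t know”-dropping model 2 cannot be scored against models 1 and 3.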
The next racial attitude item is “Blacks are more violent than Whites.” Table 2 reports the results, which mirror the overall findings in Table 1. The AIC value is lower for the alternative ROI model (model 3) than for the traditional model (model 1). Here, however, the perceived White variable is statistically significant in the traditional ROI model as well, indicating that respondents who perceive their interviewer to be White answer more conservatively than those who perceive their interviewer to be Black. Turning to model 3, the perceived White coefficient is once again significant, but perceived Black is not. Again, model 3 demonstrates that “don’t know” respondents do not express racial attitudes at odds with respondents who perceived their interviewers to be Black.
Table 3 shows the final racial attitude we model: “Most Blacks who are on welfare programs could get a job if they really tried.” With respect to model fit, this item proves the exception; the AIC indicates that the perceived ROI indicators do not improve statistical fit. However, unlike the traditional ROI model (model 2), the alternative specification (model 3) indicates a statistically significant relationship between ROI and racial attitudes, essentially matching the findings from our first set of models. In terms of adequately specifying the link between perceived ROI and racial attitudes, the traditional model fails to capture the ROI dynamics revealed by the full specification of model 3, inviting a type II error: under models 1 and 2 we would conclude that no ROI effects exist for this racial attitude.
Finally, Table 4 presents logistic regression results modeling respondents’ propensity to give an ROI item non-response. Two significant predictors emerge. First, respondents with a Black interviewer are much more likely to give a non-response than their counterparts with a White interviewer. This supports the notion that “don’t know” respondents may well have suspected their interviewers were Black but hesitated to say so. Second, there is marginal evidence that respondents who fail to report their annual income are more likely to give an ROI item non-response (p = 0.10). Taken together, these results suggest that ROI item non-response is not random but is driven by characteristics of both the respondent (personal caution, as measured by refusal to report income) and the interviewer (race).
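To make the logistic results concrete: a positive coefficient on the Black-interviewer indicator raises the predicted probability of ROI non-response through the inverse-logit link. A sketch with hypothetical coefficients chosen for illustration only (they are not the Table 4 estimates):

```python
import math

def inv_logit(xb):
    """Inverse-logit link: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-xb))

# Hypothetical coefficients, for illustration only.
INTERCEPT = -0.8       # baseline log-odds of ROI non-response
B_BLACK_INT = 1.0      # having a Black interviewer
B_INC_REFUSED = 0.5    # refusing the income question

def p_nonresponse(black_interviewer, income_refused):
    """Predicted probability of an ROI item non-response (0/1 inputs)."""
    xb = (INTERCEPT
          + B_BLACK_INT * black_interviewer
          + B_INC_REFUSED * income_refused)
    return inv_logit(xb)

low = p_nonresponse(0, 0)   # White interviewer, income reported: ~0.31
high = p_nonresponse(1, 1)  # Black interviewer, income refused: ~0.67
```

Under these illustrative values, the combination of a Black interviewer and income refusal roughly doubles the predicted probability of non-response, the pattern the text describes.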
Discussion
Scholars of public opinion have long known that social desirability, unit and item non-response bias, and interviewer effects can influence survey data. But no one has satisfactorily answered whether “don’t know” responses to race-of-interviewer questions bias answers to racial attitude items. We find that 37 percent of our respondents fail to state the race of their interviewer, indicating widespread hesitancy on this sensitive question. In general, when modeling racial attitudes, including ROI “don’t know” respondents tends to improve model fit, although this may vary with the content of the racial attitude item. Researchers should therefore check for ROI effects, including “don’t know” responses, when they analyze racial attitudes.
Another finding is that under the traditional ROI model, where “don’t know” respondents are dropped, perceived ROI has no statistical relationship to racial attitudes in two of the three items we measured. Dropping “don’t know” respondents may therefore lead to type II statistical errors: ROI becomes statistically related to racial attitude items only when ROI “don’t know” respondents are included. Finally, our model of ROI item non-response shows that such non-response is not random but is driven by characteristics of both the respondent and the interviewer. In the final analysis, it appears that the presence of a Black interviewer may lead already cautious or private respondents to hedge their answers on racial attitude items, thereby enhancing social-desirability bias. This problem is largely corrected by including ROI “don’t know” responses when modeling racial attitudes.
In terms of survey management, researchers studying race and ethnicity should do their best to match the race of the interviewer with the race of the respondent. This is not always possible, but to the extent that it is, fewer ROI effects should emerge. Second, researchers should always include race-of-interviewer questions, but to reduce the uncertainty associated with “don’t know” ROI responses, surveys should include follow-up prompts to the initial ROI query. Finally, analyses of racial attitudes should not simply drop “don’t know” ROI respondents but should include them as a comparison group in the analysis.
Appendix
Variable coding
Dependent variables:
- If Blacks would only try harder, they would be just as well off as Whites.
- Blacks are more violent than Whites.
- Most Blacks who are on welfare programs could get a job if they really tried.
Each variable is coded from 0 to 3, where 0=strongly disagree and 3=strongly agree.
Independent variables:
ROI Perceived Black 0=R did not perceive I to be Black, 1=R perceived I to be Black
ROI Perceived White 0=R did not perceive I to be White, 1=R perceived I to be White
ROI Perceived DK 0=R perceives I to be White or Black, 1=R answers DK (OMITTED CATEGORY)
Sample
The sampling frame is all Washington State registered voters prior to the 2008 general election. A sample of registered voters was drawn from the Washington State voter file via stratified random sampling. The survey was conducted between October 20 and November 4, 2008, with a response rate of 19.3 percent (AAPOR RR4 definition).
Dependent variables question wording
The next statements are about life in America today. As I read each one, please tell me whether you strongly agree, somewhat agree, somewhat disagree, or strongly disagree.
- If Blacks would only try harder, they would be just as well off as Whites.
- Blacks are more violent than Whites.
- Most Blacks who are on welfare programs could get a job if they really tried.
Independent variables question wording
And finally, what is my race?
(DON’T ASK) Gender
What is the highest level of education you completed? Just stop me when I read the correct category – Grades 1–8, some high school, high school graduate, some college or technical school, college graduate, or post-graduate.
Generally speaking, do you think of yourself as a Democrat, a Republican, an independent, or what?
When it comes to politics, do you usually think of yourself as a Liberal, a Conservative, a Moderate, or haven’t you thought much about this?
In what year were you born?
What was your total combined household income in 2007, before taxes? This question is completely confidential and is used only to help classify the responses. Just stop me when I read the correct category: Less than $20,000; $20,000 to less than $40,000; $40,000 to less than $60,000; $60,000 to less than $80,000; $80,000 to less than $100,000; $100,000 to less than $150,000; More than $150,000.
[1] We also had an oversample of 197 African-American respondents; however, the cell sizes became too small for in-depth analysis.