Survey Practice
Vol. 16, Issue 1, 2023 · November 09, 2023 EDT

Using interpretive question evaluation methods to assess race and ethnicity items

Zachary Smith, Valerie Ryan, and Stephanie Willson
Keywords: Race, Ethnicity, US Census, Self-identification
https://doi.org/10.29115/SP-2023-0026
Smith, Zachary, Valerie Ryan, and Stephanie Willson. 2023. “Using Interpretive Question Evaluation Methods to Assess Race and Ethnicity Items.” Survey Practice 16 (1). https://doi.org/10.29115/SP-2023-0026.

Abstract

Recently, the Office of the Chief Statistician announced recommended revisions to the Office of Management and Budget’s (OMB) Statistical Policy Directive No. 15: Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity (SPD 15). In particular, the revisions recommend combining race and ethnicity into a single, select-all-that-apply question, including “Middle Eastern and North African” (MENA) as a new minimum category within the item, and collecting detailed race and ethnicity data in the same item (Office of Management and Budget 2023). This short piece addresses how existing interpretive approaches to question evaluation (Miller et al. 2014) assist in assessing construct validity in the context of the OMB recommended revisions. In so doing, these question evaluation methods can help answer how race and ethnicity, as constructs, can be effectively measured. While these changes present challenges, a focus on the meaning of race and ethnicity in respondents’ lives provides needed context to survey statistics, enhances survey question design, and improves the equity of data collected.

Two types of question evaluation have predominated in the federal statistical system. Each places emphasis on different aspects of the question-response process, and, when used together, they serve as complementary perspectives. The first, a traditional approach to questionnaire design, emerged from the 1980s Cognitive Aspects of Survey Methodology (CASM) movement and uses the methodological toolkit of cognitive psychology. It attempts to identify “problems” with survey items that could impact the accuracy and reliability of data collection (Willis 2015). This approach, which situates the question-response process (Tourangeau 1984) solely within the domain of individual cognitive processes, has the advantage of aligning neatly with survey methodology writ large. The second, an interpretive approach drawing on cognitive sociology, seeks to understand the “ways in which respondents interpret questions and apply those questions to their own lives, experiences, and perceptions” (Miller 2014). In doing so, it views respondents’ cognitive processing of survey questions as fundamentally informed by and inseparable from their lived experiences (Gerber and Wellens 1997; Miller 2003). From this perspective, improving question performance is impossible without considering respondents’ social location.

Particularly relevant to the proposed revisions to the race/ethnicity question are these approaches’ divergent perspectives on question intent. The traditional cognitive-psychological approach assesses item performance through structured probing of the phases of the question-response process: comprehension, recall, judgment, and response (Tourangeau 1984). This approach measures question quality by the number of problems identified. As in golf, the lower the number of problems, the better the question. This method’s ultimate goal is to (quantitatively) examine how closely data gathered match researcher intent. The interpretive approach, by contrast, is more agnostic to researcher intent. Instead, it seeks to identify “what [a question] captures” (Boeije and Willis 2013) without making assumptions about “correct” or “incorrect” interpretations (i.e., response error). The interpretive approach assumes that respondents understand and process questions through their lived experience and relate this to researchers in narrative form—in other words, respondents do not report on their cognitive processes but their social reality. In the process of employing both the cognitive-psychological and interpretive approaches, researchers gain a sense of potential and actual “problems” with questions and, more crucially, the meanings that respondents attach to key constructs and categories. In the context of the proposed revisions, the interpretive approach shifts the focus from the question itself to the broader constructs of race and ethnicity as they operate—and if they operate—in respondents’ social worlds.

The proposed revisions have reopened debate on whether race and ethnicity are worth measuring using standardized categories, or even worth measuring at all, as some respondents to the OMB public comment process proposed.[1] Strictly speaking, question evaluation methods cannot answer whether topics should be measured—they can only assess how questions function for respondents. Both interpretive and cognitive-psychological approaches to question evaluation can, however, illuminate whether race and ethnicity are constructs that make sense to people and categories by which they can describe themselves, if only on forms and in surveys. A longstanding research agenda examining race and ethnicity questions indicates that race and ethnicity structure substantial aspects of public life in the United States (Miller and Willson 2002; Willson and Dunston 2017). This is not to say that race and ethnicity are indicative of biological difference; rather, they are best understood as “socio-political constructs” (Office of Management and Budget 2023). The response options—both the “minimum categories” specified by SPD 15 and their subcategories—seek to operationalize these constructs. Importantly, as social and personal understandings of race and ethnicity evolve, inclusion of new categories as response options can improve data quality for smaller groups without meaningfully reducing data quality for non-group members (see also U.S. Department of Commerce 2017). Thus, question evaluation of the addition of a new category for MENA respondents can illuminate the degree to which race and ethnicity remain consistent, predictable, and measurable across time and space.[2]

Question evaluation methods provide insight on how to align question design with the actual constructs of interest. While both cognitive-psychological and interpretive perspectives should be employed, an interpretive approach to question evaluation is uniquely suited to assessing the measurement of identity categories such as race and ethnicity, because it frames question evaluation around how constructions of race and ethnicity impact respondents’ lives. For example, prior evaluation of the single-question combined measure of race and ethnicity, which included a “Middle Eastern and North African” category and ethnicity sub-categories, identified four primary patterns of interpretation of race and ethnicity among respondents: as ancestry (genealogy, or a person’s family tree); as cultural affinity or belonging (connectedness to a group based on shared culture); as an administrative category (responses vary with the purpose of the form, such as a medical form or driver’s license); and as a function of others’ perception of respondents in society (how respondents’ race is viewed by others in the United States) (Willson and Dunston 2017). These patterns were consistent with the results of early evaluation of the select-all-that-apply approach to asking about race (Miller and Willson 2002), and though the prevalence of these patterns is not known, none of them indicates response error. A key benefit of this approach to question evaluation is that it can closely link research on race and ethnicity measures with social scientific research on the socio-political constructs of race and ethnicity more broadly, as that literature is also concerned with how race and ethnicity are understood and operationalized as constructs.[3]

As the proposed revisions to the race and ethnicity question set are considered, evaluators of these items should reflect not only on the potential problems that respondents may experience when encountering them but also on how these constructs function in respondents’ lives. Attention to the meaning that survey respondents attach to race and ethnicity not only leads to question design informed by the socio-political context of these constructs but also provides essential information to users of survey statistics and will lead to more equitable federal data collection.


The findings and conclusions in this article are those of the authors and do not necessarily represent the official position of the National Center for Health Statistics, Centers for Disease Control and Prevention.

Zachary Smith, National Center for Health Statistics, qks4@cdc.gov


  1. See, for example, comments in Schneider (2023).

  2. For similar evaluation in the context of sex and gender identity, see Miller, Willson, and Ryan (2021) and Miller and Willson (2022).

  3. See, for example, in sociology, Brunsma, Embrick, and Nanney (2015), Suzuki (2017), and Brubaker (2009); in political science, Taylor (1996), Persons, ed. (1999), and Hutchings and Valentino (2004); and in the health sciences, Dressler, Oths, and Gravlee (2005), Lett et al. (2022), and Adkins-Jackson et al. (2021). This list is far from exhaustive.

Submitted: June 20, 2023 EDT

Accepted: October 12, 2023 EDT

References

Adkins-Jackson, Paris B., Tongtan Chantarat, Zinzi D. Bailey, and Ninez A. Ponce. 2021. “Measuring Structural Racism: A Guide for Epidemiologists and Other Health Researchers.” American Journal of Epidemiology 191 (4): 539–47. https://doi.org/10.1093/aje/kwab239.
Boeije, Hennie, and Gordon Willis. 2013. “The Cognitive Interviewing Reporting Framework (CIRF).” Methodology 9 (3): 87–95. https://doi.org/10.1027/1614-2241/a000075.
Brubaker, Rogers. 2009. “Ethnicity, Race, and Nationalism.” Annual Review of Sociology 35 (1): 21–42. https://doi.org/10.1146/annurev-soc-070308-115916.
Brunsma, David L., David G. Embrick, and Megan Nanney. 2015. “Toward a Sociology of Race and Ethnicity.” Sociology of Race and Ethnicity 1 (1): 1–9. https://doi.org/10.1177/2332649214562028.
Dressler, William W., Kathryn S. Oths, and Clarence C. Gravlee. 2005. “Race and Ethnicity in Public Health Research: Models to Explain Health Disparities.” Annual Review of Anthropology 34 (1): 231–52. https://doi.org/10.1146/annurev.anthro.34.081804.120505.
Gerber, Eleanor R., and Tracy R. Wellens. 1997. “Perspectives on Pretesting: ‘Cognition’ in the Cognitive Interview?” Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique 55 (1): 18–39. https://doi.org/10.1177/075910639705500104.
Hutchings, Vincent L., and Nicholas A. Valentino. 2004. “The Centrality of Race in American Politics.” Annual Review of Political Science 7 (1): 383–408. https://doi.org/10.1146/annurev.polisci.7.012003.104859.
Lett, Elle, Emmanuella Asabor, Sourik Beltrán, Ashley Michelle Cannon, and Onyebuchi A. Arah. 2022. “Conceptualizing, Contextualizing, and Operationalizing Race in Quantitative Health Sciences Research.” The Annals of Family Medicine 20 (2): 157–63. https://doi.org/10.1370/afm.2792.
Miller, Kristen. 2003. “Conducting Cognitive Interviews to Understand Question-Response Limitations.” American Journal of Health Behavior 27 (1): S264–72. https://doi.org/10.5993/ajhb.27.1.s3.10.
———. 2014. “Introduction.” In Cognitive Interviewing Methodology, edited by Kristen Miller, Valerie Chepp, Stephanie Willson, and José-Luis Padilla. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781118838860.
Miller, Kristen, Valerie Chepp, Stephanie Willson, and José-Luis Padilla, eds. 2014. Cognitive Interviewing Methodology. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781118838860.
Miller, Kristen, and Stephanie Willson. 2002. Cognitive Testing of NCHS Race Questions. Hyattsville, MD: National Center for Health Statistics. https://wwwn.cdc.gov/qbank/report.aspx?1018.
———. 2022. Development and Evaluation of a Single, Non-Binary Gender Question for Population-Based Federal Health Surveys. Hyattsville, MD: National Center for Health Statistics. https://wwwn.cdc.gov/QBank/Report.aspx?1235.
Miller, Kristen, Stephanie Willson, and Valerie Ryan. 2021. An Initial Cognitive Evaluation of a 2-Step Gender Identity Measure. Hyattsville, MD: National Center for Health Statistics. https://wwwn.cdc.gov/QBank/Report.aspx?1219.
Office of Management and Budget. 2023. “Initial Proposals for Updating OMB’s Race and Ethnicity Statistical Standards.” Federal Register 88 (18): 5375. https://www.govinfo.gov/content/pkg/FR-2023-01-27/pdf/2023-01635.pdf.
Persons, Georgia A., ed. 1999. Race and Ethnicity in Comparative Perspective. Piscataway, NJ: Transaction Publishers.
Schneider, Mike. 2023. “The U.S. Government Is Reconsidering What Race and Ethnicity Mean.” HuffPost, Associated Press, March 13, 2023. https://www.huffpost.com/entry/race-ethnicity-categories-us-government_n_640f943de4b0a3902d2c7779.
Suzuki, Kazuko. 2017. “A Critical Assessment of Comparative Sociology of Race and Ethnicity.” Sociology of Race and Ethnicity 3 (3): 287–300. https://doi.org/10.1177/2332649217708580.
Taylor, Rupert. 1996. “Political Science Encounters ‘Race’ and ‘Ethnicity.’” Ethnic and Racial Studies 19 (4): 884–95. https://doi.org/10.1080/01419870.1996.9993940.
Tourangeau, Roger. 1984. “Cognitive Science and Survey Methods: A Cognitive Perspective.” In Cognitive Aspects of Survey Design: Building a Bridge between Disciplines, edited by Thomas B. Jabine, Miron L. Straf, Judith M. Tanur, and Roger Tourangeau, 73–100. Washington, DC: National Academy Press.
U.S. Department of Commerce. 2017. 2015 National Content Test Race and Ethnicity Analysis Report: A New Design for the 21st Century. Edited by Kelly Mathews, Jessica Phelan, Nicholas A. Jones, Sarah Konya, Beverly M. Pratt, Julia Coombs, and Michael Bentley. Washington, DC. https://www2.census.gov/programs-surveys/decennial/2020/program-management/final-analysis-reports/2015nct-race-ethnicity-analysis.pdf.
Willis, Gordon B. 2015. Analysis of the Cognitive Interview in Questionnaire Design. Oxford: Oxford University Press.
Willson, Stephanie, and Sheba Dunston. 2017. Cognitive Interview Evaluation of the Revised Race Question, with Special Emphasis on the Newly Proposed Middle Eastern/North African Response Option. Hyattsville, MD: National Center for Health Statistics. https://wwwn.cdc.gov/qbank/report.aspx?1183.
