
Language Bias in Child Welfare: Approaches to Identifying and Studying Biased Language to Advance Equitable Child Welfare Practice

Project: Technical Assistance on Evaluation for Discretionary Grant Programs


Language is one of the most powerful indicators and transmitters of bias in child welfare and other human service settings.

Children who are Black, Indigenous, or other persons of color are disproportionately involved in child welfare, putting them at an early disadvantage. Compared with white children, they are more vulnerable to maltreatment in out-of-home care, experience longer stays in care, and have lower rates of reunification with their families. Implicit biases of child welfare professionals may drive inequitable responses to families and disparate outcomes.

This brief provides an overview of biased language in child welfare case practice and strategies to identify and study it. It covers concepts such as labeling, abstract versus concrete language, and sociolinguistic inequality, and it describes methods for detecting bias such as text mining, machine learning, and dictionary-based analysis. Using and building on existing research methods, tools, and data sources may improve understanding of language bias and contribute to more effective practices to address disparities in the child welfare system.
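To make the dictionary-based approach concrete, the minimal sketch below counts how often terms from a lexicon of labeling language appear in a set of case-note strings. The lexicon, the sample notes, and the function name are illustrative assumptions for this example only; an actual study would draw its terms from validated instruments or prior research and would likely combine this step with the text mining and machine learning methods the brief describes.

```python
import re
from collections import Counter

# Hypothetical lexicon of labeling terms (illustrative only; not drawn from the brief).
LABELING_TERMS = {"noncompliant", "hostile", "unmotivated", "resistant"}

def count_labeling_terms(case_notes):
    """Count occurrences of lexicon terms across a list of case-note strings."""
    counts = Counter()
    for note in case_notes:
        # Lowercase and extract simple word tokens before matching against the lexicon.
        tokens = re.findall(r"[a-z']+", note.lower())
        counts.update(token for token in tokens if token in LABELING_TERMS)
    return counts

# Example usage with made-up case notes.
notes = [
    "Mother was noncompliant with the service plan and appeared hostile.",
    "Father attended the visit and engaged with the worker.",
]
print(count_labeling_terms(notes))  # Counter({'noncompliant': 1, 'hostile': 1})
```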