November 4, 2024

AI Resume Screening Tools Biased Against Black Male Names, Study Finds


AI-driven hiring tools overwhelmingly prefer resumes with names associated with white men, a new University of Washington (UW) study has found.

Resumes with white male names were selected 85% of the time, while those with female-associated names were chosen only 11% of the time.

By contrast, resumes with names associated with Black men fared the worst, with models passing them over in favor of other groups in nearly 100% of cases.

Biases in AI Resume Screening

AI-powered tools are becoming staples in the hiring process. For example, large language model (LLM)-based tools are used to sift through resumes and identify those most relevant to job postings.

UW researchers tested three million combinations of real-world resumes and job descriptions through open-source Massive Text Embedding (MTE) models from Salesforce, Contextual AI, and Mistral. These models are a type of LLM that converts documents into numerical representations, making it easier to compare a resume against a job posting.
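To give a sense of the general technique, here is a minimal sketch of embedding-based resume matching in Python. This is not the study's actual pipeline; the model name, example texts, and the use of the sentence-transformers library are illustrative assumptions only.

```python
# Illustrative sketch only, not the UW study's setup. The model name and
# example texts are placeholders; real screening systems are more complex.
# Requires the `sentence-transformers` package.
from sentence_transformers import SentenceTransformer, util

# Load a small, general-purpose embedding model (placeholder choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

job_description = "Seeking an HR coordinator with recruiting experience."
resumes = [
    "HR coordinator with five years of recruiting experience.",
    "Software engineer specializing in backend development.",
]

# Embed the job posting and each resume as numerical vectors.
job_vec = model.encode(job_description, convert_to_tensor=True)
resume_vecs = model.encode(resumes, convert_to_tensor=True)

# Rank resumes by cosine similarity to the job description. A biased
# embedding space can shift these scores even when only a name changes.
scores = util.cos_sim(job_vec, resume_vecs)[0]
for resume, score in sorted(zip(resumes, scores.tolist()), key=lambda p: -p[1]):
    print(f"{score:.3f}  {resume}")
```

Because ranking rests entirely on these similarity scores, any demographic signal the model has absorbed from its training data can tilt the results without any explicit rule about names.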

The researchers found that the models overwhelmingly preferred names associated with white men, even in fields such as HR that traditionally employ more women.

AI Reflects Societal Biases

Kyra Wilson, a doctoral researcher at UW’s Information School, explained that these biases stem from societal patterns embedded in the data used to train AI models.

“The model learns from the training data, then reproduces or amplifies the same patterns,” she said, according to GeekWire.

Representatives from Salesforce and Contextual AI said their models are intended primarily for research rather than hiring, and that any real-world hiring application would require rigorous bias testing first.

Read: Black-Sounding Names Get Fewer Job Interviews At Largest U.S. Employers

Challenges in Regulation and Transparency

The “black box” nature of many commercial AI systems, where internal processes are opaque, makes assessing potential biases difficult.

Experts agree that reducing discrimination in AI hiring requires a more inclusive approach to data design, ensuring that training data reflects diverse experiences and identities.

Some legislative steps have been taken: New York City mandates disclosure for AI-assisted hiring, and California recently recognized intersectionality as a protected characteristic.

However, Wilson cautions that human involvement can sometimes worsen the problem if decision-makers place too much trust in AI.

Sara Keenan

Tech Reporter at POCIT. Following her master's degree in journalism, Sara cultivated a deep passion for writing and driving positive change for Black and Brown individuals across all areas of life. This passion expanded to include the experiences of Black and Brown people in tech, thanks to her internship as an editorial assistant at a tech startup.