

AIs show a CV racial bias and absolutely no one is surprised

Despite promises that AI would reduce racial bias, these models seem to only perpetuate it.

Mihai Andrei
November 15, 2024 @ 6:58 pm


Ever since large language models (LLMs) like ChatGPT became widely used, experts have warned that these systems could perpetuate or even worsen existing societal biases. Now, a new study has confirmed just that: AI models show a strong preference for white-associated names in hiring simulations, raising concerns about their role in perpetuating discrimination.

AI-generated image of a white and a black job applicant.

Two decades ago, economists conducted a landmark study where they sent out thousands of fictitious job applications to companies in Boston and Chicago. The applications were identical, except for the names — some were traditionally black-sounding, while others were white-sounding. The results were staggering: applicants with white names received 50% more callbacks.

Although the gap has narrowed over time, the bias remains. A study published earlier this year sent out 83,000 fake job applications and found a roughly 10% difference in callback rates. Despite promises that AI would reduce human bias, there are signs these models aren’t living up to that expectation.

AI seems to dislike black applicants

Researchers from the University of Washington tested three cutting-edge LLMs against more than 500 job descriptions and 500 resumes, focusing on nine occupations, including CEO, teacher, accountant, and engineer.

The objective was to evaluate whether AI systems favored resumes with signals for race (black vs. white) and gender (male vs. female). They also analyzed whether these biases compounded for intersectional identities, such as black women.
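The study’s exact pipeline isn’t described here, but the general shape of a name-substitution audit is easy to sketch. Below is a minimal illustration, not the researchers’ actual code: it embeds one job description and two otherwise-identical, name-swapped resumes with an off-the-shelf sentence-embedding model, then compares relevance scores. The embedding model and the example names are assumptions made for the sketch.

```python
# Minimal sketch of a name-substitution resume audit.
# NOT the study's actual code; model choice and names are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf embedder

job_description = "Senior accountant. CPA required, 5+ years of audit experience."
resume_template = (
    "{name}\n"
    "Certified Public Accountant with 7 years of experience in audit "
    "and financial reporting. B.S. in Accounting."
)

# Identical resumes differing only in the name at the top.
variants = {
    "white-associated": resume_template.format(name="Todd Becker"),
    "black-associated": resume_template.format(name="Tyrone Washington"),
}

job_vec = model.encode(job_description)
for label, resume in variants.items():
    score = util.cos_sim(model.encode(resume), job_vec).item()
    print(f"{label}: relevance = {score:.4f}")

# A full audit repeats this over hundreds of resumes, job descriptions,
# and name pairs, then counts how often each group's variant ranks higher.
# Since the resumes are identical apart from the name, any systematic gap
# is bias by construction.
```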

The results were striking. Across three million resume-job comparisons, resumes with white-associated names were favored by the AI models in 85% of cases. In contrast, resumes with black-associated names were selected only 8.6% of the time. Although gender bias was less pronounced, male-associated names still had a slight advantage, being preferred just over 50% of the time.

Black men, in particular, were significantly disadvantaged. In some scenarios, they were completely overlooked in favor of white male candidates. Resumes with black female names fared slightly better but still faced substantial disadvantages compared to their white counterparts.

Why these biases appear

In some ways, LLMs still operate as a “black box”: it’s often unclear why they make the decisions they do. However, researchers believe they can explain at least part of this effect.

For starters, there’s the training data. These models were trained on huge amounts of text, much of it scraped from the internet. That text carries the same biases we carry as a society, and perhaps even more of them; in effect, the models “learn” social stereotypes.

The second is a frequency effect. If black people have historically been underrepresented in certain fields, an LLM can pick up on that pattern and perpetuate it when ranking candidates.

Other factors may be at play as well, but it’s hard to disentangle these from the racial and gender influences.

How to eliminate the bias

At first glance, the answer seems easy: just remove the name from CVs. The idea has been floating around for a while, but it may not be all that effective. A name is just one of the racial identifiers an AI can detect; educational institutions, locations, and even particular word choices can signal gender and racial identity. Removing names addresses part of the problem, but only part of it, and it doesn’t touch the root cause: the biases embedded in the language models themselves.
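To make that concrete, here is a minimal redaction sketch using spaCy’s named-entity recognizer (a tool chosen purely for illustration; neither the article nor the study prescribes one). Even with the name stripped out, the remaining text still broadcasts identity.

```python
# Minimal sketch of name redaction on a resume.
# Tool choice (spaCy) is an illustrative assumption.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def redact_names(text: str) -> str:
    """Replace PERSON entities with a placeholder token."""
    doc = nlp(text)
    redacted = text
    # Replace from the end so earlier character offsets stay valid.
    for ent in reversed(doc.ents):
        if ent.label_ == "PERSON":
            redacted = redacted[:ent.start_char] + "[NAME]" + redacted[ent.end_char:]
    return redacted

resume = (
    "Lakisha Robinson\n"
    "Graduate of Howard University. President, Black Students in Accounting. "
    "Volunteer, Southside Chicago youth mentorship program."
)
print(redact_names(resume))
# The name is gone, but the university, the student association, and the
# neighborhood still signal racial identity to a model.
```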

A Salesforce spokesperson told GeekWire that the company doesn’t just blindly use these AI models. “Any models offered for production use go through rigorous testing for toxicity and bias before they’re released, and our AI offerings include guardrails and controls to protect customer data and prevent harmful outputs.” However, such claims are hard to verify independently.

A more thorough approach would be modifying the training data, adjusting algorithms to disregard specific identity markers, or debiasing the models’ embeddings. However, as the study notes, these solutions often reduce people’s identities to “same vs. different,” without acknowledging the unique challenges that marginalized groups face.
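For the embedding route, one well-known technique from the research literature is “hard debiasing” (Bolukbasi et al., 2016): estimate a bias direction from pairs of identity-associated words, then project that direction out of every vector. The toy sketch below shows the core linear algebra on stand-in vectors. It illustrates the general idea rather than anything this study implemented, and it also demonstrates the very reduction the authors criticize: identity collapses to a single axis.

```python
# Toy sketch of "hard debiasing": removing a bias direction from embeddings.
# Vectors here are random stand-ins for real learned embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Stand-in embeddings for identity-paired words (e.g., name pairs).
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(8)]

# Estimate the bias direction as the normalized mean difference between
# paired vectors. (Bolukbasi et al. use PCA over such differences; the
# mean is a simpler stand-in.)
diffs = np.stack([a - b for a, b in pairs])
bias_dir = diffs.mean(axis=0)
bias_dir /= np.linalg.norm(bias_dir)

def debias(v: np.ndarray) -> np.ndarray:
    """Project out the component of v lying along the bias direction."""
    return v - np.dot(v, bias_dir) * bias_dir

resume_vec = rng.normal(size=dim)  # stand-in for an embedded resume
cleaned = debias(resume_vec)

print(np.dot(cleaned, bias_dir))  # ~0.0: orthogonal to the bias direction
```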

Perhaps the most challenging solution, but also potentially the most effective, is changing how we conceptualize professionalism. For instance, if certain words or phrases commonly associated with women (like “cared” or “collaborated”) are valued less by AI systems, we may need to reevaluate what we consider a “strong” resume. Language is context-dependent. Words associated with empathy or teamwork should be just as valued as those associated with leadership and assertiveness.

You should care about this

AI is poised to transform job recruitment. Tools like ChatGPT have made it easier to generate tailored job applications, while companies are increasingly using AI to screen resumes. And you’re probably already starting to see how this can be a problem.

If companies adopt these systems uncritically, they’re simply perpetuating existing biases. And often, they’re not actually hiring the best people for the job. This is both a social and a productivity problem. By replicating and even amplifying biases, AI-based resume screening tools could make it harder for certain groups to advance their careers. Ultimately, this can impact the economic and social mobility of entire communities.

In addition, these findings underscore the importance of transparent audits and regulatory oversight for AI hiring tools. It’s one thing to automate repetitive tasks, but when it comes to shaping people’s careers and livelihoods, fairness must be prioritized.
