AIs show racial bias when screening CVs, and absolutely no one is surprised

Despite promises that AI would reduce racial bias, these models seem to only perpetuate it.

Mihai Andrei
November 15, 2024 @ 6:58 pm


Ever since large language models (LLMs) like ChatGPT became widely used, experts have warned that these systems could perpetuate or even worsen existing societal biases. Now, a new study has confirmed just that: AI models show a strong preference for white-associated names in hiring simulations, raising concerns about their role in perpetuating discrimination.

AI-generated image of a white and a black job applicant.

Two decades ago, economists conducted a landmark study where they sent out thousands of fictitious job applications to companies in Boston and Chicago. The applications were identical, except for the names — some were traditionally black-sounding, while others were white-sounding. The results were staggering: applicants with white names received 50% more callbacks.

Although the gap has narrowed over time, the bias remains. A study from earlier this year, which sent out 83,000 fake job applications, found a roughly 10% difference in callback rates. Despite promises that AI would reduce human bias, there are signs that these models may not be living up to that expectation.

AI seems to dislike black applicants

Researchers from the University of Washington tested three cutting-edge LLMs using over 500 job descriptions and 500 resumes, focusing on nine occupations, including CEO, teacher, accountant, and engineer.

The objective was to evaluate whether AI systems favored resumes with signals for race (black vs. white) and gender (male vs. female). They also analyzed whether these biases compounded for intersectional identities, such as black women.

The results were striking. Across three million resume-job comparisons, resumes with white-associated names were favored by the AI models in 85% of cases. In contrast, resumes with black-associated names were selected only 8.6% of the time. Although gender bias was less pronounced, male-associated names still had a slight advantage, being preferred just over 50% of the time.

Black men, in particular, were significantly disadvantaged. In some scenarios, they were completely overlooked in favor of white male candidates. Names associated with black women fared slightly better but still faced substantial disadvantages compared to their white counterparts.

Why these biases appear

In some ways, LLMs still function as a “black box”: it’s not always clear why they make the decisions they do. However, researchers believe they can explain at least part of this effect.

For starters, there’s the training data. These models were trained on huge amounts of text, including text scraped from the internet. That text carries the same biases we carry as a society, and perhaps amplifies them. In a way, the models “learn” social stereotypes.

The second is a frequency effect. If black people have historically been underrepresented in certain fields, an LLM can pick up on that pattern and perpetuate it in its selections.

Other factors may be at play as well, but it’s hard to disentangle these from the racial and gender influences.

How to eliminate the bias

At first glance, the answer seems easy: just remove the name from CVs. This idea has been floating around for a while, but it may not be all that effective. The name is just one of the racial identifiers an AI can detect: educational institutions, locations, and even particular word choices can signal gender and racial identity. Removing the name addresses part of the problem, but only part of it. It also doesn’t touch the root cause: the biases embedded in the language models themselves.
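To see why name removal only scratches the surface, here’s a minimal, hypothetical sketch in Python. The name list and the sample resume line are made up (the first names echo the kind used in the audit studies described above); the point is that even after the given name is scrubbed, other fields keep signaling identity.

```python
import re

# Hypothetical list of given names to scrub; a real pipeline would use a
# named-entity recognizer rather than a fixed list.
NAMES = ["Lakisha", "Jamal", "Emily", "Greg"]

def redact_names(resume_text: str) -> str:
    """Blank out listed first names from a resume."""
    pattern = r"\b(" + "|".join(map(re.escape, NAMES)) + r")\b"
    return re.sub(pattern, "[REDACTED]", resume_text)

print(redact_names("Jamal Washington, Morehouse College, Atlanta, GA"))
# -> [REDACTED] Washington, Morehouse College, Atlanta, GA
# The surname, the college, and the city all survive redaction and can
# still act as demographic proxies for a model scoring this resume.
```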

A Salesforce spokesperson told GeekWire that the company doesn’t just blindly deploy these AI models: “Any models offered for production use go through rigorous testing for toxicity and bias before they’re released, and our AI offerings include guardrails and controls to protect customer data and prevent harmful outputs.” Claims like this, however, are hard to verify independently.

A more thorough solution would be to modify the training data, adjust the algorithms to disregard specific identity markers, or debias the embeddings themselves. However, as the study notes, these solutions often reduce people’s identities to “same vs. different,” without acknowledging the unique challenges that marginalized groups face.
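For a concrete sense of what “debiasing embeddings” can look like, here is a minimal sketch of one textbook technique (sometimes called “hard debiasing”): estimate a bias direction from group-associated names and project it out of every vector. The vectors below are tiny, made-up toys; this illustrates the general idea, not the method used in the study above.

```python
import numpy as np

def neutralize(vec: np.ndarray, bias_dir: np.ndarray) -> np.ndarray:
    """Remove the component of `vec` that lies along the bias direction."""
    b = bias_dir / np.linalg.norm(bias_dir)
    return vec - np.dot(vec, b) * b

# Toy 4-dimensional embeddings, invented for illustration; real embeddings
# have hundreds of dimensions and come from the model itself.
emb_white_name = np.array([0.9, 0.1, 0.3, 0.2])
emb_black_name = np.array([0.1, 0.9, 0.3, 0.2])

# One common estimate of a bias direction: the difference between
# embeddings of names associated with each group.
bias_direction = emb_white_name - emb_black_name

resume_vec = np.array([0.5, 0.2, 0.7, 0.1])
debiased = neutralize(resume_vec, bias_direction)

# After neutralization the resume vector is orthogonal to the bias axis,
# so a downstream scorer can no longer separate candidates along it.
print(np.dot(debiased, bias_direction))  # ~0.0
```

The catch, as the study points out, is that collapsing identity onto a single direction to be erased treats “difference” as the whole problem, rather than engaging with what marginalized groups actually face.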

Perhaps the most challenging solution, but also potentially the most effective, is changing how we conceptualize professionalism. For instance, if certain words or phrases commonly associated with women (like “cared” or “collaborated”) are valued less by AI systems, we may need to reevaluate what we consider a “strong” resume. Language is context-dependent. Words associated with empathy or teamwork should be just as valued as those associated with leadership and assertiveness.

You should care about this

AI is poised to transform job recruitment. Tools like ChatGPT have made it easier to generate tailored job applications, while companies are increasingly using AI to screen resumes. And you’re probably already starting to see how this can be a problem.

If companies adopt these systems uncritically, they’re simply perpetuating existing biases. And often, they’re not actually hiring the best people for the job. This is both a social and a productivity problem. By replicating and even amplifying biases, AI-based resume screening tools could make it harder for certain groups to advance their careers. Ultimately, this can impact the economic and social mobility of entire communities.

In addition, these findings underscore the importance of transparent audits and regulatory oversight for AI hiring tools. It’s one thing to automate repetitive tasks, but when it comes to shaping people’s careers and livelihoods, fairness must be prioritized.
