AI is reinventing hiring — with the same old biases. Here’s how to avoid that trap
In this opinion piece, MIT Sloan professor Emilio J. Castilla argues that:
- Algorithms promise objectivity, but in hiring, they’re learning human biases all too well.
- Until we build fairer systems for defining and rewarding talent, algorithms will simply mirror the inequities and unfairness we have yet to correct.
- The AI hiring revolution doesn’t have to be a story of automated bias. Asking tough questions before automating recruitment and selection can lead to fairer systems.
In my MIT Sloan classroom, I often ask executives and MBAs, “Who here believes AI can eliminate bias and unfairness in hiring?” Most hands go up. Then I show them the data, and their optimism fades.
One example: Amazon was forced to scrap its AI-driven recruitment tool after discovering that it penalized resumes containing the word “women” — as in “women’s chess club captain” or “women’s college.”
Another case: HireVue’s speech recognition algorithms, used by more than 700 companies, including Goldman Sachs and Unilever, were designed to assess candidates’ proficiency in speaking English. But research found that those algorithms disadvantaged non-white and deaf applicants.
Those are not isolated events; they are warnings — especially considering that the market for AI screening tools in hiring is projected to surpass $1 billion by 2027, with an estimated 87% of companies already having deployed these systems.
The appeal is clear: faster screening, lower costs, and the promise of bias-free hiring decisions. But the reality is more complex — and far more troubling.
The problem: Bad data in
AI tools don’t operate in a vacuum. They learn from existing data, which can be incomplete, poorly coded, or shaped by decades of exclusion and inequality. Feed that data into a machine and the results aren’t fair; they reproduce bias and inefficiency at scale.
Some AI tools have downgraded resumes from graduates of historically Black colleges and women’s colleges because those schools haven’t traditionally fed into white-collar pipelines. Others have penalized candidates with gaps in employment, disadvantaging parents — especially mothers — who paused their careers for caregiving. What appears to be an objective evaluation is really a rerun of old prejudices, stereotypes, and other hiring mistakes, now stamped with the authority of data science.
Beware the “aura of neutrality”
This is the paradox of algorithmic meritocracy. Train an AI system on past hiring decisions — who passed the first screening, who got an interview, who was hired, and who was promoted — and it won’t necessarily learn fairness. But it will learn patterns that were likely shaped by flawed human assumptions.
And because these systems are marketed as “data-driven,” their decisions are harder to challenge. A manager’s judgment can be questioned; an algorithm’s ranking arrives with an aura of neutrality. We risk teaching AI tools to perpetuate every mistake, every prejudice, and every lazy assumption that has shaped generations of bad decisions.
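To make that mechanism concrete, here is a minimal, hypothetical sketch (synthetic data, scikit-learn’s LogisticRegression; the group, skill, and career_gap variables are illustrative assumptions, not features of any real vendor’s system). The model is trained only on seemingly neutral features, yet it reproduces the penalty that past human decisions attached to career gaps; because gaps fall disproportionately on one group, that group is disadvantaged even though the protected attribute never enters the model.

```python
# Hypothetical sketch: a model trained on past hiring outcomes reproduces
# historical bias through a proxy feature, even when the protected attribute
# is never shown to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" data: equal underlying skill across groups.
group = rng.integers(0, 2, n)          # 0 or 1; never given to the model
skill = rng.normal(0, 1, n)
# Career gaps occur far more often in group 1 (a proxy for group membership).
career_gap = (rng.random(n) < np.where(group == 1, 0.5, 0.1)).astype(int)

# Past human decisions favored group 0 and penalized gaps, independent of skill.
past_hire = (skill + 1.0 * (group == 0) - 1.0 * career_gap
             + rng.normal(0, 0.5, n)) > 0.5

# Train only on "neutral" features: skill and the career-gap proxy.
X = np.column_stack([skill, career_gap])
model = LogisticRegression().fit(X, past_hire)

# Score two equally skilled candidates who differ only in the proxy feature.
candidates = np.array([[1.0, 0], [1.0, 1]])
p_no_gap, p_gap = model.predict_proba(candidates)[:, 1]
print(f"P(hire | no gap) = {p_no_gap:.2f}, P(hire | gap) = {p_gap:.2f}")
# The learned gap penalty flows straight from the biased history, so the
# group with more gaps is scored lower despite identical skill.
```

The point of the sketch is that simply withholding the protected attribute is not enough: proxy features carry the history of past decisions with them.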
First, check your assumptions
In my 2025 book, “The Meritocracy Paradox,” I argue that organizations invoking meritocracy without addressing structural challenges risk deepening the very gaps they seek to close. The same holds true for AI. Before we let AI automate hiring decisions, we need to carefully examine the data and the assumptions being encoded into these systems.
That means asking tough questions before automating candidate recruitment and selection: What data are we encoding? What processes are these algorithms built on, and are they still relevant to our organization’s needs? Who defines merit? Whose career paths are rewarded — or ignored?
AI won’t fix the problem of bias and inefficiency in hiring, because the problem isn’t technological. It’s human. Until we build fairer systems for defining and rewarding talent, algorithms will simply mirror the inequities and unfairness we have yet to correct.
AI as a turning point
The AI hiring revolution doesn’t have to be a story of automated bias or unfairness. It can be a turning point: a chance to reset how organizations define, measure, and reward talent, with the promise of employment opportunities for all. But that requires humility about what algorithms can and cannot do. Instead of using AI to avoid hard questions, we should use it to expose where our assumptions fall short and to pinpoint and address weaknesses in our talent management strategies.
That means engaging in continuous monitoring to catch inequities and inefficiencies, not executing one-time fixes. If we fail to confront these issues, the promise of “bias-free” AI will remain just that — a promise. And yesterday’s biases and stereotypes will quietly shape tomorrow’s workforce — one resume at a time.
Emilio J. Castilla is a professor of work and organization studies at MIT Sloan, co-director of the MIT Institute for Work and Employment Research, and author of “The Meritocracy Paradox: Where Talent Management Strategies Go Wrong and How to Fix Them” (Columbia University Press, 2025). Castilla’s research focuses on the organizational and social aspects of work and employment, with an emphasis on recruitment, hiring, development, and career management, as well as on the impact of teamwork and social relations on organizational performance and innovation. Recent work includes the role of worker voice in successful AI implementations and an examination of the effect of gendered language in job postings.