What is algorithmic aversion?
A working definition from MIT Sloan
algorithmic aversion (noun)
The conscious or unconscious reluctance of human decision-makers to accept algorithmic recommendations.
Algorithms can help us make better decisions, but to follow their advice, humans must trust them. How people view algorithmic recommendations varies with what they know about how the artificial intelligence model works and who created it, according to research co-authored by MIT Sloan professor Kate Kellogg. Prior research assumed that people are more likely to trust interpretable AI models, in which they can see how the model arrives at its recommendations. But Kellogg found that this isn't always true.
In an experiment at Tapestry, a New York-based house of lifestyle brands, product allocators were charged with maximizing sales. That involved placing the right number of items in the right stores at the right time. Product allocators received recommendations from either an interpretable algorithm or an uninterpretable machine learning algorithm.
Overall, the human allocators experienced less algorithmic aversion with the uninterpretable model than with the one they could more easily understand. Why? The researchers found that being able to troubleshoot the interpretable algorithm by reviewing its inner workings led allocators to sometimes overrule its recommendations. Meanwhile, knowing that peers had developed and tested the uninterpretable algorithm made allocators more likely to accept its recommendations.