What is algorithmic aversion?
A working definition from MIT Sloan
algorithmic aversion (noun)
The conscious or unconscious reluctance of human decision-makers to accept algorithmic recommendations.
Algorithms can help us make better decisions. But to follow their advice, humans must trust them. How humans respond to algorithmic recommendations depends on what they know about how the artificial intelligence model works and how it was created, according to research co-authored by MIT Sloan professor Kate Kellogg. Prior research assumed that people are more likely to trust interpretable AI models, in which they can see how the model arrives at its recommendations. But Kellogg found that this isn’t always true.
In an experiment at Tapestry, a New York-based house of lifestyle brands, product allocators were charged with maximizing sales. That involved placing the right number of items in the right stores at the right time. Product allocators received recommendations from either an interpretable algorithm or an uninterpretable machine learning algorithm.
Overall, the allocators experienced less algorithmic aversion with the uninterpretable model than with the one they could more easily understand. Why? The researchers found that being able to troubleshoot the interpretable algorithm by reviewing its inner workings led allocators to sometimes overrule its recommendations. Meanwhile, knowing that peers had developed and tested the uninterpretable algorithm made allocators more likely to accept its recommendations.
Working Definitions: Artificial Intelligence
MIT Sloan's Working Definitions explore the words and phrases behind emerging management ideas.