What is algorithmic aversion?

A working definition from MIT Sloan

algorithmic aversion (noun)

The conscious or unconscious reluctance of human decision-makers to accept algorithmic recommendations.

Algorithms can help us make better decisions. But to follow their advice, humans must trust them. The way humans view algorithmic recommendations varies based on what they know about how the artificial intelligence model works and how it was created, according to research co-authored by MIT Sloan professor Kate Kellogg. Prior research assumed people are more likely to trust interpretable AI models, in which they can see how the models make their recommendations. But Kellogg found that this isn’t always true.

In an experiment at Tapestry, a New York-based house of lifestyle brands, product allocators were charged with maximizing sales. That involved placing the right number of items in the right stores at the right time. Product allocators received recommendations from either an interpretable algorithm or an uninterpretable machine learning algorithm.

Overall, the human allocators experienced less algorithmic aversion with the uninterpretable model than with the one they could more easily understand. Why? The researchers found that being able to troubleshoot the interpretable algorithm by reviewing its inner workings led allocators to sometimes overrule its recommendations. Meanwhile, knowing that peers had developed and tested the uninterpretable algorithm made allocators more likely to accept its recommendations.
