
Why employees are more likely to second-guess interpretable algorithms

By Sara Brown

More and more, workers are presented with algorithmic recommendations meant to help them make better decisions. But people will only follow that advice if they trust the algorithms behind it.

The way humans view algorithmic recommendations varies depending on how much they know about how the model works and how it was created, according to a new research paper co-authored by MIT Sloan professor Katherine Kellogg.

Prior research has assumed that people are more likely to trust interpretable artificial intelligence models, in which they are able to see how the models make their recommendations. But Kellogg and co-researchers Tim DeStefano, Michael Menietti, and Luca Vendraminelli, affiliated with the Laboratory for Innovation Science at Harvard, found that this isn’t always true.

In a study looking at a large fashion company, the researchers found that human decision makers were more likely to accept advice from an uninterpretable algorithmic model, which was harder to question and interrogate. Counterintuitively, the researchers found that being able to look at how the model works led to lower rates of acceptance.

This leads to some key takeaways for how business leaders should introduce algorithms to the workplace, the researchers write. First, when people can see how an algorithm works, they might believe that they understand those inner workings better than they actually do — an issue leaders should be aware of when rolling out AI initiatives.

Second, including respected peers in the development and testing process of the algorithm makes it more likely that employees will accept its recommendations.

Addressing ‘algorithmic aversion’

Though algorithms often outperform humans, human decision makers can be reluctant to accept their suggestions, either consciously or unconsciously. Researchers call this algorithmic aversion, and it is especially prevalent when people are making decisions in highly uncertain environments such as medicine and financial investing.

The researchers studied how algorithms were received at Tapestry, Inc., a leading New York-based house of well-known accessories and lifestyle brands consisting of Coach, Kate Spade, and Stuart Weitzman.

Like other companies, Tapestry tries to improve product allocation — placing the right number of items in the right stores at the right time to maximize sales. This process is improved by using algorithms for guidance.

For the study, half of employee decisions about which products to send to stores, and how many, were made with recommendations from an interpretable algorithm, which was a weighted moving average of historic sales from the previous three weeks.

The other half were made with recommendations from a machine learning algorithm that was harder to interpret, both because it was pattern-based rather than hypothesis-driven and because it was more complex, including data from the last 16 weeks as well as other information such as sales promotions and holidays. Both algorithms made recommendations that conflicted with allocators’ initial judgements.
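The paper describes the interpretable baseline only at this level of detail, so the sketch below is a hypothetical illustration of what a three-week weighted moving average recommendation might look like; the function name, weights, and sample sales figures are assumptions for illustration, not taken from the study.

```python
def moving_average_recommendation(weekly_sales, weights=(0.5, 0.3, 0.2)):
    # weekly_sales: unit sales per week for one product at one store, most recent week last
    # weights: hypothetical weights, most recent week first (sum to 1)
    recent = list(reversed(weekly_sales[-len(weights):]))  # most recent week first
    return sum(w * s for w, s in zip(weights, recent))

# Illustrative example: three weeks of sales for one product-store pair
print(moving_average_recommendation([40, 55, 62]))  # -> 55.5 units recommended
```

Because every input and weight is visible, an allocator can trace exactly how a recommendation like this was produced, which is what made the interpretable model easy to question.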

Overall, the human allocators trusted the uninterpretable model more than the one that they could more easily understand. The researchers found this stemmed from two factors:

1. Overconfident troubleshooting.

When product allocators received counterintuitive recommendations, they tried to interrogate the reasoning behind the interpretable model and troubleshoot its recommendations. Because that model followed explicit rules, allocators were able to do this, and that ability resulted in lower acceptance of the interpretable algorithm's suggestions, according to the study.


“Because they believed they understood the causes, effects, and inner workings of the algorithm, this often led them to overrule the algorithm’s recommendations,” the researchers write.

Interviews with employees showed that the allocators often created narratives to explain the relationship between model inputs and outputs, which led them to overrule the interpretable algorithm’s recommendations.

But in situations where interpretable and uninterpretable models performed at similar levels, allocators overruling the interpretable algorithm resulted in lower performance on the task (more frequent stockouts, lower sales quantity, and lower revenue).

2. Social proofing the algorithm.

The machine learning model was less intelligible, so employees didn't question it. But they did know that their peers had helped develop and test the uninterpretable algorithm. Knowing this helped reduce their uncertainty, the researchers found, referring to the concept of "social proof" — when people in ambiguous situations take guidance from how other people act, assuming that others have more knowledge about the current situation.

Knowing that people with the same knowledge base and experience had developed the tools made employees more likely to accept them.

“It was the combination of peer involvement in development with not being able to interrogate the uninterpretable model that made allocators more likely to accept recommendations from the uninterpretable model than from the interpretable model,” the researchers write.

Read next: For success with machine learning tools, talk to end users
