
AI ain’t for everyone — who trusts bots, and why


A doctor diagnoses a fever and an aching throat as strep throat, and we accept the diagnosis and follow the doctor’s instructions. The same goes when an insurance agent or financial advisor makes a policy or investment recommendation: we hear what they have to say and typically execute on their plan.

But what happens when the same suggestion comes from an artificial intelligence-driven bot? What factors determine whether a human is wired to accept advice from machines? Are certain types of people more predisposed than others to trust AI?

Such questions are becoming more consequential as organizations begin to deploy an army of bots to enhance customer interactions. To be successful, companies must figure out which exchanges can benefit from AI and which still require human-to-human interaction, said MIT Sloan professor Renée Gosline, a researcher with the MIT Initiative on the Digital Economy.

Gosline recently presented new findings that shed light on who is more likely to embrace AI-enhanced decision-making and offer guideposts for companies trying to effectively integrate robo-advisors and AI capabilities into their user experiences.

“The development of bots and algorithms to aid behavior is outpacing our understanding of who and when they help most,” said Gosline, whose research is co-authored by PhD student Heather Yang. “We need to think about the increasing outsourcing of decisions — who does it, when, and what the implications are when we do it.”

Detecting algorithmic appreciation

Behavioral science provides a lens for evaluating how people navigate the decision-making process. System 1 thinking comprises fast, near-automatic decisions — think slamming on car brakes to avoid a deer in the road.

System 2 thinking is a much more deliberate process: When choosing a wine to order with dinner, inputs can include your preference for a particular grape, what food is being served, or suggestions from friends.

To understand exactly who is more likely to rely on AI-enhanced decision-making, Gosline turned to the Cognitive Reflection Test, a measure of whether someone tends toward “lazy” System 1 processing or more “effortful” System 2 processing.
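The article doesn’t reproduce the test items, but the classic three-item CRT (Frederick, 2005) pits an intuitive wrong answer against a reflective correct one. Here is a minimal scoring sketch in Python, using the standard published items for illustration; the study’s exact materials may differ:

```python
# Scoring the classic three-item CRT (Frederick, 2005). Each item has an
# intuitive "System 1" answer and a correct, reflective "System 2" answer.
CRT_ITEMS = [
    # (question, intuitive answer, correct answer)
    ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more than "
     "the ball. How much does the ball cost, in cents?", 10, 5),
    ("If it takes 5 machines 5 minutes to make 5 widgets, how long would it "
     "take 100 machines to make 100 widgets, in minutes?", 100, 5),
    ("A patch of lily pads doubles in size every day. If it takes 48 days to "
     "cover the lake, how many days does it take to cover half the lake?",
     24, 47),
]

def score_crt(answers: list[int]) -> int:
    """Return the number of correct (System 2) responses, 0 to 3."""
    return sum(
        1 for answer, (_, _, correct) in zip(answers, CRT_ITEMS)
        if answer == correct
    )

print(score_crt([5, 100, 47]))  # 2 -- one intuitive slip on the widgets item
```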

Along with the CRT, participants in the study were presented with a range of scenarios intended to test their aversion to or appreciation for algorithms. One asked them to choose between getting guidance from a human financial advisor or a bot on how best to manage an investment portfolio; others covered health and hiring choices.
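The article doesn’t specify how these forced choices were coded into aversion or appreciation. One plausible sketch, assuming a simple majority rule across scenarios (the rule and scenario names below are hypothetical, not the study’s method), is:

```python
# Hypothetical coding scheme: a participant who picks the bot in most
# scenarios is labeled algorithm-appreciative, otherwise algorithm-averse.
# The study's actual classification method is not described in the article.
SCENARIOS = ["investment_advice", "health_guidance", "hiring_decision"]

def classify(choices: dict[str, str]) -> str:
    """choices maps each scenario to 'human' or 'bot'."""
    bot_picks = sum(choices[s] == "bot" for s in SCENARIOS)
    return "appreciation" if bot_picks > len(SCENARIOS) / 2 else "aversion"

print(classify({
    "investment_advice": "human",
    "health_guidance": "human",
    "hiring_decision": "bot",
}))  # aversion
```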


Algorithmic aversion is the norm for 62% of the population.

Initial studies conducted as part of the research found that participants came in with a prior bias: algorithmic aversion was the norm for 62% of the population, who indicated they felt humans were generally more capable, empathic, and responsive.

Those respondents with algorithmic appreciation said they believed algorithms could be programmed to give fair and logical answers.

The study found a positive linear relationship between System 2 thinking and algorithmic appreciation. However, Gosline said the mechanism isn’t yet clear, because traditional factors such as demographics, confidence in the domain, and past experience with AI did not predict the predisposition.
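In analysis terms, a positive linear relationship means a least-squares fit of an appreciation measure against CRT scores yields a positive slope. The sketch below is a hypothetical illustration, not the study’s actual model or data:

```python
import numpy as np

def appreciation_fit(crt_scores, appreciation_ratings):
    """Fit appreciation ~ intercept + slope * CRT score by least squares.

    Returns (slope, r): a positive slope and correlation are consistent
    with the reported relationship between System 2 thinking and
    algorithmic appreciation."""
    crt = np.asarray(crt_scores, dtype=float)
    app = np.asarray(appreciation_ratings, dtype=float)
    slope, _intercept = np.polyfit(crt, app, deg=1)
    r = np.corrcoef(crt, app)[0, 1]
    return slope, r

# Hypothetical data for illustration only: CRT scores 0-3,
# appreciation on a 1-7 scale.
slope, r = appreciation_fit([0, 1, 2, 3, 2, 0], [2, 3, 5, 6, 4, 1])
print(f"slope={slope:.2f}, r={r:.2f}")
```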

The one covariate identified was self-rated social anxiety: respondents who don’t feel confident in their ability to get along well with other humans are more likely to show algorithmic appreciation, she said.

A buddy system for scientists

Based on the results, Gosline is advocating for a buddy system that builds bridges between data scientists and behavioral science experts as companies develop their bots and AI-enhanced experiences.

“As we race to hire data scientists, we should have teams also comprised of behavioral scientists who understand the complexity of the behaviors we’re measuring behind these little dots,” she explained.

Gosline’s team is also looking into whether System 2 thinkers are more vulnerable to algorithmic bias, a question that matters because this group is typically higher performing and associated with greater career success.

“These are powerful people,” she said. “If they are reliant on algorithms and those algorithms are biased, what are the implications for social inequality?”

Gosline’s full presentation can be viewed here. The full suite of videos from the annual conference of the MIT Initiative on the Digital Economy is available to the initiative’s corporate members; learn more about the program here.

For more info: Tracy Mayor, Senior Associate Director, Editorial, (617) 253-0065