Ideas Made to Matter

Why people favor AI in certain domains but not others

Despite the growing prevalence of artificial intelligence in day-to-day life — from drafting emails to driving cars and diagnosing illnesses — how people feel about AI remains muddy. One line of research demonstrates public distrust and a generally negative attitude toward AI, while another line shows mostly positive attitudes toward the technology.

So under what circumstances are people more willing to trust AI?

New research by MIT Sloan associate professor Lu and colleagues explores this question, identifying the conditions that lead people to favor AI in certain domains but not others. The takeaway: People are open to AI in areas where they feel the technology performs better than humans and where the task at hand does not require personalization.

The personalization dimension is important but often overlooked. “Just because people perceive an AI technology as more capable than humans does not mean they will adopt it,” Lu said. “They also care about personalization. If either one of these conditions is not satisfied, then people exhibit an aversion to AI.”

Considering capability and personalization

Lu and a team of researchers from Sun Yat-sen University, Fudan University, and Shenzhen University proposed the Capability-Personalization Framework, which posits that when judging the application of AI, people focus on two key dimensions:

  1. Perceived capability of AI: Is AI perceived as more capable than humans at the task? Can it do the job better than a person?
  2. Perceived necessity for personalization: Does the task require human sensitivity or a personalized approach?

To test this framework, the researchers conducted a meta-analysis of more than 160 studies on various AI applications that involved over 80,000 participants. All of the studies empirically tested and compared preferences for AI versus humans in a specific context. For each study, a group of coders rated the perceived capability of AI in that context, as well as the task's need for personalization.

Consistent with the Capability-Personalization Framework, the researchers found that:

  • AI appreciation occurs when AI is considered more capable than humans and the task does not require personalization. For example, when predicting future sales of a specific product or estimating the art period of a painting, people tend to prefer AI’s predictions over those of humans.
  • AI aversion occurs when AI is perceived as less competent than humans, when personalization is required, or both. In the case of diagnosing skin cancer or recommending movies, for instance, AI has proved itself to be as good as, if not better than, people. But because those tasks require personalization, people are reluctant to rely on AI. And while running an experiment with human participants may not require much personalization, in this domain AI has not yet proved itself competent, so again people demonstrate AI aversion.
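The framework's predictions above can be summarized as a simple 2x2 decision rule. The sketch below is purely illustrative (not from the study itself); the function name and labels are hypothetical, chosen only to mirror the framework's two dimensions.

```python
# Illustrative sketch of the Capability-Personalization Framework as a
# 2x2 decision rule. Names here are hypothetical, not from the study.

def predicted_attitude(ai_more_capable: bool, needs_personalization: bool) -> str:
    """Return the framework's predicted attitude toward AI for a given task."""
    if ai_more_capable and not needs_personalization:
        return "appreciation"  # AI seen as better AND the task is impersonal
    return "aversion"          # either condition fails

# Mapping the article's examples onto the rule:
print(predicted_attitude(True, False))   # sales forecasting: appreciation
print(predicted_attitude(True, True))    # movie recommendations: aversion
print(predicted_attitude(False, False))  # running experiments: aversion
```

Note that appreciation occupies only one cell of the 2x2: both conditions must hold, which is why aversion is the more common outcome across contexts.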

How to improve AI adoption in your organization

The takeaway for managers rolling out AI initiatives in their organizations: “AI developers and organizations should pay attention not only to AI’s capability but also to its usage context,” Lu said. “AI will be appreciated only when it’s perceived as more capable and personalization is considered unnecessary.”

Additionally, developers and managers need to understand how different people in the same context might diverge in how they interpret the value of AI. A recruiter hiring for a job opening may believe that screening and ranking resumes is impersonal and that AI can do it better than humans; people who are applying to that job may perceive this same technology very differently, disliking its use because they consider job applications very personal.

Given such perceptions, organizations looking to use AI should prioritize contexts characterized by low personalization. Alternatively, they could work to reduce the perceived need for personalization. Consumers, for their part, should be aware of their own biases and assess whether the capability of AI outweighs concerns about personalization.

“Maximizing AI’s potential means understanding when it’s welcome — and when it’s not,” Lu said. “Only by addressing both capability and personalization can we move toward meaningful human-AI collaboration.”

Lu’s research colleagues were professor Xin Qin, associate professor Chen Chen, doctoral students Hansen Zhou and Xiaowei Dong, and postdoctoral fellow Limei Cao from Sun Yat-sen University; assistant professor Xiang Zhou from Shenzhen University; and associate professor Dongyuan Wu from Fudan University.
