The conversation surrounding artificial intelligence in the labor market has largely focused on the workforce. In what ways might AI assist humans? Where will it increase productivity or creativity? Which jobs will it make obsolete?
This framing neglects another part of the equation, according to Yunhao Zhang, SM ’20, PhD ’23, a postdoctoral fellow at the Psychology of Technology Institute. “This is the supply side of the equation, but there has been very little discussion about the demand side,” he said. “Ultimately, the work that AI does will be judged by whether or not consumers like it.”
With this in mind, Zhang and Renée Richardson Gosline, a senior lecturer and research scientist at MIT Sloan, studied how people perceive work created by generative AI, humans, or some combination of the two. Their findings are detailed in a new paper, “Human Favoritism, Not AI Aversion.”
They found that when people knew a product’s source, they expressed a positive bias toward content created by humans. Yet at the same time, and contrary to the traditional idea of “algorithmic aversion,” people expressed no aversion toward AI-generated content when they knew how it was created. In fact, when respondents were not told how content was created, they preferred AI-generated content.
For the study, the researchers created two tasks: writing marketing copy for five retail products, and drafting persuasive content for five uncontroversial campaigns (“eat less junk food,” for instance). The tasks were completed in four different ways:
- For the human-only approach, they enlisted professional content creators from Accenture Research to draft the marketing copy and persuasive campaign content.
- The augmented human approach first used AI (GPT-4) to generate ideas. Human consultants then shaped them into final products.
- The augmented AI approach worked the other way, with human consultants creating drafts and generative AI shaping them into final products.
- The AI-only approach had GPT-4 complete the task on its own.
Participants in the experiment were recruited to evaluate the quality of this work and were split into three groups. One group knew nothing about the content creation process. A second group knew about the four different approaches but was told nothing more. The third group knew which approach was responsible for each piece of content they viewed.
This nuanced method was essential to creating an accurate picture of the world, Gosline said. While many studies compare the work of humans against the work of AI, “in reality, our everyday experiences reflect a much more subtle gradation, where we have human decision-making shaped by algorithms or algorithms in which people are in the loop,” she said. “We wanted to get a fine-grained understanding of the various ways in which humans and AI can collaborate and, from that, get a better sense of what kinds of biases people hold.”
Two key insights emerged. First, when people had no information about the source of the marketing or campaign copy, they preferred the results generated by AI. “Generative AI is showing that it can be as good as or better than humans at these kinds of persuasive tasks,” Zhang said.
But when people were told the source of the content, their estimation of work in which humans were involved went up — they expressed “human favoritism,” as the researchers put it. Their assessment of content created by AI, though, didn’t change, undermining the notion that people harbor a form of algorithmic aversion.
“The most direct implication is that consumers really don’t mind content that’s produced by AI. They’re generally OK with it,” Zhang said. “At the same time, there’s great benefit in knowing that humans are involved somewhere along the line — that their fingerprint is present. Companies shouldn’t be looking to fully automate people out of the process.”
These findings should encourage firms to run genuine experiments on consumer perceptions of AI, the researchers said. Had Zhang and Gosline simply surveyed people about their thoughts, “they may have said something very different than what we actually observed,” Gosline said. These experiments provide a much clearer understanding of actual behavior.
This research also applies to an ever-broadening part of the market as generative AI becomes increasingly easy to access and apply.
With the 2024 U.S. presidential election on the horizon, “questions of political persuasion feel very relevant, but so do applications in education, marketing, medicine, and so on,” Gosline said. “If you ask in what area information like this would be relevant, I would say I’m hard-pressed to think of an area that isn’t touched by this kind of thing. We ought to try to understand as much as we can about the ways people think about AI, given how quickly everything is moving.”