
How should AI-generated content be labeled?

By Sara Brown

In late October, President Joe Biden issued a wide-ranging executive order on AI security and safety. The order includes new standards and best practices for clearly labeling AI-generated content, in part to help Americans determine whether communications that appear to be from the government are authentic.

This points to the concern that, as generative AI becomes more widely used, manipulated content could easily spread false information. As the executive order indicates, content labels are one strategy for combating the spread of misinformation. But what are the right terms to use? Which ones will the public widely understand as indicating that something has been generated or manipulated by artificial intelligence, or that it is intentionally misleading?

A new working paper co-authored by MIT Sloan professor David Rand found that across the United States, Mexico, Brazil, India, and China, people associated certain terms, such as “AI generated” and “AI manipulated,” most closely with content created using AI. Conversely, the labels “deepfake” and “manipulated” were most associated with misleading content, whether or not AI created it.

These results show that most people have a reasonable understanding of what “AI” means, which is a good starting point. They also suggest that any effort to label content needs to consider the overarching goal, said Rand, a professor of management science and brain and cognitive sciences. Rand co-authored the paper with Ziv Epstein, SM ’19 and PhD ’23, a postdoctoral fellow at Stanford; MIT graduate researcher Cathy Fang, SM ’23; and Antonio A. Arechar, a professor at the Center for Research and Teaching in Economics in Aguascalientes, Mexico.

Rand also co-authored a recent policy brief about labeling AI-generated content.

“A lot of AI-generated content is not misleading, and a lot of misleading content is not AI-generated,” Rand said. “Is the concern really about AI-generated content per se, or is it more about misleading content?”

Looking at how people understand various AI-related terms

Governments, technology companies, and industry associations are wrestling with how to let viewers know when they are viewing artificially generated content, given that face-swapping and voice-imitation tools can be used to create misleading media, and that images can be generated to falsely depict people in compromising situations.

In addition to the recent executive order, U.S. Rep. Ritchie Torres has proposed the AI Disclosure Act of 2023, which would require a disclaimer on any content — including videos, photos, text, or audio — generated by AI. Meanwhile, the Coalition for Content Provenance and Authenticity has developed an open technical standard for tracing the origins of content and determining whether it has been manipulated.
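To make the idea of content provenance concrete, here is a minimal sketch of the kind of record such a standard might attach to a piece of media. This is an illustrative assumption, not the actual C2PA format, which embeds cryptographically signed manifests directly in media files; the `make_provenance_record` function, field names, and tool name here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(media_bytes: bytes, tool: str, actions: list[str]) -> dict:
    """Build a simplified, illustrative provenance record for a media asset.

    The field names are hypothetical stand-ins, not the real C2PA schema,
    which embeds signed manifests directly in the media file.
    """
    return {
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),  # ties the record to these exact bytes
        "created_at": datetime.now(timezone.utc).isoformat(),
        "generator": tool,            # the app or model that produced the asset
        "edit_actions": actions,      # e.g., ["ai_generated"] or ["cropped", "color_adjusted"]
        "signature": None,            # a real standard cryptographically signs the record
    }

record = make_provenance_record(b"example image bytes",
                                tool="example-image-model",
                                actions=["ai_generated"])
print(json.dumps(record, indent=2))
```

The key design idea is that the record travels with the content and lists the editing history, so a platform can decide downstream how to label the asset rather than guessing from the pixels alone.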

Disclaimers, watermarks, or other labels would be useful for indicating how content was created or whether it is misleading; in fact, studies have indicated that social media users are less likely to believe or share content labeled as misleading. But before trying to label content generated by AI, platforms and policymakers need to know which terms are widely understood by the general population. If a label uses a term that is too jargon-laden or confusing, it could undermine the label’s goal.

To determine which terms were understood correctly most often, the researchers surveyed more than 5,100 people across five countries in four languages. Each participant was randomly assigned one of nine terms: “AI generated,” “generated with an AI tool,” “artificial,” “synthetic,” “deepfake,” “manipulated,” “not real,” “AI manipulated,” or “edited.” Participants were then shown descriptions of 20 different content types and asked whether their assigned term applied to each one.
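As a rough illustration of this between-subjects design, the sketch below simulates the random assignment and tallies how often each term is judged to apply to each content type. The nine terms come from the study; the content-type names and the yes/no responses are placeholder assumptions, not the researchers’ data.

```python
import random
from collections import defaultdict

TERMS = [
    "AI generated", "generated with an AI tool", "artificial", "synthetic",
    "deepfake", "manipulated", "not real", "AI manipulated", "edited",
]
# Stand-ins for the 20 content-type descriptions shown to participants.
CONTENT_TYPES = [f"content type {i}" for i in range(1, 21)]

def run_survey(n_participants: int = 5100) -> dict:
    """Simulate the design: each participant sees one randomly assigned term
    and judges whether it applies to each of the 20 content types."""
    applies = defaultdict(lambda: defaultdict(int))  # term -> content type -> "yes" count
    shown = defaultdict(int)                         # term -> participants assigned that term
    for _ in range(n_participants):
        term = random.choice(TERMS)                  # random assignment to one of nine terms
        shown[term] += 1
        for content in CONTENT_TYPES:
            if random.random() < 0.5:                # placeholder for a real yes/no judgment
                applies[term][content] += 1
    # Share of participants who said the term applied, per term and content type.
    return {t: {c: applies[t][c] / shown[t] for c in CONTENT_TYPES} for t in TERMS}

rates = run_survey()
print(rates["AI generated"]["content type 1"])
```

Comparing these per-term application rates across content types is what lets the researchers say which labels people associate with AI-generated content versus misleading content.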

The phrases “AI generated,” “generated with an AI tool,” and “AI manipulated” were most closely associated with content generated using AI.

Conversely, the researchers found that “deepfake” and “manipulated” were most closely associated with potentially misleading content. Terms such as “edited,” “synthetic,” and “not real” were not closely associated with either AI-generated or misleading content.

The results were consistent across participants regardless of age, gender, education, digital literacy, or familiarity with AI.

“The differences between ‘AI manipulated’ and ‘manipulated’ are quite striking: Simply adding the ‘AI’ qualifier dramatically changed which pieces of content participants understood the term as applying [to],” the researchers write.

The purpose of an AI label

Content labels can serve two different purposes. One is to indicate that content was generated using AI. The other is to show that the content could mislead viewers, whether or not it was created by AI. Distinguishing between these goals will be an important consideration as momentum builds to label AI-generated content.


“It could make sense to have different labels for misleading content that is AI-generated, versus content that’s not AI-generated,” Rand said.

How the labels are generated will also matter. Self-labeling has obvious disadvantages, as few creators will willingly admit that their content is intentionally misleading. Machine learning, crowdsourcing, and digital forensics are viable options, though relying on those approaches will become more challenging as the lines between content made by humans and generated by computers continue to blur. And under the principle of implied authenticity, the more content that gets labeled, the more that content without a label is assumed to be real.
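As a purely hypothetical sketch of how a platform might combine these approaches, the function below weighs a creator’s self-label, a machine-learning detector score, and provenance metadata before deciding whether to display a label. The signal names and threshold are illustrative assumptions, not any platform’s actual policy.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentSignals:
    self_labeled_ai: bool                # creator voluntarily disclosed AI use
    detector_score: float                # 0..1 score from a hypothetical AI-content classifier
    provenance_says_ai: Optional[bool]   # provenance metadata verdict, if any is attached

def choose_label(signals: ContentSignals, detector_threshold: float = 0.9) -> Optional[str]:
    """Pick a display label from the available signals; the threshold is illustrative."""
    if signals.self_labeled_ai or signals.provenance_says_ai:
        return "AI generated"            # one of the terms the study found most clearly understood
    if signals.detector_score >= detector_threshold:
        return "AI generated"
    # Returning no label is itself a decision: under implied authenticity,
    # unlabeled content inherits a presumption of being real.
    return None

print(choose_label(ContentSignals(self_labeled_ai=False,
                                  detector_score=0.95,
                                  provenance_says_ai=None)))
```

Note that the fallback to no label is where implied authenticity bites: the stricter the detector threshold, the more AI-generated content slips through unlabeled and reads as real.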

Finally, researchers found that some labels will not work everywhere. For example, in the study, Chinese speakers associated the word “artificial” with human involvement, whereas the term connotes automation in English, Portuguese, and Spanish.

“You can’t just take labels shown to work well in the United States and blindly apply them cross-culturally,” Rand said. “Testing of labels will need to be done in different countries to ensure that terms resonate.”

Read the paper: What label should be applied to content produced by generative AI?

Read next: Study gauges how people perceive AI-generated content
