What is artificial intelligence explainability?
A working definition from MIT Sloan
artificial intelligence explainability (noun)
A quality that enables users of artificial intelligence programs to understand and trust how models operate and make decisions.
Creating successful artificial intelligence programs doesn’t end with building the right system. Stakeholders must have confidence that the programs are accurate and trustworthy.
According to research from Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Sloan Center for Information Systems Research, artificial intelligence explainability helps by assuring users that models are “value-generating, compliant, representative, and reliable.”
There are several reasons stakeholders hesitate to trust AI. Because AI is relatively new, there isn’t an extensive list of proven use cases. Models are often opaque: AI relies on complex math and statistics, so it can be hard for average users to tell how a model works, whether it is producing accurate results, and whether it is ethical and compliant.
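To make the idea of an explanation concrete, the sketch below uses permutation feature importance, one common model-agnostic technique, to show which inputs a trained model actually relies on. This is a minimal illustration, assuming scikit-learn and a synthetic dataset; the model and data here are placeholders, not anything described in the research above.

```python
# Minimal sketch: permutation feature importance as one form of model explanation.
# Assumes scikit-learn; the dataset and model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

An explanation like this doesn’t open the model’s internals, but it gives stakeholders a ranked, repeatable account of which inputs drive decisions, which is often enough to start a compliance or fairness conversation.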
Models can produce biased results if trained on biased data, and they also “drift” over time, meaning they can start producing inaccurate results as the world changes or as incorrect data is included and replicated.
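Drift can also be monitored in code. The sketch below shows one simple approach, assuming SciPy: compare the distribution of a model input at training time against recent production values with a two-sample Kolmogorov-Smirnov test, and flag the feature when the distributions diverge. The data, threshold, and window sizes are illustrative placeholders, not standards.

```python
# Minimal sketch of one drift check: compare a feature's training-time
# distribution against recent production values. Assumes SciPy; the
# p-value threshold below is an illustrative placeholder.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_values = rng.normal(loc=0.0, scale=1.0, size=5000)  # feature at training time
live_values = rng.normal(loc=0.4, scale=1.0, size=1000)   # same feature in production

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# production distribution no longer matches the training distribution.
statistic, p_value = ks_2samp(train_values, live_values)
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic {statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```

In practice, checks like this run on a schedule over each input feature, so a model that quietly goes stale triggers an alert rather than silently degrading.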
AI explainability is an emerging field, and teams working on AI projects are mostly creating the playbook as they go, the researchers write. Organizations can start by identifying units and organizations that are already creating effective AI explanations, continuing to test the most promising practices, and institutionalizing the best ones.
Working Definitions: Artificial Intelligence
MIT Sloan’s Working Definitions explore the words and phrases behind emerging management ideas.