What is artificial intelligence explainability?
A working definition from MIT Sloan
artificial intelligence explainability (noun)
A quality that enables users of artificial intelligence programs to understand and trust how models operate and make decisions.
Creating successful artificial intelligence programs doesn’t end with building the right system. Stakeholders must have confidence that the programs are accurate and trustworthy.
According to research from Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Sloan Center for Information Systems Research, artificial intelligence explainability helps by assuring users that models are “value-generating, compliant, representative, and reliable.”
There are several reasons stakeholders hesitate to trust AI. Because AI is relatively new, there isn’t an extensive list of proven use cases. Models are also often opaque: AI relies on complex math and statistics, so it can be hard for average users to tell how a model works, whether it is producing accurate results, and whether it is ethical and compliant.
Models can produce biased results if trained on biased data, and they also “drift” over time, meaning they can start producing inaccurate results as the world changes or incorrect data is included and replicated.
AI explainability is an emerging field, and teams working on AI projects are mostly creating the playbook as they go, the researchers write. Organizations can start by identifying units and organizations that are already creating effective AI explanations, continuing to test the most promising practices, and institutionalizing the best ones.
Working Definitions: Artificial Intelligence
MIT Sloan's Working Definitions explore the words and phrases behind emerging management ideas.