What is artificial intelligence explainability?
A working definition from MIT Sloan
artificial intelligence explainability (noun)
A quality that enables users of artificial intelligence programs to understand and trust how models operate and make decisions.
Creating successful artificial intelligence programs doesn’t end with building the right system. Stakeholders must have confidence that the programs are accurate and trustworthy.
According to research from Ida Someh, Barbara Wixom, and Cynthia Beath of the MIT Sloan Center for Information Systems Research, artificial intelligence explainability helps by assuring users that models are “value-generating, compliant, representative, and reliable.”
There are several reasons stakeholders hesitate to trust AI. Because AI is relatively new, there isn’t an extensive list of proven use cases. Models are often opaque — AI relies on complex math and statistics, so it can be hard for average users to tell how a model works, whether it is producing accurate results, and whether it is ethical and compliant.
Models can produce biased results if trained on biased data, and they also “drift” over time, meaning they can start producing inaccurate results as the world changes or incorrect data is included and replicated.
AI explainability is an emerging field, and teams working on AI projects are mostly creating the playbook as they go, the researchers write. Organizations can start by identifying units and organizations that are already creating effective AI explanations, continuing to test the most promising practices, and institutionalizing the best ones.
Working Definitions: Artificial Intelligence
MIT Sloan's Working Definitions explore the words and phrases behind emerging management ideas.