Editor’s note: This article was adapted from the January 2026 edition of MIT Sloan’s monthly AI at Work newsletter.
With 2026 shaping up to be another consequential year for artificial intelligence, some MIT faculty members and researchers recently shared what they’re paying attention to when it comes to AI and work.
Here’s what they’re keeping tabs on.
The human-LLM accuracy gap
Rama Ramakrishnan, professor of the practice, AI/machine learning, MIT Sloan
“I will be paying attention to the human-large language model accuracy gap.
“The automation of knowledge work using LLMs is the key focus of many enterprise generative AI pilots. For certain tasks, the accuracy of the LLM may not be ‘good enough,’ and it may be tempting to conclude that the task is not a good fit for automation using LLMs. But rather than comparing the LLM’s accuracy to the best possible (i.e., 100%), it is better to compare it to the accuracy of humans doing the work right now and to track the changing human-LLM accuracy gap for that task. Maybe humans achieve 95% accuracy and the LLM achieves ‘only’ 90%.
“The key thing to remember is that as frontier LLMs get more capable, their accuracy will continue to improve, while human accuracy will likely be unchanged. So it is quite possible that LLM accuracy surpasses human accuracy in 2026 for many enterprise tasks.
“What are these tasks? How much business value do they represent? How much employment is at risk? These are some of the questions on my radar for 2026.”
Guardrails for AI
Barbara Wixom, principal research scientist, MIT Center for Information Systems Research
“My colleague and I are deep in research regarding the guardrails that companies need to establish to effectively and safely deploy AI solutions without compromising compliance, values, ethics, and innovation. That’s a tough balance — the old governance playbooks are not working for AI because of the pace of change and such. We will continue to be on the lookout for emergent practices that help organizations adapt their governance so that AI solutions can reach scale and be sustained over time.”
What happens when humans outsource creativity to AI?
Roberto Rigobon, professor of applied economics, MIT Sloan
“Plasticity is the brain’s ability to change its structure and function throughout life in response to experience. When we stop solving differential equations, we forget how. When we stop doing calculus, we forget how. And when we stop using our brains to remember phone numbers and directions, we forget them. When the phone substitutes for our sense of direction, we become more dependent on Google Maps.
“For directions, I think it is OK if we forget. But what if we start using AI to replace experimentation, or creation, or what-if thought processes? Do we really want to forget these activities? What about entrepreneurship? What about art? What about music? I do think that the creativity — the authentic creativity — that humans have displayed through centuries is infinitely better than what any AI entity can do. The AI will try more things, with higher variance, and likely produce worse outcomes than those of individuals building upon each other.
“So how AI is implemented is a first-order concern. I have written a new paper with Isabella Loaiza on exactly this topic, and I believe we need to think harder about it.”
Understanding the inner workings of AI models
Melissa Webster, senior lecturer in managerial communication, MIT Sloan
“Looking ahead to 2026, I’m paying attention to mechanistic interpretability research for what it shows about the inner workings of AI models, both for our understanding and for potentially increasing safety and alignment.
“As Chris Olah of Anthropic says, the [generative AI] models are effectively grown through training rather than explicitly built or programmed, leading to an unusually opaque technology.
“Mech interp aims to reveal how specific neural networks function and what actually leads to the outputs we see. For AI at work, I’m eager to see how this helps us as users make better-grounded decisions, and I’m hopeful about its broader societal impact.”
Scaling AI solutions
George Westerman, senior lecturer in information technology, MIT Sloan
“This year will mark a shift in enterprises from experimenting with generative AI and agents to finding viable solutions that create real value at scale.
“With the hype around generative AI and agents, it’s essential to focus on the right question: What problem are you trying to solve? The answer will require finding the right combination of techniques — AI, traditional IT, and human — for each task in the solution.”
The LLM-ification of data
Harang Ju, digital fellow at the MIT Initiative on the Digital Economy and assistant professor, Johns Hopkins University
“I expect to see the LLM-ification of data as the primary trend playing out in 2026 and beyond. By ‘LLM-ification,’ I am referring to data sources inside companies and in private databases (such as your Apple Notes, for example) becoming easily accessible to LLM-based agents, rather than being accessible only to humans through existing user interfaces.”
Rama Ramakrishnan is a professor of the practice for AI/machine learning at MIT Sloan. His interests center on the practical business application of predictive and generative AI techniques and the creation of intelligent products and services.
Barbara Wixom is a principal research scientist at the MIT Center for Information Systems Research. Her research explores how organizations generate business value from data assets.
Roberto Rigobon is a professor of applied economics at MIT Sloan and co-founder of the MIT Sloan Sustainability Initiative’s Aggregate Confusion Project, which studies how to improve ESG measures.
Melissa Webster is a senior lecturer in managerial communication at MIT Sloan. She investigates the adoption and implications of ChatGPT and other generative AI in both professional and educational realms.
George Westerman is a senior lecturer in information technology at MIT Sloan and the founder of the Global Opportunity Forum. His research bridges the fields of executive leadership and technology strategy, with an emphasis on digital transformation.
Harang Ju is a digital fellow at the MIT Initiative on the Digital Economy and an assistant professor at Johns Hopkins University. He studies AI agents and how they help people work.