Pro-worker AI doesn’t just happen. Companies need to act
What you’ll learn about establishing pro-worker artificial intelligence:
- Build domain-specific, reliable AI systems.
- Design AI that supports skill development.
- Use interaction techniques to reduce blind reliance on AI.
- Adopt adaptive decision-support systems.
- Recognize that pro-worker AI requires deliberate design and policy intervention.
Imagine an electrician equipped with an artificial intelligence tool that can spot edge-case failures, surface insights from thousands of past jobs, and guide troubleshooting in real time. With the right design, that tool wouldn’t replace the electrician — it would make them more capable, efficient, and able to take on higher-value work.
That’s an example of pro-worker AI: AI that expands human capabilities rather than narrowing them.
Speaking on a panel at the launch of the MIT Stone Center on Inequality and Shaping the Future of Work, MIT economist Daron Acemoglu said that similar possibilities exist in a variety of industries — plumbing, nursing, education, and other areas in which expertise depends on judgment and real-world context.
With domain-specific data and reliable, task-aware models, AI could help these workers handle complex cases, navigate edge conditions, and extend the range of tasks they are able to perform — all without eroding their skills or agency.
AI creates great possibilities for expanding human capabilities to make workers more effective, skilled, and valuable, Acemoglu argued, but only if organizations develop and implement the technology intentionally.
However, as AI continues to advance rapidly, it’s not clear whether it will become a collaborator that elevates workers or a force that displaces them. The outcome will be determined by the choices organizations are making now: how they design tools, the problems they choose to solve, and whose expertise they prioritize.
Why ad hoc adoption of worker-augmenting AI doesn’t work
Pro-worker AI tools depend on domain-specific data, reliable systems, and different architectures than those used in general-purpose AI models. They also require firms to overcome technical difficulties, collect the kinds of expert data needed to handle real-world cases, and build models that rely on more than broad, generalized training, Acemoglu said.
Despite AI’s promise, organizations are not naturally moving toward these approaches. The panel pointed to three major barriers:
- Business model misalignment: Most large AI vendors profit from automation, scale, and data capture — not from strengthening worker capabilities. “The business models of tech companies are not really aligned with that pro-worker dimension,” Acemoglu said.
- A one-size-fits-all architecture: General-purpose models are not designed to meet the highly contextual needs of professionals like electricians, teachers, or plumbers. Effective augmentation requires domain-specific data, reliability, and task awareness.
- The gravitational pull of artificial general intelligence: AGI aspires toward systems that do more on their own. Acemoglu warned that this ambition is “the enemy of pro-worker AI” because it shifts attention away from designing systems to enhance the human experience.
Without intervention, these dynamics make automation — not augmentation — the default path. “This can be the most pro-worker or anti-worker experience in our lives, but we have to actually start articulating a vision for this,” said panelist Ethan Mollick, SM ’04, PhD ’10, an associate professor at the Wharton School of the University of Pennsylvania.
How to build pro-worker AI
Incoming MIT Sloan assistant professor Zana Buçinca and Acemoglu offered a set of design and strategy principles that organizations can apply today.
- Build domain-specific, reliable AI systems. Pro-worker AI relies on task-level knowledge, dependable performance, and architectures aligned with how experts actually work, not just broad statistical predictions.
- Design AI that supports skill development. Domain-specific explanations and learning-aware design can help workers improve their capabilities over time.
- Use interaction techniques to reduce blind reliance on AI. Cognitive-forcing functions, friction, and staged support can prevent overreliance on AI, Buçinca said. For example, users may be prompted to form an initial hypothesis before seeing an AI recommendation (see the first sketch after this list).
- Adopt adaptive decision-support systems. Policies built on reinforcement learning can tailor the level of AI assistance to user skill, task difficulty, and AI confidence (see the second sketch after this list).
- Recognize that pro-worker AI requires deliberate design and policy intervention. It will not emerge automatically from market forces or current AI research trajectories, Acemoglu said.
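To make the cognitive-forcing idea concrete, here is a minimal Python sketch of the hypothesis-first pattern Buçinca described: the worker commits to a diagnosis before the AI suggestion is revealed. The `get_ai_recommendation()` helper and the example case are hypothetical placeholders, not part of any system the panel discussed.

```python
# Minimal sketch of a cognitive-forcing workflow: the worker must record an
# initial hypothesis before the AI recommendation is shown. The helper below
# is a hypothetical stand-in for a domain-specific diagnostic model.

def get_ai_recommendation(case: str) -> str:
    """Placeholder for a call to a domain-specific diagnostic model."""
    return "Replace the faulty breaker on circuit 4"

def diagnose_with_forcing(case: str) -> dict:
    # Step 1: force independent reasoning before any AI output appears.
    user_hypothesis = input(f"Case: {case}\nYour initial diagnosis: ")

    # Step 2: only now reveal the AI's suggestion, framed as a comparison.
    ai_suggestion = get_ai_recommendation(case)
    print(f"AI suggestion: {ai_suggestion}")

    # Step 3: ask for a final call so agreement or disagreement is explicit.
    final_decision = input("Your final decision: ")
    return {"initial": user_hypothesis, "ai": ai_suggestion, "final": final_decision}

if __name__ == "__main__":
    diagnose_with_forcing("Intermittent power loss on circuit 4")
```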
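And here is an equally simplified sketch of adaptive decision support. In research like Buçinca's, the policy itself would be learned with reinforcement learning; this hand-written rule table only illustrates how the level of assistance could vary with user skill, task difficulty, and model confidence. The assistance tiers and thresholds are illustrative assumptions, not a published system.

```python
# Minimal sketch of an adaptive decision-support policy. A deployed version
# would learn this policy with reinforcement learning; the rule table and
# thresholds below are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum

class AssistanceLevel(Enum):
    NONE = "no AI help"                    # let the expert work unaided
    EXPLANATION = "explanation only"       # show reasoning, withhold the answer
    RECOMMENDATION = "full recommendation"

@dataclass
class Context:
    user_skill: float       # 0.0 (novice) to 1.0 (expert), e.g., from past accuracy
    task_difficulty: float  # 0.0 (routine) to 1.0 (edge case)
    ai_confidence: float    # model's calibrated confidence in its own answer

def choose_assistance(ctx: Context) -> AssistanceLevel:
    # Withhold low-confidence AI output to avoid anchoring the worker.
    if ctx.ai_confidence < 0.5:
        return AssistanceLevel.NONE
    # Skilled workers on routine tasks learn more from reasoning than answers.
    if ctx.user_skill > 0.7 and ctx.task_difficulty < 0.4:
        return AssistanceLevel.EXPLANATION
    # Hard cases with a confident model warrant a direct recommendation.
    return AssistanceLevel.RECOMMENDATION

if __name__ == "__main__":
    ctx = Context(user_skill=0.8, task_difficulty=0.3, ai_confidence=0.9)
    print(choose_assistance(ctx))  # -> AssistanceLevel.EXPLANATION
```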
The panelists warned that once AI systems are widely adopted, their underlying design choices become much harder to redirect — another reason intentional development is essential. For organizations embarking on an AI journey, the panelists agreed, the time to support pro-worker AI is now.
Pro-Worker AI: What Is It, and How Do We Do It?
Daron Acemoglu is an Institute Professor in the Department of Economics at MIT and is also affiliated with the National Bureau of Economic Research and the Centre for Economic Policy Research. In 2024, Acemoglu and MIT Sloan professor Simon Johnson, along with James A. Robinson, received the Nobel Memorial Prize in Economic Sciences “for studies of how institutions are formed and affect prosperity.” Acemoglu and Johnson are also co-faculty directors of MIT’s Stone Center on Inequality and Shaping the Future of Work.
Zana Buçinca is a postdoctoral researcher in the Office of Applied Research at Microsoft. In fall 2026, she will join MIT as an assistant professor with a shared appointment at the MIT Sloan School of Management and the MIT Department of Electrical Engineering and Computer Science. She specializes in human-AI interaction techniques that complement people and amplify their values in AI-assisted work.
Ethan Mollick, SM ’04, PhD ’10, is an associate professor at the University of Pennsylvania’s Wharton School who studies AI, innovation, and startups. He is also co-director of Wharton’s Generative AI Labs and author of “Co-Intelligence: Living and Working With AI.”