Four tensions that organizations face when using agentic AI:
- Scalability versus adaptability. Constraining agentic systems too much limits their effectiveness, while granting too much freedom can introduce unpredictability.
- Experience versus expediency. Agentic AI forces organizations to rethink how they assess cost, timing, and return on investment.
- Supervision versus autonomy. Excessive supervision of agentic AI systems can negate the benefits of autonomy, while insufficient oversight exposes organizations to risks.
- Retrofit versus reengineer. Organizations must decide whether to quickly retrofit agentic AI into existing workflows or take the time to reimagine those workflows altogether.
A growing share of organizations are no longer just experimenting with artificial intelligence — they’re beginning to delegate work to it. According to a new report from MIT Sloan Management Review and Boston Consulting Group, “The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI,” more than a third of surveyed companies are already deploying agentic AI systems that can plan, act, and learn autonomously, with another 44% planning to do so. These systems don’t just assist employees and automate isolated tasks; they pursue goals, coordinate across workflows, and adapt based on outcomes.
That shift marks a fundamental change in how work gets done and how leaders must think about AI, according to the report. As agentic systems are viewed less like tools and more like teammates, organizations face a new set of strategic choices about control, investment, governance, and design.
The report, which is based on a global survey of more than 2,000 respondents, frames these choices as four core tensions that leaders must navigate if they hope to capture value from agentic AI without introducing new risks.
1. The flexibility tension: Scalability versus adaptability
Traditional automation succeeds at scale, performing predefined tasks faster and more cheaply. Agentic AI, on the other hand, sits between tools and human workers, deriving much of its value from adaptability — how it responds to changing conditions and learns over time.
Leaders face a trade-off, according to the report. Constraining agentic systems too tightly can limit their effectiveness, but granting too much freedom can introduce unpredictability. The organizations best positioned to benefit from agentic AI are those that design processes to accommodate both scale and learning, treating adaptability as a strategic capability and not a by-product.
2. The investment tension: Experience versus expediency
Agentic AI also forces organizations to rethink how they assess cost, timing, and return on investment, the report authors write. Unlike traditional tools that depreciate predictably or workers whose value grows steadily with experience, agentic systems simultaneously depreciate through model drift and appreciate through continuous learning and emergent capabilities.
This creates new investment tensions around when and how to invest in a rapidly evolving technology. Conventional financial models struggle to capture these dynamics, often leading organizations to undervalue long-term, compounding returns. As a result, the researchers found, companies that rely on traditional investment frameworks risk underinvesting in learning and adaptation, while those that adopt hybrid investment models and diversified AI portfolios are better positioned to capture sustained value.
3. The control tension: Supervision versus autonomy
Because agentic AI systems can act independently yet behave unpredictably, managing them is complex. Excessive supervision can negate the benefits of autonomy, while insufficient oversight can expose organizations to operational, compliance, and reputational risks, according to the report.
Agentic AI must be managed more like a human coworker than like a traditional tool, the researchers write. That requires governance models that define when systems can act autonomously and when human intervention is required. Rather than relying on static controls, leaders must develop dynamic, risk-based oversight mechanisms that adjust based on context, performance, and learning.
4. The scope tension: Retrofit versus reengineer
Finally, organizations must decide whether to retrofit agentic AI into existing workflows for quick, incremental gains or reimagine those workflows altogether. Many early deployments layer agentic capabilities onto legacy processes because doing so requires less time and capital and delivers faster results than scrapping existing systems.
The greatest gains, however, come when leaders rethink work from first principles — redesigning processes around hybrid teams of humans and AI agents. The report notes that while this approach demands greater upfront investment and longer timelines, it can unlock new operating models and sources of competitive advantage. Leaders must weigh those benefits against the risk that rapidly advancing AI technology could outpace long-term efforts.
The importance of leadership
Overall, enthusiasm for agentic AI is running ahead of organizational readiness, the report finds. Many companies are deploying these systems without fully addressing the strategic tensions they introduce, particularly around governance, talent, and accountability.
To succeed, organizations must understand that agentic AI is not just a technology upgrade — it represents a management inflection point. Leaders who treat it as such, investing not only in systems but in structures, skills, and strategy, will be better positioned to navigate the emerging agentic enterprise.
Read the report: “The Emerging Agentic Enterprise”
This article is based on the 2025 Artificial Intelligence and Business Strategy report from MIT Sloan Management Review and Boston Consulting Group.
The report’s authors are Sam Ransbotham, David Kiron, Shervin Khodabandeh, Sesh Iyer, and Amartya Das. Sam Ransbotham is a professor of analytics at Boston College, as well as guest editor for MIT Sloan Management Review’s Artificial Intelligence and Business Strategy Big Ideas initiative. David Kiron is the editorial director, research, of MIT SMR and program lead for its Big Ideas research initiatives. Shervin Khodabandeh is a managing director and senior partner at BCG, the coleader of its AI business in North America, and a leader in BCG X. Sesh Iyer is a managing director and senior partner at BCG and the North America chair for BCG X, where he helps clients drive large-scale AI transformations. Amartya Das is a principal at BCG and currently serves as an ambassador at the BCG Henderson Institute, where he leads research on the impact of technology and AI on society.