Scaling AI for results: Strategies from MIT Sloan Management Review
What you’ll learn:
- Enterprises can achieve AI value through small-scale efforts on three levels: boosting individual productivity, incorporating AI into defined tasks, and automating production processes.
- Vanguard Group estimates its AI ROI at close to $500 million, with proven use cases including call center support, personalized adviser summaries, and AI-assisted coding that has improved programming productivity by 25%.
- Only 47% of business professionals say AI policies reflect work realities. Researchers recommend that front-line leaders write the rules while executives set guardrails.
New insights from MIT Sloan Management Review focus on achieving real results from investments in artificial intelligence. There’s also a word of warning about dictating rules about AI use from the corner office.
Get big value from small efforts
When MIT Sloan senior lecturers Melissa Webster and George Westerman sought examples of enterprises that had achieved major transformations using generative AI, they didn’t find any. What did they find? In a webinar, they explained that smart leaders get big value from small-scale AI efforts deployed through a measured, systematic approach that plays out on three levels:
- Create a safe environment for individual employees to boost their productivity. Common use cases include inbox management, meeting transcription, calendar optimization, and briefing preparation. Also of note: applying a different tone or set of cultural norms to business writing — helpful for Europeans addressing Americans, for example.
- Incorporate generative AI into well-defined tasks and roles. Developers can get help with writing code, analyzing data, and creating documentation. AI agents can provide sales and call center representatives with fast answers to common questions. Design teams can create proposals from a few lines of text and visualize ideas in the room with a client.
- Bring automation to production and operational processes. AI can assist marketing teams in creating entire campaigns, not just the content within them. Meanwhile, enterprise software packages can now automate processes ranging from managing supply chains to identifying skills gaps within the workforce — all supported with conversational AI interfaces.
To make progress on strategy, leaders need to balance immediate action with long-term thinking — to “build the scaffolding,” as one head of AI told Webster and Westerman. Along the way, it’s important to align AI efforts with core business capabilities; otherwise, a pilot project may be doomed to stay on the sidelines.
Watch ‘Scaling Generative AI: Get Big Value from Smaller Efforts’
Ensure that AI pays off
Vanguard Group estimates that its ROI from AI is close to $500 million. Using AI has helped the asset management company improve efficiency for contact center workers as well as advisers. That has allowed Vanguard to broaden access to human support for investors — and to roll out digital advisory services for clients investing as little as $100.
Columnists Thomas H. Davenport, a fellow at the MIT Initiative on the Digital Economy, and Randy Bean outline AI investments that have paid off for Vanguard:
- AI agents in the call center help representatives draw answers from internal content and resolve issues faster.
- Autogenerated, personalized summaries of Vanguard market perspectives help advisers keep clients informed.
- AI-assisted code generation has improved programming productivity by 25% and trimmed the system development life cycle by as much as 15%.
- A large language model analyzes corporate earnings calls for signals that dividends will be cut.
Amid these proven use cases are a few dozen pilots that IT leadership won’t roll out at scale “until the kinks have been worked out,” Davenport and Bean write. All the while, Vanguard continues to monitor AI model performance as well as utilization. The latter is a point of pride for the firm: Fifty percent of employees have completed training through the Vanguard AI Academy.
Read ‘Investing in AI Payoffs at Vanguard’
Understand how LLMs work
Knowing how large language models work “is a necessary foundation for making sound business decisions regarding the use of AI technologies in the enterprise,” writes MIT Sloan professor of the practice Rama Ramakrishnan. In a recent article, he shares answers to 10 common questions executives ask about AI. Among them:
- Models can answer questions about things that happened after the cutoff date of their training dataset, but only if they have access to live data.
- If you upload a document in a prompt, there’s no guarantee that the answer will be limited to that document, even if you make that request. The model may still draw on similar content from its training data.
- Though recent models have context windows large enough to hold entire books, including too much or irrelevant information in a prompt can hurt performance. Models also “tend to focus on the beginning and end of a prompt and may miss important information in the middle.”
- Hallucinations can’t be eliminated. To mitigate them, consider using a second model to verify your initial outputs (see the sketch after this list), or focus on structured tasks and data formats that are easier to validate.
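That second-model check follows a simple pattern: generate a draft answer grounded in a source document, then ask a different model whether every claim in the draft is actually supported by that document. The Python sketch below illustrates the idea only; call_model, the model names, and the prompt wording are placeholders for whatever LLM client and models an organization actually uses.

```python
# Sketch of a two-model verification pass for catching hallucinations.
# call_model is a hypothetical wrapper; wire it to your own chat-completion
# client (model names and prompts here are illustrative placeholders).

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a call to your LLM provider."""
    raise NotImplementedError("Connect this to your chat-completion client.")

def answer_with_verification(question: str, source_text: str) -> dict:
    # Step 1: draft an answer that is supposed to rely only on the document.
    draft = call_model(
        "generator-model",
        "Answer the question using ONLY the document below.\n\n"
        f"Document:\n{source_text}\n\nQuestion: {question}",
    )

    # Step 2: have a second model check the draft against the same document.
    verdict = call_model(
        "verifier-model",
        "Does the ANSWER contain any claim not supported by the DOCUMENT? "
        "Reply 'SUPPORTED' or list the unsupported claims.\n\n"
        f"DOCUMENT:\n{source_text}\n\nANSWER:\n{draft}",
    )

    # Step 3: flag drafts the verifier does not endorse for human review.
    return {
        "answer": draft,
        "verified": verdict.strip().startswith("SUPPORTED"),
        "verdict": verdict,
    }
```

The same structure works with a single model prompted twice, but a separate verifier reduces the chance that one model's blind spot goes unchallenged.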
Read ‘How LLMs Work: Top 10 Executive-Level Questions’
Let team leaders write AI rules
When enterprises adapted to the internet, they didn’t create an Internet Department and require employees to seek approval to launch websites. The same should be true for AI, according to MIT Sloan senior lecturer Robert C. Pozen and Gentreo CEO Renee Fry: Executives should erect guardrails, but individual teams should define the rules for AI use.
Right now, that’s not happening: Only 47% of business professionals the authors surveyed said that AI policies reflect the realities of their work. This matters because different departments use AI differently. As the authors put it, “judgment is local,” and it should be up to front-line leaders to turn broad corporate policies into specific practices. If rules don’t apply to their day-to-day work, employees will either turn to back channels that put the company at risk or ignore the AI tools the company has invested in.
Executives must realize that “decentralization is not abdication,” Pozen and Fry write. Leadership must still define policies for privacy, security, intellectual property, and ethics; leaders also remain in charge of building AI platforms and training programs. From there, it’s up to managers to determine where, when, and how AI is implemented.
Read ‘For AI Productivity Gains, Let Team Leaders Write the Rules’