Big-picture AI thinking: 3 webinars from MIT Sloan Management Review
At this moment in artificial intelligence implementation, enterprise leaders must consider their next moves carefully. Three recent webinars from MIT Sloan Management Review highlight how organizations can identify and deploy AI use cases that avoid ethical pitfalls, create value while keeping costs in check, and support employees.
Busting through AI’s hype
Even amid the hype-filled early days of the internet, there was clear transformative potential for communication, commerce, and networking, according to Daron Acemoglu, an MIT Institute Professor and Nobel laureate. The same cannot be said for AI. While it works well for repetitive cognitive tasks, it’s unclear how AI will benefit workers in roles as diverse as entertainment, custodial work, and leadership, he said.
Acemoglu has predicted that AI could profitably automate only 5% of tasks and add just 1% to global GDP over the next decade. That said, he sees opportunities to use AI to create value in the future, whether it’s making financial services more inclusive or caring for the United States’ increasingly aging population.
“The hype is an enemy of business success,” Acemoglu said. “Instead, think of where your best resources — your human resources — can be better deployed … together with technology and together with data to increase people’s efficiency, and enable them to create better and new goods and services.”
This strategy is sound, Acemoglu said, because it augments the work of employees (instead of replacing them outright) and focuses on creating products (instead of cutting costs or eliminating roles). Leaders can position themselves for success by partnering with employees to identify where new services are in high demand — and how AI can support the teams who would build them.
Watch: Nobel laureate busts the AI hype
Ensuring ethical use of AI
As enterprises implement AI in support of business goals, AI’s ethical concerns must be front and center, said Thomas H. Davenport, a digital fellow at the MIT Initiative on the Digital Economy. AI models can be biased, insensitive, or just plain ineffective; they can sow division and may be trained on user data.
Firms have long been aware of these risks. As far back as 2020, a Deloitte survey indicated that 56% of organizations were putting the brakes on AI adoption due to emerging risks. The same percentage worried that negative public perception of AI would hinder its adoption.
The same survey found that fewer than 40% of firms had taken steps such as training users to resolve ethical issues, ensuring that vendors were providing unbiased systems, or having a single executive manage AI initiatives and their risks.
“There’s a pretty convincing set of reasons for putting someone in charge of this issue,” Davenport said. “You can’t just assume it’s going to happen automatically.”
Along with having a head of AI ethics, Davenport recommended that enterprises:
- Create a board or committee to review which algorithms are used, and how.
- Document AI models that have been put into production (see the sketch after this list).
- Keep models in beta testing longer, giving teams time to fine-tune them to reduce bias, insensitivity, and hallucinations.
- Use external assessment tools to evaluate AI use cases.
- Ensure that corporate policies and the recommendations of review boards are enforced.
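
To make the documentation recommendation concrete, here is a minimal Python sketch of a per-model record that a review board could maintain. The fields and the registry shape are hypothetical illustrations, not a standard Davenport prescribes.

```python
# A minimal sketch of a production-model record, one structured way to act on
# the "document AI models" recommendation. All fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str                      # the executive accountable for the model
    purpose: str
    training_data: str              # provenance of the data it was trained on
    known_limitations: list[str] = field(default_factory=list)
    last_bias_review: str = ""      # date of the most recent external assessment

registry = [
    ModelRecord(
        name="loan-triage-v2",
        owner="chief.ai.officer@example.com",
        purpose="Prioritize loan applications for human review",
        training_data="2019-2023 application data, internal",
        known_limitations=["Not validated for applicants under 21"],
        last_bias_review="2024-11-01",
    )
]
```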
Watch: How to build an ethical AI culture

Getting the most from AI investments
As if unpacking AI’s ethical challenges weren’t enough, enterprises must also wrestle with generating value from their AI investments. This is especially true for the large language models that power generative AI, according to Rama Ramakrishnan, an MIT Sloan professor of the practice.
There are three approaches to adapting LLMs for business use cases. The simplest is prompting: giving the model clear instructions that a layperson could understand. Retrieval-augmented generation (RAG) goes a step further by adding content from proprietary sources to the prompt. When engineered prompts and RAG aren’t enough, fine-tuning a model on hundreds or even thousands of domain-specific questions helps it generate more accurate answers.
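To make the distinctions concrete, here is a minimal Python sketch of all three approaches. The `llm()` helper and the toy document store are hypothetical stand-ins for a real chat-completion API and retrieval system, not any specific vendor’s interface.

```python
# A minimal sketch of prompting, RAG, and fine-tuning data prep. The llm()
# helper and the tiny document store are hypothetical stand-ins for a real
# chat-completion API and retrieval system.

def llm(prompt: str) -> str:
    """Stand-in for a call to any chat-completion API."""
    return f"<model response to: {prompt[:60]}>"

# 1. Plain prompting: clear instructions a layperson could write.
print(llm("Summarize our refund policy for a customer in two sentences."))

# 2. Retrieval-augmented generation: fetch proprietary content and add it
#    to the prompt so the model answers from company sources.
documents = {
    "refund_policy": "Refunds are issued within 14 days of purchase.",
    "shipping_policy": "Orders ship within 2 business days.",
}

def retrieve(query: str) -> str:
    """Toy keyword retrieval; real systems use vector-similarity search."""
    return documents["refund_policy"] if "refund" in query.lower() else ""

query = "Can a customer still get a refund after 10 days?"
print(llm(f"Using only this policy text:\n{retrieve(query)}\n\nAnswer: {query}"))

# 3. Fine-tuning: instead of longer prompts, assemble hundreds or thousands
#    of domain-specific Q&A pairs and train the model on them (only the data
#    prep is shown here; the training run itself is vendor-specific).
training_pairs = [
    {"question": "Is a 10-day-old purchase refundable?", "answer": "Yes."},
    # ...many more domain-specific examples
]
```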
Ramakrishnan recommends a three-step process to determine whether using an LLM is worthwhile:
- Break a job into discrete tasks. Some will be more amenable to automation than others: A professor might use AI to help prepare lectures, for example, but not to deliver them.
- Assess whether a task satisfies the generative AI cost equation. This weighs the cost of using the AI tool, the cost of adapting it to the required level of correctness (e.g., sales emails versus medical or legal decisions), the cost of detecting and fixing errors, and the cost of absorbing an incorrect answer (a worked version of the equation appears after this list). Firms should revisit the equation periodically: as LLMs mature, what previously needed fine-tuning might be done with basic prompts.
- Build and launch a pilot. Here, firms can work with a vendor, adapt a commercial LLM, or adapt an open-source LLM. Many firms use existing LLMs; the key to success is adapting them to specific use cases and making it easier to catch errors, both of which drive down costs.
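
To illustrate the logic of the cost equation, here is a back-of-the-envelope Python sketch. Every number is an illustrative assumption, not a figure from the webinar; the point is how cheap-to-verify tasks clear the bar while high-stakes tasks may not.

```python
# A back-of-the-envelope version of the generative AI cost equation described
# above. All inputs are illustrative assumptions.

def expected_net_value(
    value_per_task: float,   # business value of a correct, automated answer
    tool_cost: float,        # per-task cost of calling the model
    review_cost: float,      # per-task cost of detecting and fixing errors
    error_rate: float,       # fraction of answers that are wrong
    cost_of_error: float,    # cost absorbed when a wrong answer slips through
    catch_rate: float,       # fraction of errors the review step catches
) -> float:
    missed_errors = error_rate * (1 - catch_rate)
    return value_per_task - tool_cost - review_cost - missed_errors * cost_of_error

# Drafting sales emails: errors are cheap, so automation clears the bar.
print(expected_net_value(5.00, 0.05, 0.50, 0.10, 2.00, 0.9))        # ~4.43

# A medical or legal decision: one missed error can erase the gains.
print(expected_net_value(50.00, 0.05, 10.00, 0.10, 5000.00, 0.9))   # ~-10.05
```

The same arithmetic shows why the equation is worth revisiting over time: as models mature, a lower error rate or cheaper review can flip a task from unprofitable to profitable.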
Watch: Getting payback from generative AI