
Ideas Made to Matter

How to find the right business use cases for generative AI

While generative AI can be helpful to businesses, the technology has some notable shortcomings, including a propensity to get simple things wrong and occasional difficulty with basic logic. Given that, how should organizations think about finding the right use cases to effectively harness generative AI for sustainable business advantage?

During a webinar hosted by MIT Sloan Management Review, MIT Sloan professor of the practice Rama Ramakrishnan laid out a three-step approach to help enterprises identify the best generative AI use cases and automate part or all of a business process. He also offered practical advice to help organizations realize benefits from generative AI while avoiding common pitfalls.

“There are a host of issues to be worried about when using [a large language model] … and there are no bulletproof solutions just yet,” Ramakrishnan said, adding that research organizations and the vendor community are making significant progress on addressing them. “Given all the issues, the big question is, how should we be thinking about using LLMs for business productivity?”

3 steps to identifying business use cases for LLMs

Ramakrishnan suggests taking the following steps to determine which knowledge work business processes would be best served by generative AI automation:

Break workflows and jobs into tasks. Jobs are collections of discrete tasks that vary in terms of how well they can be automated with generative AI. For example, an occupational database from the U.S. Bureau of Labor Statistics associates 25 tasks with being a university professor, and only some of them can be easily automated. Preparing course materials and assignments, grading student work, and readying lectures are tasks that can be partially automated, but moderating classroom discussions or giving lectures doesn’t translate well to an LLM use case. “That’s why you need to go through the trouble of breaking jobs up into individual, discrete tasks,” Ramakrishnan said. “Some things are easy with an LLM while other things are really hard.”
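The decomposition step above can be sketched as a simple data structure. The task names mirror the professor example in the text; the automatability labels and the structure itself are illustrative assumptions, not from the article:

```python
# Hypothetical decomposition of one job (university professor) into
# discrete tasks, each tagged by how well it suits LLM automation.
professor_tasks = {
    "prepare course materials and assignments": "partially automatable",
    "grade student work": "partially automatable",
    "prepare lectures": "partially automatable",
    "moderate classroom discussions": "hard to automate",
    "deliver lectures": "hard to automate",
}

# Only the partially automatable tasks move on to a cost assessment.
candidates = [task for task, label in professor_tasks.items()
              if label == "partially automatable"]
print(candidates)
```

In practice, the task list for an occupation could be seeded from the Bureau of Labor Statistics database mentioned above, with the labels assigned by people who do the job.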

Assess tasks using the generative AI cost equation. It’s important to consider all of the potential costs associated with automation. There are the obvious costs of using an LLM, such as paying licensing or API fees. But there are also less-obvious costs that could be even more significant, including the time, effort, and money needed to adapt a generative AI tool to the required degree of correctness for the task at hand, and to create mechanisms to detect and fix errors.

The cost of a task can differ based on how accurate an LLM needs to be and whether the use case has a margin for error. Some tasks, like writing ad copy, product descriptions, or a movie plotline, have more room for error. Use cases that require logical reasoning or factual knowledge, involve cause-and-effect relationships, or carry high stakes, like medical care, demand more accuracy. These cases require a robust mechanism to monitor and fix LLM output, often a human in the loop. This adds significant effort and potential expense, Ramakrishnan said. The possibility that an error might slip past human monitors, causing brand or reputational damage, adds another potential cost factor to the mix.
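One way such a monitoring mechanism could work is a routing rule that escalates risky output to a person. This is a minimal sketch, assuming a stakes label per task and a model confidence score; the "high" label and the 0.8 threshold are hypothetical, not from the article:

```python
# Hypothetical routing rule: decide whether an LLM output needs human
# review before it is used.
def needs_human_review(task_stakes: str, model_confidence: float) -> bool:
    if task_stakes == "high":        # e.g., medical care: always review
        return True
    # Low-stakes tasks (ad copy, plotlines) tolerate some error;
    # escalate only when the model itself is unsure.
    return model_confidence < 0.8    # assumed threshold

print(needs_human_review("high", 0.99))  # True
print(needs_human_review("low", 0.95))   # False
```

The review step is where much of the less-obvious cost in the equation accumulates, since every escalation consumes a person's time.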

Once such costs have been identified, organizations should weigh the generative AI cost equation against the cost of doing business as usual (without generative AI) and determine which is smaller. And, given the pace of change in the market, something that doesn’t make sense to automate now could be more easily automated sometime in the future.

“If you apply the equation to a particular task and it doesn’t pass because the costs are too big, you should probably revisit it periodically because as LLM capabilities steadily improve, the cost of adoption is decreasing quite a lot,” Ramakrishnan said.
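The comparison Ramakrishnan describes comes down to simple arithmetic: sum the obvious and less-obvious automation costs and weigh them against business as usual. All figures below are hypothetical assumptions, not from the article:

```python
# Hypothetical annual costs (USD) for automating one task with an LLM.
api_fees = 2_000              # licensing / API usage
adaptation = 5_000            # adapting the tool to the required correctness
error_checking = 8_000        # human-in-the-loop review
expected_error_loss = 1_000   # residual errors that slip through

llm_cost = api_fees + adaptation + error_checking + expected_error_loss
business_as_usual = 40_000    # doing the task without generative AI

decision = "automate" if llm_cost < business_as_usual else "revisit later"
print(llm_cost, decision)     # 16000 automate
```

Because LLM capability and pricing both shift quickly, the inputs on the automation side tend to fall over time, which is why a task that fails the comparison today is worth rerunning later.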

Build, launch, and evaluate pilots. If the first two conditions are met, the final step is to turn experimentation into action. Companies can take different approaches to pilots — such as using application vendors, adapting a commercial model like GPT-4, or adapting an open-source LLM like Llama 3.

Software vendors are also rushing to infuse generative AI into existing products, as evidenced by the rise of AI copilots for knowledge work, a trend that is helping to accelerate generative AI deployment.

Companies should establish a rigorous evaluation process when building LLM-based applications because it can be more difficult and riskier than building a machine learning-based predictive AI application, Ramakrishnan said.

Best practices for LLM use

Once companies have taken those three steps, they can follow some best practices to ensure a successful generative AI implementation, Ramakrishnan said:

  • Ensure that you have a rigorous evaluation process when building or evaluating LLM-based applications.
  • Don’t rush into production without a robust mechanism for checking and fixing errors. Having a human in the loop can be costly, but catching problems before a tool is deployed or released to customers is worth the expense.
  • Consider narrow use cases, especially if you’re running a small business. More-targeted tasks require smaller LLMs, which usually means less cost and easier training and maintenance.
  • Find and train talent outside of the traditional data science organization. It’s important to identify and nurture people throughout the ranks who have an interest in generative AI and continuously build their skill sets, Ramakrishnan said. “There’s … talent hiding in the enterprise,” he said, and using LLMs with prompts doesn’t require a strong technical background.
  • Set expectations for ROI by prioritizing obvious use cases that will ensure quick payback and serve as a valuable learning exercise. Ramakrishnan noted that most organizations are focusing on business productivity for their first wave of LLM adoption.

“The way to get past that dichotomous, paralytic state is to say we are going to do low-stakes, easy things first and see what happens, but we are going to do lots of them very quickly,” Ramakrishnan said.

Watch the Webinar: Getting payback from Generative AI

For more information, contact senior news editor and writer Sara Brown.