While powerful large language models such as OpenAI’s GPT-4, Meta’s Llama, and Anthropic’s Claude are in increasingly high demand as foundational platforms for building a wide range of applications, success depends on how effectively people can use them. Many organizations are still looking for the right LLM strategy to drive productivity and promote innovative ways of doing business.
“LLMs are a platform technology,” said Ramakrishnan, a professor of the practice at MIT Sloan. “And very often we can build applications incredibly fast using this technology.”
In a recent webinar hosted by MIT Sloan Management Review, Ramakrishnan outlined three ways businesses can use or adapt off-the-shelf large language models to perform tasks or address business use cases.
Method 1: Prompting
This is the engagement strategy most associated with generative artificial intelligence and what Ramakrishnan described as the simplest form of LLM adaptation. If a task can be accomplished by a layperson using common sense and everyday knowledge, with no additional specialized training or domain expertise, then it might be enough to simply ask, or prompt, an LLM to do it.
Prompting is often enough to complete certain kinds of classification tasks with high accuracy, Ramakrishnan said.
For example, an e-commerce company might realize that product reviews left on the company’s website contain insights into potential defects or unpopular features — information that would have high utility for product teams. Instead of having to collect all the reviews, manually label and categorize them, and train a machine learning model with data, employees can feed reviews into an LLM and ask it whether the user feedback indicates a potential product defect.
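As a rough illustration of that workflow, the sketch below turns one review into a yes/no classification prompt. The prompt wording and helper names are hypothetical, and `call_llm` is a stand-in for whichever LLM API the company actually uses.

```python
# Minimal sketch of review classification by prompting alone.
# The prompt text and function names are illustrative, not from the article.

def build_defect_prompt(review: str) -> str:
    """Assemble a zero-shot classification prompt for one product review."""
    return (
        "You are reviewing customer feedback for an e-commerce product.\n"
        "Does the following review indicate a potential product defect?\n"
        "Answer with exactly one word: YES or NO.\n\n"
        f"Review: {review}"
    )

def classify_review(review: str, call_llm) -> bool:
    """Send the prompt to an LLM and interpret its one-word answer."""
    answer = call_llm(build_defect_prompt(review))
    return answer.strip().upper().startswith("YES")

# Usage with a stubbed model (a real deployment would call an LLM API here):
fake_llm = lambda prompt: "YES" if "broke" in prompt else "NO"
print(classify_review("The handle broke after two days.", fake_llm))  # True
```

The point is that no labeled training set or custom model is needed: the task definition lives entirely in the prompt.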
Method 2: Retrieval augmented generation
In some cases, prompting won’t suffice because the task requires more current information or proprietary knowledge. Companies can then use retrieval-augmented generation, or RAG: a clear instruction or question is posed to the LLM together with relevant supporting material, such as company policy or strategy documents or proprietary data from a database.
For example, a retailer building a customer service chatbot would want to make sure the chatbot was supplied with relevant documentation, such as the company’s product return policy.
This method is effective and widely used by companies, Ramakrishnan said, in part because traditional enterprise search engines or information retrieval techniques can be used to find germane content from a large number of documents.
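A bare-bones version of this retrieve-then-prompt flow might look like the sketch below. The keyword-overlap scoring stands in for a real enterprise search engine or vector index, and all document text and helper names are hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Naive keyword overlap stands in for a production retrieval system.

DOCUMENTS = {
    "returns": "Products may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes five to seven business days.",
}

def retrieve(question: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by how many of the question's words they contain."""
    words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(question: str, docs: dict) -> str:
    """Prepend the retrieved passages to the user's question."""
    context = "\n".join(retrieve(question, docs))
    return (
        "Answer the customer's question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt("How many days do I have to return a product?", DOCUMENTS)
# The assembled prompt now carries the return-policy passage alongside
# the question, so the LLM can answer from company-specific knowledge.
```

The LLM itself is unchanged; only the prompt is enriched, which is why RAG pairs well with the search infrastructure many companies already have.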
To make the best use of both prompting and RAG, companies need to help employees cultivate prompt-engineering skills. One method, Ramakrishnan said, is “chain of thought” prompting, where a user instructs an LLM to “think step by step.” This approach often leads to a more accurate outcome.
“We need to very carefully engineer the prompt to make sure the answer we get is in fact the answer we want,” Ramakrishnan said.
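The "think step by step" instruction can be as simple as a suffix appended to an existing prompt, as in this hypothetical sketch (the wording is illustrative, not a prescribed formula):

```python
# Sketch of "chain of thought" prompting: the same question, with an
# added instruction to reason step by step before answering.

def with_chain_of_thought(question: str) -> str:
    """Wrap a question with a step-by-step reasoning instruction."""
    return (
        f"{question}\n"
        "Think step by step, then state your final answer on the last line."
    )

question = ("A customer bought 3 items at $24 each with a 10% discount. "
            "What was the total?")
print(with_chain_of_thought(question))
```

Prompting the model to show intermediate reasoning often improves accuracy on multistep questions like this one.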
Method 3: Instruction fine-tuning
Instruction fine-tuning is useful for tasks that involve domain-specific jargon and knowledge or can’t be easily described — for example, applications analyzing medical notes or legal documents. These tasks are often too difficult for off-the-shelf LLMs.
With instruction fine-tuning, an LLM is further trained with application-specific question/answer examples, which results in modification of the model itself.
For example, an organization trying to build a chatbot that could help with medical diagnoses would need to compile hundreds of examples of questions and answers and feed them into an LLM. A question that included details of a patient’s case would be paired with a medically sound answer that included elaborate information about a likely diagnosis. This information would further train the LLM and increase the likelihood that it would provide accurate responses to medical questions.
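To make those question/answer pairs concrete: fine-tuning data for chat-style models is commonly a JSONL file with one message list per line. The sketch below follows that widely used chat format (system/user/assistant roles); the medical content is invented purely for illustration.

```python
import json

# Sketch of an instruction fine-tuning dataset in JSONL chat format.
# Each example pairs a question (user turn) with the desired answer
# (assistant turn). The clinical details here are made up.

examples = [
    {
        "messages": [
            {"role": "system",
             "content": "You are a careful clinical assistant."},
            {"role": "user",
             "content": "Patient: 45yo, fever, productive cough, crackles "
                        "in the right lower lobe. Likely diagnosis?"},
            {"role": "assistant",
             "content": "Findings are consistent with community-acquired "
                        "pneumonia; confirm with a chest X-ray."},
        ]
    },
    # ...hundreds more question/answer pairs would follow...
]

# Serialize one example per line, as most fine-tuning APIs expect.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
```

Unlike prompting or RAG, this data actually updates the model's weights, which is why compiling many high-quality pairs matters.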
Because this approach can be labor-intensive, some companies might choose to use LLMs to create the data used to train the model — a process known as synthetic data generation.
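One lightweight way to bootstrap such a dataset is to prompt a strong LLM to write the training examples itself. The sketch below shows only the prompt-construction step; the wording and the surrounding pipeline are hypothetical.

```python
# Sketch of synthetic data generation: asking one LLM to write the
# question/answer pairs used to fine-tune another. The prompt wording
# is illustrative.

def synthetic_data_prompt(domain: str, n: int) -> str:
    """Build a prompt asking an LLM to generate n training examples."""
    return (
        f"Generate {n} question/answer pairs for training a {domain} "
        "assistant.\n"
        "Return one JSON object per line with keys 'question' and 'answer'.\n"
        "Questions should describe realistic cases; answers should be "
        "accurate and explain the reasoning behind each conclusion."
    )

prompt = synthetic_data_prompt("medical triage", 50)
# A real pipeline would send `prompt` to a capable LLM, validate the JSON
# lines it returns, and filter them (often with human review) before
# using them for instruction fine-tuning.
```

Validation and human review of the generated pairs remain important, since errors in synthetic data are learned by the fine-tuned model.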
Finding the right approach to LLMs
As organizations delve deeper into LLMs and generative AI applications, they don’t have to pick and choose between these methods but rather should aim for a mix, depending on the use case, Ramakrishnan said.
“Prompting will be the easiest in terms of effort, followed by RAG and then instruction fine-tuning,” he said. “It’s more effort but hopefully more payback when you go that route.”
Watch the webinar: Getting payback from generative AI