Large language models can help professionals identify customer needs
What you’ll learn:
Determining what customers need is often a long, tedious process for professional analysts. A new study found that properly trained large language models can perform as well as experts at identifying customer needs, freeing analysts to apply their expertise to other areas.
Behind every successful product is a company that correctly identified customers’ needs. That entails the time-consuming task of combing through market-research interview transcripts, online reviews, and other data sources to pinpoint true customer needs — that is, the “job to be done” by the product or service.
Professional analysts can spend as much time reviewing documents as they do talking to customers, according to MIT Sloan professor of marketing John Hauser. "The meat is in the details," he said. "Customers tell stories about how a product works or doesn't work. They don't tell you what a product should look like."
Artificial intelligence might help. A new paper, co-authored by Hauser, Northwestern professor Artem Timoshenko, PhD ’19, and MIT Sloan PhD candidate Chengfeng Mao, found that fine-tuned large language models performed as well as expert analysts at identifying and categorizing customer needs. Using LLMs for that task can free up analysts to apply their expertise to other areas, such as synthesizing insights, innovation, product development, marketing, and launch.
“We hear a lot about AI automating tasks and eliminating jobs,” Hauser said, “but this makes professional analysts better at their jobs.”
How do experts identify customer needs?
Hauser has been studying the process of identifying customer needs since the early 1990s. That’s when total quality management frameworks took hold to standardize the process of better understanding the voice of the customer. Standards may have evolved in the decades since, but workflows have not. “There’s a lot of tedious work before you can get to the effective work,” Hauser said.
Analysts trained to identify customer needs begin by gathering information. They interview customers by the dozens and read product reviews by the thousands. Recently, they’ve begun to search online for social media posts, video testimonials, and other user-generated content.
In reading the material, “the challenge lies in understanding the deeper motivations that drive customer behavior and capturing these motivations in a concise and efficient form,” the researchers write.
From there, an initial list of customer needs must be winnowed down and organized into an affinity diagram that creates a hierarchical structure of primary and secondary needs. This process groups customer needs into strategic categories and eliminates redundancy. A company making paint and stain products, for example, may have a group of needs called “product appearance and finish” that includes customer needs such as “assured my stain is consistent in color and texture” and “assured the color matches exactly what I am looking for.”
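The paper does not publish a data format for affinity diagrams, but the hierarchy it describes maps naturally onto nested data. The sketch below uses the wood-stain example from the article; the group name and needs are the ones quoted, and the helper function is purely illustrative.

```python
# A minimal sketch of an affinity diagram as nested data: primary needs
# (strategic groups) map to lists of secondary needs. Structure and helper
# are illustrative assumptions, not the researchers' implementation.
affinity_diagram = {
    "product appearance and finish": [
        "assured my stain is consistent in color and texture",
        "assured the color matches exactly what I am looking for",
    ],
}


def secondary_needs(diagram: dict) -> list:
    """Flatten the hierarchy into a single list of secondary needs."""
    return [need for group in diagram.values() for need in group]
```

In practice an analyst (or model) would populate many such groups, then check the flattened list for redundancy across groups.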
The tedious part of the work is separating needs (which customers rarely state explicitly) from opinions (which customers are all too happy to share). For example, when a customer praises a wood stain because “it goes on pink,” this highlights a need to see what surfaces have already been covered by the stain. A complaint about a smartphone’s battery life hints at a need for long, uninterrupted use while away from home.
LLMs performing as well as experts
Professional analysts excel at their jobs thanks to years of training, where they learn to formulate customer needs “as concise positive statements using simple, accessible language,” the researchers write.
A study found that fine-tuned LLMs identified 100% of primary customer needs, compared with 87.5% identified by professional analysts.
To try to replicate this training, the researchers began with a general-purpose model. They further fine-tuned the LLM with “voice of the customer” studies supplied by a market research firm with 30 years of experience. This process is referred to as supervised fine-tuning, because the model is fine-tuned using manually curated training examples — in this case, market research studies. The process also helps standardize the structure of prompts, which alleviates the pressure on end users to carefully engineer prompts to ensure that they get the desired output.
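Supervised fine-tuning of this kind is typically driven by a dataset of (prompt, completion) pairs, often stored as JSONL. The sketch below shows what one such training record might look like, pairing a raw customer quote with an analyst-formulated need; the prompt wording, record fields, and example text are assumptions for illustration, not the researchers' actual templates or data.

```python
import json


def make_sft_record(quote: str, need: str) -> dict:
    """Build one hypothetical supervised fine-tuning record.

    A standardized prompt wraps the raw customer statement; the target
    completion is the need as an analyst would formulate it: a concise
    positive statement in simple, accessible language.
    """
    prompt = (
        "Extract the underlying customer need from the statement below. "
        "State it as a concise positive statement in simple language.\n\n"
        f"Statement: {quote}"
    )
    return {"prompt": prompt, "completion": need}


record = make_sft_record(
    "I like that the stain goes on pink.",
    "I can see which surfaces I have already covered with stain.",
)
jsonl_line = json.dumps(record)  # one line of a fine-tuning dataset
```

Standardizing the prompt at training time is what relieves end users of prompt engineering: at inference, the same template is filled in with new customer statements.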
A series of studies compared the output of professional analysts, base LLMs, and LLMs enhanced with supervised fine-tuning in identifying and formulating customer needs. The original study was based on wood-stain products but has since been replicated in other product and service categories.
Base LLMs on their own are insufficient for replicating the results of professional analysts — “they miss a lot of the nuance,” Hauser said. However, LLMs augmented with supervised fine-tuning performed as well as or better than human workers.
Notably, the model identified 100% of both primary customer needs (eight in all) and secondary customer needs (30 in all), compared with 87.5% and 80%, respectively, for the professional analysts, who missed the primary need for a product that makes wood maintenance easier.
Democratizing a resource-intensive process
The fine-tuning approach generalizes: LLMs can be taught to identify customers' needs without having a fundamental understanding of them. "If you have to pull customer needs out of a story, the supervised fine-tuned LLM can do it," Hauser said. "But if you ask an LLM what customers care about when staining a deck, its answers are superficial."
The supervised fine-tuning approach has been tested on a variety of base LLMs. This highlights the potential for organizations to use the approach on proprietary datasets, especially given that the researchers found that just over 1,000 previously identified customer needs are enough for supervised fine-tuning.
In addition, the model has been successfully applied to use cases in the food, building supply, and healthcare industries. One user, the Product Development and Management Association, analyzed conversations with people who had recently joined the organization.
“It gave [the PDMA] a different framing for bringing in new members and running their conferences,” Hauser said. In particular, the organization learned that new members found the “special language” of veterans off-putting, so it worked to reduce the use of jargon and acronyms in member resources.
Given the time and resources required, the process of identifying customer needs is often limited to enterprises. For Hauser, the paper showed the potential for democratization.
Students in MIT Sloan’s Listening to the Customer course have been able to identify customer needs in mere minutes by uploading transcripts of customer interviews into the model, Hauser said. That will prove valuable as they transition from students to founders.
“Entrepreneurs don’t have the money to spend on a big study, but they do have the time to talk to consumers and record the conversations with their consent,” Hauser said. “The fine-tuned LLM identifies customer needs quickly and accurately from transcripts of these conversations — a task that would otherwise require extensive training and experience.”
John Hauser is the Kirin Professor of Marketing at the MIT Sloan School of Management, where he teaches new product development, marketing management, and statistical and research methodology. Artem Timoshenko, PhD ’19, is an associate professor at Northwestern University. His research examines how emerging technologies are reshaping innovation. Chengfeng Mao is a PhD candidate at the MIT Sloan School of Management. His research focuses on enhancing the effectiveness of large language models for enterprise applications through post-training techniques and agentic frameworks. The research collaborator was Applied Marketing Science Inc.