Ideas Made to Matter
4 human financial services activities that AI can’t do
Over the course of 2024, fintech firm Klarna made waves by declaring its intent to cut its head count by close to 50% and put its AI chatbot in charge of 75% of customer service interactions. CEO Sebastian Siemiatkowski boasted that the chatbot was doing the work of 700 employees and was poised to improve profits by $40 million.
The Swedish company, which specializes in “buy now, pay later” plans, has since reversed course. In May, Klarna announced that it was lifting a hiring freeze that had been in place for roughly 18 months and recruiting for customer service roles.
“From a brand perspective, a company perspective, it’s so critical that you are clear to your customer that there will be always a human if you want,” Siemiatkowski told Bloomberg News.
That development comes as no surprise to MIT Sloan postdoctoral associate Isabella Loaiza, who studies how AI complements human workers in the labor market.
“When people are talking about money and they’re frustrated, they want to talk to a human,” she said. Klarna’s AI model reportedly stuck to its script, struggled with the nuances of customer service, and left customers feeling frustrated.
Last year, Loaiza and MIT Sloan professor Roberto Rigobon identified five sets of human attributes AI can’t provide, encapsulated in the acronym EPOCH:
- Empathy and emotional intelligence
- Presence, networking, and connectedness
- Opinion, judgment, and ethics
- Creativity and imagination
- Hope, vision, and leadership
Now they’ve released “The Limits of AI in Financial Services,” research that takes a closer look at the financial services industry and explains why firms such as Klarna, which placed big bets on AI, would be wise to think about how AI can redefine jobs instead of eliminating them.

An industry familiar with disruptive tech
Decades ago, industry watchers predicted that the ATM would replace bank tellers completely; as it turned out, it merely took on the most monotonous part of their job. Instead of depositing and withdrawing money, tellers were able to provide more direct customer support.
Financial services today successfully use AI models to predict liquidity needs, assess credit risk, manage market fluctuations, execute trades, detect fraud, and provide real-time analysis of customer portfolios. But as with the ATM, there are tasks for which AI models cannot be a full substitute.
While AI can put workers at risk of replacement, Loaiza and Rigobon found that the tasks most dependent on EPOCH attributes were the least likely to be automated. In financial services, most of these irreplaceable tasks are related to building relationships with customers.
The 4 financial services activities that AI can't replace
TRUST. “You need people to be willing to trust you to give you their money, to trust that they’ll have access to the right types of financial tools,” Loaiza said.
In their paper, the researchers further explore the role of trust: “Delegating the investment of deposits and savings to an institution is the ultimate act of trust — and it is conceivable to argue that every single financial crisis, in the end, involves a violation of trust.” An AI agent may offer a positive customer experience, but it cannot fully develop trust until it can ensure that a customer’s information will not be used against them, they argue.
INCLUSION. Ensuring that people from underserved or disadvantaged communities have access to affordable and appropriate financial services “is an example in which AI algorithms fail terribly,” the authors write.
There are two reasons for this. One is that data on individuals who have been excluded from the financial system simply isn’t available to inform AI models.
The other, Loaiza said, is that machines aren't good at making principled decisions, such as expanding access to home loans and other services in underserved communities, when "there's no historical track record to say it's a good idea." (To illustrate, Loaiza added that AI models would have been unlikely to approve of granting women the right to vote, since there would be no precedent for it in the data.)
INNOVATION. Thanks to digital twins and other types of simulations, AI can do trial and error better, faster, and cheaper than humans can.
At the same time, there’s both purpose and randomness to innovation, which is “about having the ability to produce something new, something that is far from the data we have observed,” the researchers write. Exchange-traded funds are one example of a human-generated financial innovation.
This concept is closely tied to inclusion, Loaiza said. Part of it is recognizing that the status quo may not meet everyone’s financial needs — such as when it limits access to products and services.
There’s also the human capability to recognize when something new might not be right. Loaiza pointed to the controversy around Worldcoin, which offered cryptocurrency primarily to people in underdeveloped areas in exchange for their biometric data.
“You can have a good purpose, but you could be doing harm if you’re putting technology first instead of humans,” Loaiza said. AI modeling can serve as a tool to develop this kind of innovation, but the ethics behind it need to stay within the human scope of responsibility.
CUSTOMER EXPERIENCE. For customers, building a trusting relationship with a company is less about the answers they receive and more about the experience of interacting with the humans who work there.
Put another way, according to the paper, “Only in small stakes do we trust a chat box.”
When people want to know what’s going to happen to the money they are paying a financial firm to manage, they call their adviser. (The same is true in other fields, the authors note: When people have complex medical or legal questions, they call a doctor or lawyer.)
Here, EPOCH attributes come into play not only because of the importance of empathy, judgment, ethics, hope, and so on but because AI models work best when the present looks like the past, Loaiza noted. That reliance on precedent can pose a challenge when financial markets are uncertain and customers want reassurance.
“I see the value in using algorithms for trading,” Loaiza said, “but for other elements of the finance industry, we still need the human capabilities.”
Read next: These human capabilities complement AI’s shortcomings