The financial industry is awash in data that can provide value to businesses in a variety of ways.
- Companies often buy data to make smarter decisions — but only if the value they get from the data is greater than the price they paid for it.
- Companies sell data, too — but they need to understand the value of the data so that they can put a price on it.
- Investors need to value firms — but they have to determine the value of all of the company’s assets: its buildings, machinery, human capital, labor and data.
Each situation poses its own complexities, but the fundamental question remains the same: How should data be valued? MIT Sloan professor Maryam Farboodi and a team of researchers have created an approach to help answer that question.
“The past couple of decades have witnessed a huge rise in the use of data in different aspects of the economy, but we don’t know how to value this asset,” said Farboodi, a co-author of the recent paper “Valuing Financial Data.”
“Right now, some of the largest firms in the U.S. are valued heavily for the data they own, such as consumer data or production data.” That has contributed to “a huge dispersion” between their book value and the stock market value, Farboodi said, explaining that “accounting rules do not allow book value to include data, unless that data was purchased.”
Data is valuable to firms and investors because it decreases uncertainty, giving them more information with which to make decisions. Not knowing how to value data has big implications for the economy, as it leads to mismeasurement of both firm value and aggregate U.S. GDP, she said.
To place a monetary value on financial data, Farboodi and her co-authors relied on an existing framework called the rational expectations equilibrium model.
Using analysts' annual earnings forecasts for 5,506 firms from the Institutional Brokers Estimate System, spanning 1985 to 2015, they explored how investors' valuations of standard data vary with investor characteristics. Investors can purchase this data and use it to form their own forecasts, or beliefs, about what returns will be.
The authors used a sufficient statistics approach to estimate the model. They constructed the expected return and volatility of different portfolios using regression analysis and the data series that they wanted to value. Then they used these sufficient statistics to measure the monetary value of the data series to investors.
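The logic of valuing data through sufficient statistics can be illustrated with a simple toy model (this is an illustrative sketch, not the authors' actual estimation; the numbers, the mean-variance utility assumption, and the function names are all hypothetical). A data series that lowers an investor's forecast variance raises the certainty-equivalent return of an optimally sized position, and the dollar difference is a natural measure of the data's value:

```python
def certainty_equivalent(mu, sigma2, risk_aversion, wealth):
    """Certainty-equivalent dollar gain for a mean-variance investor
    who sizes the risky position optimally given beliefs (mu, sigma2)."""
    # Optimal portfolio share in the risky asset: mu / (risk_aversion * sigma2)
    share = mu / (risk_aversion * sigma2)
    # Certainty-equivalent excess return of that position
    ce_return = share * mu - 0.5 * risk_aversion * share**2 * sigma2
    return wealth * ce_return

# Beliefs WITHOUT the data series: noisier forecast (higher variance)
v_without = certainty_equivalent(mu=0.05, sigma2=0.04, risk_aversion=2.0,
                                 wealth=1_000_000)
# Beliefs WITH the data series: same expected return, lower forecast variance
v_with = certainty_equivalent(mu=0.05, sigma2=0.02, risk_aversion=2.0,
                              wealth=1_000_000)

# The data's value is the utility gain from the uncertainty reduction
data_value = v_with - v_without
print(f"Value of the data series: ${data_value:,.0f}")
```

Here the expected return and forecast variance play the role of the sufficient statistics: once they are pinned down (in the paper, from portfolio regressions), the data's monetary value follows without re-solving the full model.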
The value of data falls when markets are illiquid
While all investors value data for its ability to reduce uncertainty, the authors found that when markets are illiquid and trades have a large price impact, data is worth less: it becomes harder and more expensive to execute the profitable trades the data informs.
“Data helps a financial firm execute the profitable trades that others might not know about, but if the markets are illiquid, then it moves the price against the firm, and the firm cannot use the data very effectively,” Farboodi said. “Market illiquidity decreases the value of data for all investors, for all assets, for all investment styles, and for all levels of wealth.”
This decline is “orders of magnitude larger for wealthier investors with large portfolios,” Farboodi said. “Larger investors need to put [on] larger trades, so when the price moves against them, they lose a lot,” she said. “The data loses its value to them a lot more.”
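Why illiquidity hurts larger investors disproportionately can be seen in a stylized Kyle-type price-impact calculation (a toy illustration under assumed numbers, not the paper's model; the edge and impact coefficients are hypothetical). The cost of price impact grows with the square of trade size, so a tenfold-larger trade loses a hundredfold more profit when liquidity dries up:

```python
def trade_profit(edge, trade_size, price_impact):
    """Expected profit from trading on a data signal: the informational
    edge earns `edge` per share, but the trade itself moves the price
    against the trader (linear price impact, so a quadratic cost)."""
    return edge * trade_size - price_impact * trade_size**2

edge = 0.50                      # per-share edge the data provides (assumed)
liquid, illiquid = 1e-6, 1e-5    # price-impact coefficients (assumed)

losses = {}
for size in (10_000, 100_000):   # a small and a large investor, same signal
    # Profit forgone when the market turns illiquid
    losses[size] = (trade_profit(edge, size, liquid)
                    - trade_profit(edge, size, illiquid))
    print(f"trade size {size:>7,}: profit lost to illiquidity = "
          f"${losses[size]:,.0f}")
```

With these numbers the small trader loses $900 to illiquidity while the large trader loses $90,000: the same liquidity shock, scaled quadratically by trade size.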
And if data is an asset whose value is very sensitive to liquidity, “that can lead to self-fulfilling cycles and financial fragility,” Farboodi said.
If firms hold a large amount of data among their assets, “the firm will become less valuable” when even a small adverse shock makes markets illiquid. As firms become less valuable, investors who hold them in their portfolios offload them, pushing their value down further, and the cycle continues: markets become fragile.
A framework to measure data
For policymakers, knowing how to properly value financial data is crucial for designing policies that regulate data and for determining whether, and how much, consumers should be paid for their data.
“The idea of thinking about a framework to measure data, to put numbers on this, is to be able to think about the magnitude of these forces,” Farboodi said. “If you want to devise a policy, you need to have the value of these forces, and that's what is very much missing in the data literature.”
In the future, Farboodi plans to build on this research by exploring the value of the data firms produce, distinguishing its importance to society and the economy from its value to the firm itself. Another project will look at data brokers who sell anonymized customer data to firms for their own purposes.
The research was authored by Farboodi; Laura Veldkamp, a professor of finance at Columbia Business School; Dhruv Singal, a doctoral student at Columbia Business School; and Venky Venkateswaran, an economics professor at New York University’s Stern School of Business.