This 2020 paper details SEC chair’s concerns about AI and finance

Artificial intelligence has the potential to transform finance, but it could spell trouble without proper oversight, according to Gary Gensler, chair of the U.S. Securities and Exchange Commission.

This summer, Gensler became one of the first regulators to propose rules for AI, targeting conflicts of interest that can arise when financial firms use predictive data analytics. In recent weeks, he has continued to speak about the need to minimize AI’s risks to investors.

Gensler spoke to The New York Times about the problems that can arise when the U.S. relies on just a handful of foundational AI models. Doing so could cause another financial crisis, given the deep interconnections across the economic system and the potential for “herding” — a scenario in which people rely on the same information and thus respond in a similar fashion.

In an interview with Bloomberg, Gensler called artificial intelligence “the most transformative technology of this generation” while warning that it could cause “the crisis of 2032 or 2028.”

While a professor at MIT Sloan, Gensler explored these issues in depth in a 2020 paper, “Deep Learning and Financial Stability,” written with then-research assistant Lily Bailey, who is now special assistant to the chief of staff at the SEC. The paper outlines five pathways by which broad adoption of deep learning, a subset of AI, could increase fragility in the financial system.

Here are the authors’ areas of concern.

Data

Across different sectors of the economy, Gensler and Bailey noted a coalescence around a small number of important datasets. For example, Google Maps, Google Earth, and Google affiliate Waze dominate traffic data and the route-optimization business. Such uniformity concentrates risk across all the firms that depend on the same data.

“Models built on the same datasets are likely to generate highly correlated predictions that proceed in lockstep, causing crowding and herding,” the authors write.
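
A minimal sketch can make that mechanism concrete. The following toy simulation is our illustration, not code from the paper: ten hypothetical firms fit simple regression models either on one shared dataset or on independently collected datasets, and the shared-data models end up making highly correlated errors — everyone is wrong in the same way at the same time, which is the raw material of crowding and herding.

# Toy illustration (not from the paper): firms training on the same
# dataset make highly correlated prediction errors; firms training on
# independent datasets do not.
import numpy as np

rng = np.random.default_rng(0)
n, d, noise = 400, 5, 3.0
true_coef = rng.normal(size=d)
X_new = rng.normal(size=(200, d))   # common evaluation set
truth = X_new @ true_coef

def fit_predict(X, y):
    """Ordinary least squares fit; predict on the common evaluation set."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X_new @ coef

# Case 1: ten "firms" all train on the same shared dataset
# (each takes a bootstrap resample, standing in for minor model differences).
X_shared = rng.normal(size=(n, d))
y_shared = X_shared @ true_coef + noise * rng.normal(size=n)
shared_errors = []
for _ in range(10):
    idx = rng.integers(0, n, size=n)
    shared_errors.append(fit_predict(X_shared[idx], y_shared[idx]) - truth)

# Case 2: ten firms each collect their own independent dataset.
indep_errors = []
for _ in range(10):
    X = rng.normal(size=(n, d))
    y = X @ true_coef + noise * rng.normal(size=n)
    indep_errors.append(fit_predict(X, y) - truth)

def mean_pairwise_corr(errors):
    """Average correlation between every pair of firms' error vectors."""
    c = np.corrcoef(np.vstack(errors))
    return c[np.triu_indices_from(c, k=1)].mean()

print("shared dataset:      ", round(mean_pairwise_corr(shared_errors), 2))
print("independent datasets:", round(mean_pairwise_corr(indep_errors), 2))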

Model design

Models can create systemic risks like those that contributed to the 2008 financial crisis. The authors cite the financial sector’s overreliance on the credit rating agencies Standard & Poor’s, Moody’s, and Fitch when underwriting collateralized debt obligations.

The authors caution that the distinctive attributes and construction of deep learning models increase their sensitivity and could make black swan events more likely. When models coordinate and communicate with one another to optimize results, they may all execute the same strategies, increasing volatility.
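
The volatility channel can be sketched the same way. In the toy market below (again our own assumption-laden illustration, not the authors’ model), traders trade each period on a common signal observed with trader-specific noise, and their aggregate order flow moves the price. When every firm’s model reads the data identically, the flows reinforce one another and realized volatility is markedly higher than with heterogeneous models.

# Toy market (our illustration, not the paper's model): identical
# strategies trade in lockstep, amplifying price moves.
import numpy as np

def realized_vol(idio_noise, n_traders=50, steps=2000, impact=0.02, news=0.5):
    """Each period, traders trade the sign of a common signal observed
    with trader-specific noise; aggregate flow moves the price.
    Returns the standard deviation of one-period returns."""
    rng = np.random.default_rng(1)
    rets = []
    for _ in range(steps):
        signal = rng.normal()
        reads = signal + idio_noise * rng.normal(size=n_traders)
        flow = np.sign(reads).sum()            # net buy/sell pressure
        rets.append(impact * flow + news * rng.normal())
    return float(np.std(rets))

# idio_noise = 0: every firm's model reads the data identically (lockstep).
print("identical strategies:", round(realized_vol(0.0), 3))
# idio_noise = 3: heterogeneous models disagree, so flows partly cancel.
print("diverse strategies:  ", round(realized_vol(3.0), 3))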

Regulation

Existing financial sector regulations will probably fall short in addressing the risks posed by deep learning, the authors write.

The adoption of deep learning in finance is likely to be uneven. Less-regulated fintech startups will probably move faster than large, regulated financial institutions, and faster than smaller community institutions that lack the resources to adopt deep learning on their own.

The authors note that this “tiered adoption” could lead to regulatory arbitrage, whereby certain activities within the financial sector migrate to less-regulated actors.

Algorithmic coordination

The authors raise the possibility that algorithmic coordination could lead to increased network interconnectedness if models at different financial firms communicate with each other, again increasing the likelihood of herding behavior.

The authors also write that the regulatory tools used to identify algorithmic coordination might not be able to discern deep learning coordination until after the fact. Complexity presents big problems: Without the ability to understand and explain the inputs and outputs of deep learning models, regulators are at a disadvantage when trying to identify and thwart algorithmic coordination.

User interfaces

Deep learning is widely used in the user interface space and in customer interaction. This includes platforms that provide automated advice and recommendations for investing, lending, and insurance.

The authors caution that the standardization of virtual assistant software, such as chatbots that provide investment advice, could cause herding across client decision-making — potentially across an entire asset class or sector. Many large financial institutions have rolled out proprietary virtual assistants, and fintech startups often rely heavily on chatbots and virtual assistants, they note.

While there is plenty of opportunity for AI to help companies better serve their clients, AI can easily mask who’s at fault and what exactly happened in the event of a crisis, the authors write in their conclusion. A better understanding of the technology and proper regulation are essential to keeping investors safe.

See the research: “Deep Learning and Financial Stability”
