What you’ll learn:
- Data deserts are markets where limited, fragmented, or low-quality data makes it difficult for AI systems to produce accurate predictions.
- Data deserts can distort market visibility and limit how organizations identify and reach certain groups.
- Regulatory reform will take time; in the meantime, business leaders must ensure that AI systems do not narrow the field of opportunity or miss emerging markets.
Artificial intelligence systems increasingly shape who gets a loan, which job candidates are shortlisted, and which small businesses are surfaced in online marketplaces. But what happens when certain people or communities simply don’t show up in the data?
In a recent policy proposal published by the Brookings Institution, MIT Sloan professor Catherine Tucker describes a growing challenge: algorithmic exclusion driven by data deserts. These are markets or populations where limited, fragmented, or low-quality data makes it difficult for AI systems to produce accurate predictions or to consider certain groups at all.
For business leaders, the issue is not abstract. Data deserts can distort market visibility and limit how organizations identify and reach certain groups.
Exclusion is not the same as bias
In “Artificial Intelligence and Algorithmic Exclusion,” Tucker writes that conversations about responsible AI have largely focused on bias — whether algorithms treat different groups unfairly. But algorithmic exclusion is different: It occurs when systems cannot make a prediction at all because the underlying data is too sparse or fragmented.
Examples include a credit model that can’t score a “thin-file” borrower, or other systems that rely on historical data but fail to generate predictions for individuals with limited digital records.
The distinction matters because exclusion can be harder to detect. A biased model produces a result that can be audited and challenged, but an excluded individual may never receive a score, a recommendation, or even visibility in the system.
Why data deserts are a business issue
For business leaders, that invisibility has consequences. Algorithmic data deserts are often discussed in social or policy terms, but they also create operational and strategic risk, Tucker writes.
When AI systems rely on rich historical datasets, they tend to perform best for customers who have already generated abundant digital signals. That can leave emerging segments, small businesses, or lower-income consumers underrepresented because they lack data.
Among the risks:
- Missed market visibility. Organizations may overlook customers or suppliers simply because their models cannot confidently classify them.
- Distorted metrics. Internal metrics may reflect only the “data rich,” creating a skewed view of the market.
- Emerging compliance exposure. As policymakers expand the definition of algorithmic harm to include exclusion, organizations may face increased scrutiny.
What leaders can do today
Tucker’s report focuses on policy responses, including improving data portability, encouraging greater data access in concentrated markets, and expanding high-quality data sources for underserved groups. The goal is to reduce structural barriers that prevent individuals and firms from being visible in algorithmic systems.
But policy reform can take time, and business leaders are already deploying AI systems. In the absence of regulatory requirements, organizations can still take steps to identify and mitigate algorithmic exclusion.
- Audit for missing predictions. Tucker emphasizes that exclusion occurs when systems fail to generate outputs at all. Measure who isn’t receiving predictions and treat that absence as a failure condition.
- Conduct representation audits. Compare the population that a system is meant to serve with those reflected in its data and outputs to surface who is underrepresented or invisible (a simple illustration of both audits follows this list).
- Identify and map data deserts. The report highlights data deserts — areas where data is sparse, fragmented, or low quality — as a key driver of exclusion, making it critical to understand where coverage gaps exist.
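These checks don’t require specialized tooling to get started. The sketch below is a rough illustration rather than anything from Tucker’s report: it uses pandas with hypothetical column and segment names to show how a team might measure which customer segments fail to receive predictions at all, and how the scored population compares with the population the system is meant to serve.

```python
import pandas as pd

# Hypothetical data: one row per customer the business intends to serve.
# "segment" and "model_score" are illustrative names, not from the report.
customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5, 6],
    "segment": ["urban", "urban", "rural", "rural", "rural", "urban"],
    "model_score": [0.72, 0.55, None, None, 0.61, None],  # None = no prediction produced
})

# 1. Missing-prediction audit: treat the absence of a score as a failure condition.
coverage = (
    customers.assign(scored=customers["model_score"].notna())
    .groupby("segment")["scored"]
    .mean()
    .rename("share_scored")
)
print(coverage)
# Segments with a low share_scored are candidates for a data desert.

# 2. Representation audit: compare the population the system is meant to serve
#    with the population actually reflected in its outputs.
intended = customers["segment"].value_counts(normalize=True)
reflected = customers.dropna(subset=["model_score"])["segment"].value_counts(normalize=True)
gap = (intended - reflected.reindex(intended.index).fillna(0)).rename("representation_gap")
print(gap)
# Large positive gaps flag groups that are underrepresented in the model's outputs.
```

In practice, the segments, data sources, and thresholds would come from an organization’s own market definition; the point is simply to make the absence of a prediction a measurable quantity rather than an invisible one.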
Data deserts underscore an important limitation of AI: More automation doesn’t automatically mean broader participation. In some cases, it can amplify existing visibility gaps.
For business leaders, the challenge is to ensure that AI systems don’t narrow the field of opportunity. By auditing for missing predictions and broadening awareness across teams, organizations can reduce exclusion risk while also uncovering untapped markets.
Catherine Tucker is a professor of marketing at MIT Sloan, faculty director of the school’s Executive MBA program, and a co-founder of the MIT Cryptoeconomics Lab. Her research focuses on the interface between marketing, the economics of technology, and law, with an emphasis on online advertising, digital health, social media, and electronic privacy. Her recent work includes an exploration of generative AI and weight loss, as well as how global platform governance relates to digital apps for children.