
AI’s missing ingredient: Shared wisdom

What you’ll learn: 

  • Technological innovation works best when it’s grounded in collective wisdom.
  • We are in the fourth wave of AI. Developments in the 1960s, 1980s, and 2000s exerted enormous effects on commerce, government, and society but did not create a new AI industry.
  • Lessons from the unintended consequences of those earlier AI waves can help us build a digital society that protects individual and community autonomy. 

In his new book, “Shared Wisdom: Cultural Evolution in the Age of AI,” Alex Pentland, a Stanford HAI fellow and the Toshiba Professor at MIT, argues that we should use what we know about human nature to design our technology, rather than allowing technology to shape our society.

In the following excerpt, which has been lightly edited and condensed, Pentland examines the effects of earlier artificial intelligence systems on society and explains how we can use technologies like digital media and AI to aid, rather than replace, our human capacity for deliberation. 


The field of AI has had several periods of intense interest and investment (“AI booms”) followed by disillusionment and lack of support (“AI winters”). Each cycle has lasted roughly 20 years, or one generation. 

The important thing to notice is that these earlier AI booms are typically viewed as failures because they did not create a big new AI industry. Below the surface, however, each advance exerted enormous effects on commerce, government, and society, usually under a different label and as part of larger management and prediction systems.

AI in 1960: Logic and optimal resource allocation

The first AI systems built in the 1950s used logic and mathematics to solve well-defined problems like optimization and proofs. These systems excelled at tasks such as calculating delivery routes and solving packing problems, generating enormous excitement and saving companies significant money.
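
To make that concrete, the sketch below sets up the kind of linear program such systems solved, using Python’s scipy library. The two products, their profits, and the resource limits are invented for illustration; they are not from the book.

```python
# A minimal sketch of 1960s-style optimal resource allocation:
# linear programming, the mathematics behind Kantorovich's work.
# All numbers below are invented for illustration.
from scipy.optimize import linprog

# Maximize profit = 30*x1 + 20*x2 (linprog minimizes, so negate).
profit = [-30, -20]

# Constraints: 2*x1 + x2 <= 100 machine hours, x1 + 3*x2 <= 90 units of material.
A_ub = [[2, 1], [1, 3]]
b_ub = [100, 90]

result = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x1, x2 = result.x
print(f"Make {x1:.0f} units of product 1 and {x2:.0f} of product 2")
print(f"Maximum profit: {-result.fun:.0f}")  # 42, 16, and 1580 for these numbers
```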

Unintended consequences: When these successful small-scale systems were applied to manage entire societies under “optimal resource allocation,” the results were disastrous. 

The Soviet Union adopted Leonid Kantorovich’s system to manage its economy. Although the work earned Kantorovich a Nobel Prize, the experiment failed catastrophically and contributed to the USSR’s eventual dissolution.

The core problem wasn’t the AI itself but the inadequate models of society available — models that failed to capture complexity and dynamism and suffered from misinformation, bias, and lack of inclusion.

AI in 1980: Expert systems

Expert systems replaced the rigidity of logic with human-developed heuristics to automate tasks where specialists were too expensive or scarce. Banking emerged as a major application area, with automated loan systems replacing neighborhood credit managers to achieve consistency and reduce labor costs.
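
A toy version of such a system, with every rule and threshold invented rather than drawn from any real bank, is just a stack of hand-written heuristics:

```python
# A toy expert system for loan approval: hand-coded if/then heuristics
# standing in for a human specialist. All rules and thresholds are
# invented for illustration, not taken from any real banking system.
def approve_loan(credit_score: int, debt_to_income: float, years_employed: int) -> str:
    if credit_score >= 720 and debt_to_income < 0.35:
        return "approve"
    if credit_score >= 650 and debt_to_income < 0.30 and years_employed >= 3:
        return "approve"
    if credit_score < 580:
        return "deny"
    return "refer to human review"

# Note what the rules cannot see: the applicant's neighborhood,
# reputation, or circumstances, i.e., the community knowledge a
# local credit manager would have used.
print(approve_loan(credit_score=700, debt_to_income=0.25, years_employed=5))  # approve
```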

Unintended consequences: Automating loan decisions brought uniformity, but it eliminated community-specific knowledge, reinforced existing biases, and limited inclusivity. More damaging was the hollowing out of communities themselves: loan officers disappeared, along with credit unions and cooperatives. Bank branches became little more than ATM locations. The concentration of data and financial capital led to more than half of community financial institutions vanishing over subsequent decades.

Additionally, centralization created increasingly complex, expensive, and inflexible systems that benefited large bureaucracies and software companies while leaving citizens lost in incomprehensible rules. Between 1980 and 2014, the percentage of companies less than a year old dropped from 12.5% to 8%, likely contributing to slower economic growth and rising inequality.


AI in the 2000s: Here be dragons

As businesses moved onto the internet in the late 1990s, an explosion of user data enabled “collaborative filtering” — targeting individuals based on their behavior and the behavior of similar people. This powered the rise of Google, Facebook, and “surveillance capitalism.”
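
A minimal sketch of user-based collaborative filtering, with an invented ratings matrix, shows the core move: score what a user hasn’t seen by what similar users liked.

```python
# A minimal user-based collaborative filter: recommend the unseen item
# that users most similar to you rated highly. The tiny ratings matrix
# is invented for illustration.
import numpy as np

# Rows = users, columns = items; 0 means "not yet seen".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 1],
    [1, 0, 5, 4],
], dtype=float)

def recommend(user: int) -> int:
    """Return the unseen item with the highest similarity-weighted score."""
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user] / (norms * norms[user])  # cosine similarity
    sims[user] = 0.0                                        # ignore self
    scores = sims @ ratings                                 # weight by similar users
    scores[ratings[user] > 0] = -np.inf                     # only unseen items
    return int(np.argmax(scores))

print(recommend(0))  # user 0 is steered toward what their nearest neighbors liked
```

Because every recommendation leans on a user’s nearest neighbors, feeds drift toward those neighbors’ tastes, which is exactly the echo-chamber dynamic described next.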

Unintended consequences: The collaborative filtering process created echo chambers by preferentially showing people ideas that similar users enjoyed, propagating biases and misinformation. Even worse, “preferential attachment” algorithms ensured these echo chambers would be dominated by a few attention-grabbing voices — what scholars call “dragons.” 

These overwhelmingly dominant voices in media, commerce, finance, and elections create a rich-get-richer feedback loop that crowds out everyone else, undermining balanced civic discussion and democratic processes. The mathematics of such networks shows that when data access is extremely unequal, dragons inevitably arise, and removing one simply clears the way for another.
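
The dynamic is easy to see in a simulation. In the invented example below, each new follower picks an account with probability proportional to its existing follower count, and a handful of accounts ends up with far more than the 5% a uniform split would give the top five:

```python
# Preferential attachment: each new follower picks an account with
# probability proportional to its current followers, so early leads
# snowball into "dragons." All parameters are invented.
import random

random.seed(42)
followers = [1] * 100                    # 100 accounts, one follower each

for _ in range(100_000):                 # new followers arrive one at a time
    winner = random.choices(range(100), weights=followers)[0]
    followers[winner] += 1               # the rich get richer

followers.sort(reverse=True)
share = sum(followers[:5]) / sum(followers)
print(f"Top 5 of 100 accounts hold {share:.0%} of all followers")
```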

AI today: The era of generative AI

Today’s AI differs from previous generations’ because it can tell stories and create images. Built from online human stories rather than facts or logic, generative AI mimics human intelligence by collecting and recombining our digital narratives. While earlier AI managed specific organizational functions, generative AI directly addresses how humans think and communicate.

Unintended consequences: Because generative AI is built from people’s digital commentary, it inherently propagates biases and misinformation. More fundamentally, it doesn’t actually “think” — it simply plays back combinations of stories it has seen, sometimes producing recommendations with completely unintended effects or removing human agency entirely. 
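
A toy bigram model, vastly simpler than any modern generative system, makes the recombination point concrete: it can emit sentences it never saw, stitched entirely out of word pairs it did. The miniature corpus is invented.

```python
# A toy bigram "story generator": it produces text only by replaying
# word pairs from its training data, a drastically simplified stand-in
# for the idea that generative AI recombines existing narratives.
import random
from collections import defaultdict

corpus = ("the hero saved the village . "
          "the village feared the dragon . "
          "the dragon feared the hero .").split()

next_words = defaultdict(list)           # word -> words seen after it
for a, b in zip(corpus, corpus[1:]):
    next_words[a].append(b)

random.seed(7)
word, story = "the", ["the"]
while word != "." and len(story) < 12:   # stop at a period or a length cap
    word = random.choice(next_words[word])
    story.append(word)
print(" ".join(story))  # e.g. "the hero saved the dragon ." : plausible, yet never seen
```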

Since humans choose actions based on stories they believe, and collective action depends on consensus stories, generative AI’s ability to tell stories gives it worrying power to directly influence what people believe and how they act — a power earlier AI technologies never possessed. 

Companies and governments often present AI simulations as “the truth” while selecting models biased toward their interests. The rapid spread of misinformation through digital platforms undermines expert authority and makes collective action more difficult.

Conclusion

With some changes to our current systems, it is possible to have the advantages of a digital society without enabling loud voices, companies, or state actors to overly influence individual and community behavior.

Excerpted from “Shared Wisdom: Cultural Evolution in the Age of AI,” by Alex Pentland. Reprinted with permission from The MIT Press. Copyright 2025.


Alex “Sandy” Pentland is an MIT professor post-tenure of Media Arts and Sciences and a Stanford HAI fellow. He helped build the MIT Media Lab and the Media Lab Asia in India. Pentland co-led the World Economic Forum discussion in Davos, Switzerland, that led to the European Union privacy regulation GDPR, and he was named one of the United Nations Secretary-General’s “data revolutionaries” who helped forge the transparency and accountability mechanisms in the UN’s Sustainable Development Goals. He has received numerous awards and distinctions, including MIT’s Toshiba endowed chair, election to the National Academy of Engineering, the McKinsey Award from Harvard Business Review, and the Brandeis Privacy Award.

In addition to “Shared Wisdom,” Pentland is the author or co-author of “Building the New Economy: Data as Capital,” “Social Physics: How Good Ideas Spread — The Lessons From a New Science,” and “Honest Signals: How They Shape Our World.”

Pentland also co-teaches classes for MIT Sloan Executive Education.
