
Ideas Made to Matter

Artificial Intelligence

The rise of smaller ‘meek models’ could democratize AI systems

What you’ll learn: 

AI systems built or run with limited resources could soon perform on par with leading larger models while costing much less, according to new research. This could upend how companies build and use AI by boosting access and lowering barriers to innovation, but it will also make oversight and safety more difficult.  


Over the past several years, leading tech firms have poured billions of dollars into a simple concept: As artificial intelligence systems are given access to more data and computing power, their performance tends to improve. 

But new research from MIT FutureTech suggests that the “bigger is better” approach to AI development may be reaching the point of diminishing returns. Previous research on neural scaling laws has shown that as AI models grow, the performance gains from additional computing power start to fade. In their modeling, MIT researchers Hans Gundlach, Jayson Lynch, and Neil Thompson found that the decrease in performance gains is significant enough that companies will eventually see little comparative advantage from scaling their models much faster than other organizations.

Their findings suggest a future where AI systems built or run with limited resources — what they call “meek models” — could perform on par with today’s leading models, while costing far less. And that could upend how companies compete to build and use AI, as access to advanced systems becomes less about who can afford them and more about who can make the best strategic decisions.

The limits of scale

The implications of the new research were already visible earlier this year, when Chinese startup DeepSeek’s R1 model, trained for about $6 million, matched the performance of much larger systems, drawing global attention. Competitor OpenAI reportedly spent hundreds of millions of dollars training its GPT-4 model. 

In the researchers’ simulation, a model increasing its computing budget about 3.6 times each year initially outperforms a smaller, $1,000-budget model, but the gap peaks after roughly five years and then steadily narrows. 

“If you look at the laws that predict how these models will do, they have strongly diminishing returns,” said Lynch, a research scientist at the MIT FutureTech Lab. “You have to put in more and more compute to get less advantage.”
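The dynamic Lynch describes can be sketched with a toy power-law scaling model. Only the 3.6× annual compute-budget growth and the $1,000 baseline come from the article; the scaling exponent (alpha = 0.3) and the algorithmic-efficiency growth rate (1.42× per year, applied to both players) are illustrative assumptions, not figures from the paper.

```python
def performance(effective_compute, alpha=0.3):
    # Toy capability score under a power-law scaling regime:
    # each extra unit of compute buys less improvement (alpha < 1).
    return 1.0 - effective_compute ** -alpha

def gap_over_time(years=15, steps_per_year=12,
                  budget_growth=3.6,       # large lab: compute budget x3.6/yr (from the article)
                  efficiency_growth=1.42,  # assumed yearly algorithmic progress (illustrative)
                  base_compute=1_000.0):   # "meek" $1,000-class budget
    gaps = []
    for i in range(years * steps_per_year + 1):
        t = i / steps_per_year
        eff = efficiency_growth ** t  # better algorithms benefit everyone
        meek = base_compute * eff
        frontier = base_compute * (budget_growth ** t) * eff
        gaps.append((t, performance(frontier) - performance(meek)))
    return gaps

gaps = gap_over_time()
peak_t, peak_gap = max(gaps, key=lambda p: p[1])
print(f"capability gap peaks at ~{peak_t:.1f} years, then narrows")
```

With these toy parameters the frontier model's lead grows for a few years, peaks, and then steadily shrinks toward zero — the same qualitative shape as the researchers' simulation, in which the gap peaks after roughly five years.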

What this means for business strategy

Lynch said that for businesses, the competitive edge from building ever-larger models will be short-lived. “You can think of scale as a short-term strategy,” they said. “The question is what you can build on top of that advantage before it fades.” 

Companies that fine-tune models for specific uses or apply their own high-quality data will be better positioned than those building ever-bigger systems, Lynch noted.

They added that as powerful AI becomes more accessible, competition will also intensify. “Lots of people will have access to really strong models,” Lynch said. “That lowers barriers for innovation, but it also makes oversight and safety much harder.”

The researchers compare the trend to the spread of personal computers, suggesting that access to advanced AI could one day be as common as computer ownership itself. 

“Given that computers are also becoming less expensive and more widely used, it seems reasonably likely that a large fraction of the world’s population could have access to powerful deep learning models,” the researchers write. 

Broader access to advanced AI could bring productivity gains as more people and organizations share in its benefits.


As access widens, oversight lags

But wider access to AI tools also raises new challenges for oversight. Current policies — such as U.S. export controls on Nvidia’s GPUs — aim to limit who can build large “frontier AI” systems, often by restricting access to computing power. Yet the researchers found that such measures may no longer be enough as more efficient models start to perform much like today’s biggest ones.

“If you’re worried about models being unsafe or harmful, this suggests that in the not-too-distant future, lots of people will have access to really strong models,” Lynch said. “You can’t expect the largest models to stay significantly more powerful for long.”

The researchers describe a brief “governance window” — a period when large organizations still have an advantage — as a chance for regulators to develop stronger safety standards before advanced AI becomes ubiquitous.

They argue that future oversight should go beyond compute limits to focus on the data, algorithms, and safeguards that shape how AI systems are built and used.

Read the paper: “Meek Models Shall Inherit the Earth”


The MIT FutureTech lab is an interdisciplinary group that explores the economic and technical foundations of progress in computing.  

Hans Gundlach is a research assistant at MIT FutureTech. He has a background in AI, AI safety, and quantum computing research and wants to use these technologies to improve the future.

Jayson Lynch is a research scientist at MIT FutureTech who works on predicting the future progress of algorithms development and understanding the fundamental limitations to improving computing performance.

Neil Thompson is a principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory and the MIT Sloan Initiative on the Digital Economy, as well as director of MIT FutureTech. He studies technological innovation and firm strategy.

For more info, contact Sara Brown, Senior News Editor and Writer.