Artificial Intelligence at MIT Sloan

Last September, MIT President Sally Kornbluth wrote to the community about her priorities for the Institute. In addition to climate change, she highlighted the significance of generative artificial intelligence (AI) technologies, their potential to greatly impact the world, and the need to empower the members of the MIT community to contribute.

“It’s vital to make sure the people of MIT have the opportunities, resources, and connections to contribute their knowledge and insight,” said President Kornbluth.

As demonstrated below, MIT Sloan students, faculty, and alumni have been—and continue to be—leading the charge with the creation of powerful and impactful innovations in the AI space.

Advanced machine learning algorithms can help workers with increasingly complex tasks, including decision-making, but the problem of explainability persists. In other words, it is difficult to determine precisely how generative AI technologies like ChatGPT can do what they do.

Despite this gap in (human) knowledge, however, research by Kate Kellogg, PhD ’05 (David J. McGrath Jr. (1959) Professor of Management and Innovation; Professor, Work and Organization Studies), and her colleagues indicates that people are more likely to second-guess recommendations from an interpretable algorithmic model than those from an uninterpretable one.

When people can see how an algorithm works, they might believe that they understand those inner workings better than they actually do. To counter this, the researchers suggest that business leaders include respected peers in the development and testing process of the algorithm to increase the likelihood that employees will accept its recommendations.

Last May, Simon Johnson, PhD ’89, (Ronald A. Kurtz (1954) Professor of Entrepreneurship) and Daron Acemoglu (Institute Professor) published their new book, Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity. Through a survey of major technological advancements and the economic progress they enabled, Acemoglu and Johnson ponder the question of who benefits—especially considering the current trends in AI.

Historically, the gains from such advancements typically are not shared widely but instead are concentrated among preexisting economic and political elites. Judging by the increasing development of advanced algorithms and AI technologies to improve automation, the authors suspect this history is repeating itself.

“We’re suggesting we can get back onto that path of shared prosperity, reharness technology for everybody, and get productivity gains,” says Johnson. “We had all that in the postwar period. We can get it back, but not with the current form of our machine intelligence obsession. That, we think, is undermining prosperity in the U.S. and around the world.”

According to research by Danielle Li (Associate Professor, Technological Innovation, Entrepreneurship, and Strategic Management), PhD candidate Lindsey Raymond, and Stanford University professor Erik Brynjolfsson, PhD ’91, newer or low-skilled workers stand to benefit more from generative AI than their more experienced counterparts.

In a working paper for the National Bureau of Economic Research, the co-authors found that contact center agents with access to an AI assistant saw a 14% boost in productivity. This was especially true for new or low-skilled workers, who were essentially upskilled by the technology instead of being replaced by it.

“Generative AI seems to be able to decrease inequality in productivity, helping lower-skilled workers significantly but with little effect on high-skilled workers,” Li said. “Without access to an AI tool, less-experienced workers would slowly get better at their jobs. Now they can get better faster.”

When Manish Raghavan (Drew Houston (2005) Career Development Professor; Assistant Professor, Information Technology) taught 15.563 Artificial Intelligence for Business last spring, he was not trying to teach MBA students about the technical rigors of computer engineering or machine learning. Rather, he wanted them to know how to make good business decisions about deploying—or not deploying—AI-based products and services by considering the practical, social, and ethical challenges this would entail.

“Most classes in computer science departments teach you how to solve a particular problem,” says Raghavan, whose faculty appointment is shared with the MIT Schwarzman College of Computing. “The course I wanted to teach was, if you could solve that problem, what would you do with it? What would you do in the real world with this magical tool?”


Most conversations about deploying AI focus on its potential use by the workforce, leaving the “demand side” of the equation by the wayside, according to Yunhao Zhang, SM ’20, PhD ’23. To counter this, Zhang and Renée Richardson Gosline (Senior Lecturer, Marketing; Research Scientist, MIT Initiative on the Digital Economy) collaborated on research concerning how people perceive work created by generative AI, humans, or both.

When people knew a product’s source, they expressed a positive bias toward content created by humans and, contrary to the traditional idea of “algorithmic aversion,” no aversion toward AI-generated content. When respondents were not told how content was created, they preferred AI-generated content. “We ought to try to understand as much as we can about the ways people think about AI, given how quickly everything is moving,” said Gosline.

Despite attempts by world governments to address new AI technologies with new standards and assurance mechanisms, these efforts are limited by existing legal frameworks incapable of handling—let alone fully understanding—what is going on. As such, Thomas W. Malone (Patrick J. McGovern (1959) Professor of Management; Director, MIT Center for Collective Intelligence) and his colleagues believe the time is right for the creation of a global AI observatory (GAIO).

Much like the Intergovernmental Panel on Climate Change, formed by the United Nations in 1988, a GAIO would be better able to identify risks, opportunities, and developments, and to predict AI’s possible global effects. It could accomplish this by creating a global, standardized incident-reporting database; assembling a registry of crucial AI systems with the largest social and economic impacts; bringing together global knowledge about the impacts of AI on critical areas; and organizing global debate through an annual report on the state of AI.