There’s a huge appetite for artificial intelligence algorithms to generate predictive insights and achieve efficiency and productivity gains. But as AI is used in new ways, it’s important to explore the technology’s broad implications.
MIT Sloan professor Manish Raghavan, whose faculty appointment is shared with the MIT Schwarzman College of Computing, has been working to understand the societal implications of AI. That work can be challenging: the technology is being applied to a diverse set of problems that involve varying levels of risk.
“If we put AI in a medical domain, that’s going to look very different than putting it in an employment domain and different too from building code assistance for software engineers,” Raghavan said during a talk at the 2025 MIT AI Conference. “There are different risks and idiosyncratic problems. We need to incorporate domain expertise in order to make good decisions.”
Raghavan discussed how his research on using AI in medicine, hiring, and creative pursuits shows both the promise of the technology, such as making better decisions and addressing bias, and some of its limits and concerns, such as homogeneous output.
AI can help humans make complicated decisions
Can AI be trusted to help with important decisions, such as the difficult choices medical professionals must make? Raghavan and co-researchers at MIT and the Yale School of Medicine looked at using AI to help doctors determine which emergency room patients should be admitted to the hospital and which can be sent home. While standards of care guide triage decisions, doctors have discretion to act on information they have gathered themselves. The stakes are high: a wrong call can send home a patient who needs urgent care, or spend limited resources and physician hours on one who does not.
AI algorithms and predictive models can generate a risk score for a patient from vital statistics, but physicians also draw on their own experience and observations to make these decisions. AI-driven risk scores do not incorporate those real-world observations, which makes it difficult for algorithms alone to weigh complex trade-offs. “Doctors and AI make different mistakes, have different sources of information,” Raghavan said.
The researchers created a framework that combines both sources of information to address these limitations. They found that algorithms are good at grouping individuals who seem to have similar levels of risk and especially adept at identifying high-risk patients. Physicians excel at finding patients who present as medium risk but might actually be higher risk based on observable information or characteristics that are not readily available to algorithms.
Combining both sources of information “lets us make decisions that are better than either algorithms or doctors in isolation,” Raghavan said. “We make fewer mistakes, and we can automate a large fraction of cases and maintain the quality of our decisions.”
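To make the idea concrete, here is a minimal sketch of that kind of division of labor. The thresholds, argument names, and deferral rule below are illustrative assumptions, not the researchers' actual framework: the algorithm decides the cases where its risk score is confidently high or low, and the ambiguous middle cases defer to the physician's own call.

```python
# Illustrative sketch of combining an algorithmic risk score with physician
# judgment in a triage decision. Thresholds and argument names are
# hypothetical, not taken from the study described above.

def triage_decision(algo_risk: float, physician_admit: bool,
                    high: float = 0.8, low: float = 0.2) -> str:
    """Return 'admit' or 'discharge' for one emergency room patient.

    algo_risk       -- model-predicted probability of an adverse outcome
    physician_admit -- the physician's own admit/discharge call
    high, low       -- confidence bands inside which the algorithm decides
    """
    if algo_risk >= high:      # algorithm is confident the patient is high risk
        return "admit"
    if algo_risk <= low:       # algorithm is confident the patient is low risk
        return "discharge"
    # Ambiguous middle band: defer to the physician, who may have observed
    # information (appearance, history, context) the model never sees.
    return "admit" if physician_admit else "discharge"


# Example: the model scores a patient as medium risk, but the physician
# flags something concerning, so the patient is admitted.
print(triage_decision(algo_risk=0.55, physician_admit=True))   # -> "admit"
```

In this setup the clear-cut cases are handled automatically, while the patients who look borderline to the algorithm, the group where physicians tend to add the most value, stay with the doctor.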
There are ways to address algorithmic discrimination
Companies are using AI throughout the hiring process, from sorting through resumes to screening candidates in preliminary interviews. That brings risks, most notably discrimination, a widespread concern with algorithmic decision-making. When AI is used to make hiring decisions, algorithms might explicitly treat applicants differently based on demographic data. In other cases, companies might look back and find unjustified discrepancies in the rates at which people from different demographic groups were hired.
But unlike human recruiters, AI algorithms can be audited before they are used for hiring — something that is legally required in some places, including New York City, Raghavan said. While these pre-deployment audits can prevent biased decisions in a way that wouldn’t be possible with a human recruiter, they raise new challenges, such as how to effectively measure discrimination, he said.
Different AI models vary in what they do best, and people often focus on identifying the most useful or accurate model, Raghavan noted — but it is also possible to look for and use the fairest model. “If you want an accurate model that also doesn’t discriminate, that is something you can specify in your model search,” he said.
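A rough sketch of what such a fairness-aware model search could look like in practice follows. The data columns, the demographic-parity measure, and the 0.8 cutoff (a common "four-fifths rule" heuristic) are assumptions for illustration, not a description of any particular audit standard or of Raghavan's work.

```python
# Illustrative sketch: evaluate candidate models on both accuracy and a
# simple demographic-parity gap, then keep the most accurate model among
# those that clear a fairness threshold. Synthetic data stands in for real
# applicant features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


def selection_rate_ratio(preds, group):
    """Ratio of positive-prediction rates between groups (min over max)."""
    rates = [preds[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)


def pick_fair_accurate_model(X, y, group, candidates, min_ratio=0.8):
    X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
        X, y, group, test_size=0.3, random_state=0)
    best, best_acc = None, -1.0
    for model in candidates:
        model.fit(X_tr, y_tr)
        preds = model.predict(X_te)
        acc = (preds == y_te).mean()
        # Only consider models that are "fair enough"; among those, keep
        # the most accurate one seen so far.
        if selection_rate_ratio(preds, g_te) >= min_ratio and acc > best_acc:
            best, best_acc = model, acc
    return best


# Usage with synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
group = rng.integers(0, 2, size=500)
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=50)]
print(pick_fair_accurate_model(X, y, group, models))
```

The point is simply that fairness can be a first-class criterion in model selection, alongside accuracy, rather than something checked after the fact.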
AI can be used for creative pursuits — but monoculture is a concern
As humans move from using AI for prediction to using it to produce content, the opportunities are vast.
The downside of using AI for creative pursuits is homogeneous production, Raghavan said. People using AI tend to produce more ideas, but the ideas they generate are similar because they have used similar tools in similar ways. “We have this monoculture created by the use of AI,” Raghavan said.
In a recent paper, Raghavan looked at the balance between idea quality and creativity, using the board game Scattergories to validate his results. In the game, players are asked to come up with a word that fits a given category and starts with a given letter — for example, an animal with a tail whose name starts with an L. Players are penalized for having the same answer, so it is important to be both accurate and creative.
Raghavan found that the more creative AI models were programmed to be, the less accurate they became. He also found that creativity is rewarded when there is stronger competition. There is further work to do in terms of evaluating the quality of AI models in creative and competitive settings, Raghavan said.
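As a rough illustration of why accuracy and creativity pull against each other in this setting, consider a Scattergories-style scoring rule in which an answer earns a point only if it both fits the category and is not duplicated by another player. The scoring details and word list below are assumptions for the example, not the paper's actual evaluation.

```python
# Illustrative Scattergories-style scoring: an answer scores only if it is
# valid (fits the category and starting letter) and unique among players.
# The word list and validity check are stand-ins for the example.

VALID_ANIMALS_L = {"lion", "lemur", "leopard", "llama", "lynx"}  # assumed word list


def score_round(answers: dict, valid_words: set) -> dict:
    """answers maps player -> answer; returns player -> score for the round."""
    normalized = {p: a.strip().lower() for p, a in answers.items()}
    counts = {}
    for a in normalized.values():
        counts[a] = counts.get(a, 0) + 1
    scores = {}
    for player, answer in normalized.items():
        is_valid = answer in valid_words     # accuracy: fits category and letter
        is_unique = counts[answer] == 1      # creativity: no other player matched it
        scores[player] = 1 if (is_valid and is_unique) else 0
    return scores


# Two players give the obvious answer and collide; the more creative one scores.
print(score_round({"model_a": "lion", "model_b": "lion", "model_c": "lemur"},
                  VALID_ANIMALS_L))   # {'model_a': 0, 'model_b': 0, 'model_c': 1}
```

An obvious answer like "lion" is more likely to be valid but also more likely to collide with other players' answers, which is exactly the tension between accuracy and creativity, and between competition and conformity, described above.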
The key takeaway from this and the other areas of research is that “it’s important to try to identify these high-level risks or new phenomena that can occur when we put AI in society,” Raghavan said. “Things occur when you use AI in a setting that you weren’t before.”
