
MIT Sloan professor uses machine learning to design crime prediction models

CAMBRIDGE, Mass., April 23, 2015 – In the U.S., a minority of individuals commit the majority of the crimes. Currently, about two-thirds of released prisoners are arrested within three years of their release. To address the question of whether a particular person is likely to be arrested after release from prison, MIT Sloan School of Management Prof. Cynthia Rudin developed and applied machine learning methods to create accurate, transparent, and interpretable prediction models of recidivism.

“Recidivism prediction models have many important applications, such as allocating social services, policymaking, sentencing, probation, and bail,” says Rudin. “For sentencing and bail decisions, prosecutors and judges must work together to understand the risk posed by various individuals. These machine learning models ensure that everyone is on the same page.”

The problem of creating a small, interpretable model is computationally very hard, but the tools Rudin’s team has developed are able to solve it. The resulting models are easy to use and understand, and are so small that they fit on an index card.

Rudin notes that predictive models for recidivism have a long history, particularly for parole and sentencing decisions. In the 1970s, the U.S. Parole Commission began using an actuarial measure, built from the records of 2,497 prisoners, to inform parole decisions. A follow-up study conducted in the 1980s led the U.S. Sentencing Commission to require the use of a predictive recidivism measure for sentencing.

“There has been some controversy as to whether accurate predictive models for recidivism need to be very complicated, or whether simple yet accurate ‘rules of thumb’ exist,” writes Rudin, who coauthored a paper on this topic with MIT students Jiaming Zeng and Berk Ustun. “Judges and prosecutors are unlikely to use a complicated black box predictive model where they can’t understand how the criminal history variables are used to predict recidivism.”

Rudin’s work shows that simpler, transparent, but equally accurate predictive models do exist. These models would be more usable and defensible for all decision-making parties.

Her machine learning models were created in a completely automated way from data, using a dataset over five times as large as that of the U.S. Sentencing Commission’s recidivism study and containing criminal histories of over 33,700 individuals. The automated models can also predict specific types of crime. These machine learning tools can be applied to predict various crime types, in different local areas, and among different populations, and each jurisdiction could create its own models.

Her models work by assigning points for various factors in a prisoner’s history. If the points add up to a certain threshold, the individual is predicted to commit another crime within three years. She points to the basic model for predicting arrest for a violent offense as an example. If the individual was younger than 17 at the time of release, two points are assigned (younger people are more likely to commit violent crime). If there are prior arrests for fatal violence, two points are assigned. If there are prior arrests for other types of violence, two points are assigned. If the age at first confinement is greater than 40, two points are deducted. If the age at first arrest was greater than 40, four points are deducted. If the total comes to one point or more, the individual is predicted to commit another violent offense within three years. The variables and the points are determined entirely by the machine learning algorithm applied to the recidivism data, not by hand.
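To make the scoring concrete, the following is a minimal sketch in Python of the violent-offense example above. The point values and the one-point threshold come directly from the example in this release; the record field names are illustrative placeholders rather than the paper’s actual feature encoding, and in Rudin’s work the variables and weights themselves are selected by the learning algorithm from the data.

```python
# Minimal sketch of the point-based scoring rule described above.
# Point values and the threshold follow the example in this release;
# the dictionary keys are hypothetical, not the paper's feature names.

def violent_recidivism_score(record):
    """Tally points from an individual's criminal history."""
    score = 0
    if record["age_at_release"] < 17:
        score += 2   # younger people are more likely to commit violent crime
    if record["prior_arrests_fatal_violence"] > 0:
        score += 2
    if record["prior_arrests_other_violence"] > 0:
        score += 2
    if record["age_at_first_confinement"] > 40:
        score -= 2
    if record["age_at_first_arrest"] > 40:
        score -= 4
    return score

def predicted_violent_rearrest(record):
    """A total of one point or more flags likely rearrest for a
    violent offense within three years."""
    return violent_recidivism_score(record) >= 1

# Example: one prior arrest for non-fatal violence yields 2 points,
# which crosses the one-point threshold.
example = {
    "age_at_release": 22,
    "prior_arrests_fatal_violence": 0,
    "prior_arrests_other_violence": 1,
    "age_at_first_confinement": 19,
    "age_at_first_arrest": 16,
}
print(predicted_violent_rearrest(example))  # True
```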

“These models can be helpful or dangerous, depending on how you use them. This is not like ‘Minority Report’ where you are convicting someone of a specific crime they haven’t committed yet,” says Rudin. “Instead, these models simply quantify the fact that people who committed more crimes in the past are more likely to commit crime in the future.”

However, she cautions that if these models are not created carefully, they could be inadvertently used for discriminatory punishment. “For instance, one would not want to use race as a factor for a model that determines sentencing; we do not want to punish people longer because of their race.”

Rudin notes that her team chose not to include socio-demographic factors, and specifically excluded race as a variable. They did test how much more accurate the model would be by including race, but found that it was not particularly useful; the models were accurate without including race as an explicit factor.

These new machine learning methods have wide applicability. “There is no reason for people to design predictive models by hand anymore. Automated ones can be simpler, more transparent, easier to use, and just as accurate,” adds Rudin.

To read the full paper on this study, “Interpretable Classification Models for Recidivism Prediction,” please visit: http://web.mit.edu/rudin/www/ZengUsRu15.pdf

For more information on Prof. Rudin, please visit: http://web.mit.edu/rudin/www/Mypage.html

The MIT Sloan School of Management is where smart, independent leaders come together to solve problems, create new organizations, and improve the world. Learn more at mitsloan.mit.edu.