Generative AI to advance human rights

The International Center for Advocates Against Discrimination (ICAAD) is a human rights advocacy organization that equips those most harmed by systemic inequity with the necessary tools to drive systemic change. ICAAD empowers marginalized communities by building their capacity, strengthening their resilience, and fostering innovation. Working with MIT Sloan’s first cohort of Generative AI Lab (GenAI-Lab) students, ICAAD is accelerating its groundbreaking work, helping communities and governments improve the lives of those who need it most.

ICAAD’s flagship program, TrackGBV, is a one-of-a-kind, large-scale empirical study of judicial bias in domestic and sexual violence sentencing decisions across Pacific Island countries. Over the past decade, the center has manually reviewed more than 6,000 sentencing decisions, of which approximately 3,000 fit the research methodology and form the study’s core dataset. This analysis uncovered judicial bias in 52% of cases—contributing to reduced accountability for perpetrators and retraumatization for survivors. 

“Our data has informed judicial directives in Fiji and legislative reforms in the Solomon Islands and Vanuatu,” says Jyoti Diwan, ICAAD’s director of data analytics and insights. “We’ve also supported institutionalization of our methodology within women’s rights organizations, and developed the Pacific region’s only gender-based violence case law dataset, visualized through our TrackGBV Pacific Dashboard.”

ImpartialAI

To speed up this time-consuming process and expand the transformative model globally, the center launched ImpartialAI, a two-phase project whose first phase automates the extraction and evaluation of judicial bias using generative AI. With the help of MIT Sloan’s GenAI-Lab students, the center tested and validated a key component of ImpartialAI’s automation pipeline. The project was co-led by Shubham Kumar Jain, a full-time Adobe machine learning engineer volunteering with ICAAD, whose technical expertise has been instrumental in shaping ImpartialAI.

Shubham Kumar Jain | Machine learning engineer, Adobe; volunteer, ICAAD
This project offered a valuable applied learning opportunity in legal natural language processing for the students.
Jyoti Diwan | Director of data analytics and insights, ICAAD
[This project] supported a critical milestone in our TrackGBV automation journey—testing whether generative AI could extract features from gender-based violence sentencing decisions with sufficient accuracy, based on what had previously required 3,000+ hours of manual legal review.

According to Diwan, the students’ work has directly informed ICAAD’s broader automation strategy, as it scales analysis across all seven countries in its pilot dataset. Ultimately, she predicts, the pipeline will power an advanced natural-language-based chatbot, equipping judiciaries with the tools to quantify the scope of discrimination within their courts, design targeted training programs, and shape judicial policy so that justice is consistent, transparent, and accountable. 
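
To make the feature-extraction step concrete, the sketch below shows one way an LLM could be prompted to pull structured bias indicators from a sentencing decision. It is a minimal, hypothetical illustration: the feature list, prompt wording, and model choice are assumptions for the example, not ICAAD’s actual ImpartialAI pipeline.

```python
# Hypothetical sketch of LLM-based feature extraction from a sentencing decision.
# Illustrative only; the features, prompt, and model are assumptions, not ICAAD's pipeline.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Example bias indicators a reviewer might look for (illustrative, not ICAAD's methodology)
FEATURES = [
    "gender_stereotyping_language",    # remarks about the survivor's conduct or character
    "customary_reconciliation_cited",  # apology or forgiveness raised as a mitigating factor
    "sentence_reduced_for_bias_factor",
]

PROMPT = (
    "You are assisting a legal research team. Read the sentencing decision below and "
    "return a JSON object that gives a true/false value and a short supporting quote "
    f"for each of these features: {', '.join(FEATURES)}.\n\n"
    "Decision text:\n{decision_text}"
)

def extract_features(decision_text: str) -> dict:
    """Ask the model for structured bias indicators and parse its JSON reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT.format(decision_text=decision_text)}],
        response_format={"type": "json_object"},  # request machine-readable output
        temperature=0,  # favor reproducible extractions over creative ones
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = (
        "The court notes that the parties have reconciled and that the complainant "
        "has forgiven the accused, which is taken into account in mitigation..."
    )
    print(extract_features(sample))
```

In a setup like this, each extracted feature would still be checked against manually coded decisions before being trusted at scale, which is essentially the validation exercise the students supported.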

A tool for good

By grounding the students in a high-impact legal domain with deep ethical implications, the project gave them exposure to how generative AI can be applied responsibly in the nonprofit sector. 

“I realized AI isn’t just about innovation, it’s about amplifying the voices of those already working for change, especially in areas like gender justice,” says Eliza Weaver, MBA ’26. “I’m leaving the lab more committed to applying technology with purpose and responsibility.”

Ziyu (Christina) Ye | MBA ’25
This project reflects my broader mission, which is to use my technical and product skills—and AI for good—to improve lives and promote equity.

As ICAAD continues to refine the pipeline the MIT team helped implement, MIT Sloan Master of Finance student Selina Liang, MFin ’25, was so inspired by the project that she continues to volunteer in support of the ongoing work.

“Working with ICAAD, I realized how AI can bridge gaps in access to justice—especially for marginalized communities,” Liang says. “Seeing our technical work directly contribute to identifying gender biases in legal systems reinforced my belief in AI as a tool for social impact.”

Next-gen GenAI

Since the conclusion of the MIT project, ICAAD has built on the foundation the students helped solidify to expand ImpartialAI’s coverage and tackle more complex prompts and features that require deeper domain understanding. For MIT Sloan Master of Business Analytics student Andrés Camarillo, MBAn ’25, the GenAI-Lab experience gave him a glimpse into the future. 

“Just a few years ago, large language models weren’t accurate or nuanced enough to handle tasks like this one, which require careful instruction-following and legal context. But they are now,” Camarillo says. “And as they keep improving, I plan to keep an eye on the meaningful problems they become newly capable of solving.”

As generative AI evolves, it will, like any new technology, bring many challenges and opportunities. As its uses rapidly expand, many GenAI-Lab students will work to harness its power for good.

Albert Scerbo | Associate director, MIT Initiative on the Digital Economy
There's a lot of reasonable consternation about generative AI from the nonprofit sector. But I think that this project shows very concretely the potential for thoughtful, responsible AI to really accelerate projects that improve the human condition.