Ideas Made to Matter

How can human-centered AI fight bias in machines and people?

Companies invested roughly $50 billion in artificial intelligence systems last year, a figure expected to more than double, to $110 billion, by 2024. Such an explosion in investment raises many questions, but central among them for MIT Sloan senior lecturer Renée Richardson Gosline is how to recognize and counteract the bias that exists within AI-driven decision-making.

“There has been a tremendous amount of research pointing out issues of algorithmic bias and the threat this poses systemically,” Gosline says in a new MIT Sloan Experts Series talk, available below. “This is a massive issue — one that I don’t think we can take seriously enough.”

In the discussion with data scientist Cathy O’Neil and Salesforce Architect of Ethical AI Practice Kathy Baxter, Gosline explains human-centered AI, the practice of including input from people of different backgrounds, experiences, and lifestyles in the design of AI systems. Prevailing wisdom assumes that the role of algorithms “is to correct the biases that humans have,” Gosline says. “This follows the assumption that algorithms can simply come in and help us make better decisions — and I don’t think that’s an assumption we should operate under.”

Rather, we need a thorough understanding of the biases that exist in both humans and algorithms, along with the ways different groups place their trust in AI. With this information, decision-making processes can be designed in which algorithms and humans work jointly to compensate for each other's blind spots and arrive at clearer, less prejudiced outcomes.

Watch the full talk below to learn more about:

  • The value of greater transparency around the data and assumptions that feed into AI models;
  • The need to address the vast information asymmetry between data scientists working in this field and everyone else whose lives are deeply affected by the work — those who “eat the output,” as Gosline puts it;
  • The ways in which Black people and people of color disproportionately “end up on the wrong side” of algorithmic bias;
  • The ability and responsibility of globally influential companies, like Salesforce, to use their position to advance more equitable approaches to AI.

“If we truly understand how these systems can both constrain and empower humans,” Gosline says, “we will do a better job improving them and rooting out bias.”

For more information, contact Zach Church, Editorial & Digital Media Director, (617) 324-0804.