Should algorithms be in the driver's seat?

Give 50 insurance underwriters the same task, and the chances are high that no two of them will do it exactly the same way. That variability introduces what Nobel laureate Daniel Kahneman calls “noise,” and it can be costly for an organization.

“In many occupations, a single person makes decisions on behalf of the whole organization, like a triage nurse in the emergency room. And if you have a lot of noise, it sets a ceiling on how accurate you can be,” Kahneman said in a conversation with MIT Sloan professor Erik Brynjolfsson at the 2018 Digital Economy Conference in New York City on April 27.

Kahneman, the author of the best-selling 2011 book “Thinking, Fast and Slow,” won the Nobel Prize in economics in 2002 for his work challenging assumptions about human rationality and judgment. A psychologist by training, he is one of the few non-economists to have won the prize.

Noise is easier to measure than bias, though, and that makes it easier to control, he said. The best way to control it is to turn the task over to a machine. Algorithms are effective at reducing noise, and the costs associated with it, because each run of the same task is guaranteed to produce the same result, he said.
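To make that distinction concrete, here is a minimal sketch in Python. It is purely illustrative and not drawn from Kahneman's talk: the premiums, the risk score, and the `algorithmic_quote` rule are invented. The point it shows is that noise is the spread across judgments of the same case and can be measured without knowing the right answer, bias is the average deviation from a known truth, and a deterministic rule has zero noise by construction.

```python
# Illustrative only: invented numbers standing in for underwriters quoting one case.
import random
import statistics

random.seed(0)

true_premium = 1_000  # assumed "correct" quote for this case (needed only to measure bias)

# 50 human underwriters quoting the same case: each adds personal variability.
human_quotes = [true_premium + random.gauss(50, 200) for _ in range(50)]

# A simple, hypothetical rule-based algorithm: same input, same output, every time.
def algorithmic_quote(case_risk_score: float) -> float:
    return 800 + 400 * case_risk_score

algo_quotes = [algorithmic_quote(0.5) for _ in range(50)]

# Noise: spread across judgments of the same case (no ground truth required).
print("human noise (std dev):", round(statistics.stdev(human_quotes), 1))
print("algorithm noise (std dev):", statistics.stdev(algo_quotes))  # exactly 0.0

# Bias: systematic deviation from the truth (requires knowing the true value).
print("human bias:", round(statistics.mean(human_quotes) - true_premium, 1))
print("algorithm bias:", algorithmic_quote(0.5) - true_premium)
```

The sketch also shows why noise is the easier of the two to measure in practice: the standard deviation of the quotes can be computed from the judgments alone, while bias cannot be assessed without an outside standard.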

“The big advantage that algorithms have over humans is that they don’t have noise,” Kahneman said. “You present them the same problem twice, and you get the same output. That’s just not true of people.” 

Humans can still provide valuable inputs that a machine can't, such as impressions and judgments, but they're not particularly good at integrating that information in a reliable and robust way, Kahneman said. The primary role of a person in a human-machine relationship should be to serve as a fail-safe in an emergency.

One example, he said, would be a bank where a machine is responsible for approving loans, but a human who notices that an approved applicant has been arrested for fraud in the past could override the algorithm. Such cases are rare, he said.
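A rough sketch of that division of labor might look like the following. All names, fields, and thresholds are hypothetical; the idea is simply that the algorithm's decision stands by default, and the human override is confined to one rare, specific condition rather than to general impressions.

```python
# Hypothetical illustration: the algorithm decides, the human is a narrow fail-safe.
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    past_fraud_arrest: bool  # the kind of outside information a human might notice

def algorithm_approves(applicant: Applicant) -> bool:
    # Deterministic rule: the same applicant data yields the same decision every time.
    return applicant.credit_score >= 650

def final_decision(applicant: Applicant, human_flags_fraud: bool) -> bool:
    decision = algorithm_approves(applicant)
    # The human acts only as a fail-safe for a rare, pre-defined condition;
    # routine second-guessing of the algorithm is not part of the loop.
    if decision and human_flags_fraud and applicant.past_fraud_arrest:
        return False
    return decision

print(final_decision(Applicant(credit_score=720, past_fraud_arrest=False), human_flags_fraud=False))  # True
print(final_decision(Applicant(credit_score=720, past_fraud_arrest=True), human_flags_fraud=True))    # False
```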

“You can combine humans and machines, provided that machines have the last word,” Kahneman said.

He added: “In general, if you allow people to override algorithms you lose validity, because they override too often and they override on the basis of their impressions, which are biased and inaccurate and noisy.”