Machine learning tools are used in a variety of fields, from sales to medicine. But getting tech into the workplace is just one step — these tools are only successful if they’re integrated into workflows, and if people trust them enough to depend on them.
A key to successful adoption is back-and-forth dialogue between technology developers and end users, according to new research from MIT Sloan professor Katherine Kellogg, Sara Singer of Stanford University, Ari Galper of Columbia University, and Deborah Viola of Westchester Medical Center. The paper was published in Health Care Management Review.
Deploying workplace tools is often seen as one-directional — developers build them and hand them off to users. This doesn’t work for machine learning tools, the researchers write, because of the data required to train machine learning models and the opacity of the models. Neither developers nor end users may fully grasp how time-intensive this process is, or how much patience and input it requires.
Kellogg and her co-authors looked at how two machine learning-based clinical decision support tools were rolled out at the Westchester Medical Center Health Network in New York state. Clinical decision support tools get the right information to the right person at the right time, according to the researchers, and they can improve quality of care, reduce errors and costs, and ease cognitive burdens of health care providers.
The researchers identified two reasons why back-and-forth dialogue was an important part of integrating the tools.
1. Because end users best understand the data.
Data used to train machine learning tools must be representative of the target population. This requires a lot of training data, and curating data is an ongoing project.
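As a rough illustration of what checking representativeness can look like in practice, here is a minimal sketch that compares the demographic mix of a training set against the target population. The group names, percentages, and 5-point tolerance are all illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: flag groups whose share of the training data
# differs from their share of the target population by more than a
# tolerance. All numbers below are made up for illustration.

def representativeness_gaps(train_pct, population_pct, tolerance=5.0):
    """Return groups whose training share differs from the population
    share by more than `tolerance` percentage points (train - population)."""
    return {
        group: round(train_pct.get(group, 0.0) - population_pct[group], 1)
        for group in population_pct
        if abs(train_pct.get(group, 0.0) - population_pct[group]) > tolerance
    }

train = {"under_40": 15.0, "40_to_64": 55.0, "65_plus": 30.0}
population = {"under_40": 25.0, "40_to_64": 45.0, "65_plus": 30.0}
print(representativeness_gaps(train, population))
# {'under_40': -10.0, '40_to_64': 10.0}
```

A check like this would be rerun as new data arrives, which is part of why the researchers describe curating data as an ongoing project.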
The Low Bed Tool, which was designed to predict bed availability in intensive care units and other units, required a large amount of data from multiple sources. Developers had to work with end users to identify and reconcile differences between groups of data, such as which measure of bed availability the tool was using, and to unify reporting methods. Over time, the developers also identified a single key user for this tool and took her needs and perspectives into account, adapting the technology to fit her needs and workflow.
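To make the kind of reconciliation work described above concrete, here is a minimal sketch of unifying two conflicting measures of bed availability from different reporting systems. The field names and the unification rule (treating staffed beds as the binding constraint, and flagging disagreements for human review) are assumptions for illustration, not details of the Low Bed Tool itself.

```python
# Hypothetical sketch: reconcile two sources that report bed availability
# differently before either can feed a single training set. Field names
# and the reconciliation rule are illustrative assumptions.

def unify_bed_counts(census_record, nursing_record):
    """Return one bed-availability figure from two conflicting sources.

    census_record counts staffed beds; nursing_record counts physically
    open beds. Staffed beds are treated as the binding constraint, and
    records where the two measures disagree are flagged for review.
    """
    staffed = census_record["staffed_beds"] - census_record["occupied_beds"]
    open_beds = nursing_record["open_beds"] - nursing_record["occupied_beds"]
    return {
        "available_beds": min(staffed, open_beds),
        "needs_review": census_record["staffed_beds"] != nursing_record["open_beds"],
    }

record = unify_bed_counts(
    {"staffed_beds": 20, "occupied_beds": 18},
    {"occupied_beds": 18, "open_beds": 22},
)
print(record)  # {'available_beds': 2, 'needs_review': True}
```

Deciding which measure should win, and when a human should review, is exactly the kind of judgment that requires end-user input rather than a developer guess.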
This back-and-forth process helped make the tool more accurate and useful for clinicians, and managers reported improved capacity and shorter wait times once the tool was adopted.
2. Because machine learning models are opaque.
Machine learning models train themselves, using patterns and inference to make recommendations. Users can be reluctant to adopt the tools if they don’t trust or understand how the models work, or if there is no review comparing what the machine learning tool predicts with what experienced employees predict.
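The side-by-side review mentioned above can be very simple in practice. Here is a minimal sketch, with entirely made-up case IDs and labels, of surfacing the cases where a model and experienced staff disagree — the cases most worth discussing in a developer–user dialogue.

```python
# Hypothetical sketch: list cases where model predictions and staff
# judgments diverge, so those cases can be reviewed together. All case
# IDs and labels are made up for illustration.

def disagreement_cases(model_preds, staff_preds):
    """Return case IDs where the model's label differs from staff's."""
    return [case for case in model_preds
            if model_preds[case] != staff_preds.get(case)]

model = {"A": "high", "B": "low", "C": "low"}
staff = {"A": "high", "B": "low", "C": "high"}
print(disagreement_cases(model, staff))  # ['C']
```

Reviewing disagreements like case "C" is what surfaced the missing information in the readmission example that follows.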
For the Readmission Risk Tool, which identified patients most at risk of returning to the hospital soon after discharge, care managers noticed that there were some situations in which the model did not identify patients as high risk, but managers did. The machine learning tool wasn’t capturing important information that care managers used to predict whether patients were at risk of readmission, such as whether the patient lived alone or did not have a car. This information was unstructured data — free text in the “notes” section of the electronic record — and couldn’t be added to the model. So developers added a dashboard that displayed not just the readmission risk score, but also that additional information alongside the score, to help care managers with their decision making.
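A dashboard row of this kind might pair the model's score with context flags mined from the notes, as in the minimal sketch below. The keyword phrases, field names, and 0.7 risk threshold are illustrative assumptions, not details of the actual tool.

```python
# Hypothetical sketch: pair a model's readmission risk score with
# human-readable flags pulled from free-text notes. Phrases, fields,
# and the risk threshold are illustrative assumptions.

FLAG_PHRASES = {
    "lives alone": "no household support",
    "no car": "transportation barrier",
}

def dashboard_row(patient_id, risk_score, notes):
    """Build one dashboard entry: the score plus context from the notes."""
    text = notes.lower()
    flags = [label for phrase, label in FLAG_PHRASES.items() if phrase in text]
    return {
        "patient": patient_id,
        "risk_score": risk_score,
        "high_risk": risk_score >= 0.7,  # assumed threshold
        "context_flags": flags,
    }

row = dashboard_row("pt-001", 0.45, "Patient lives alone; reports no car.")
print(row["context_flags"])
# ['no household support', 'transportation barrier']
```

The design choice here mirrors the one in the article: rather than forcing unstructured text into the model, the extra context is shown alongside the score so the final judgment stays with the care manager.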
“Machine learning tools can be a powerful way for organizations to improve decision making, efficiency, and innovation. But, in order for them to work, managers need to give developers and end users the time and space for collaborative iteration,” Kellogg said. “They’ll need to engage in a back-and-forth process with one another to build, evaluate, and refine the tools, in order for the tools to be useful in practice.”