
Ideas Made to Matter


Use data to your advantage in 2024: New ideas from MIT Sloan Management Review


New insights from MIT Sloan Management Review offer guidance for using data in the coming year. Read on to learn how assessing employee reviews can drive positive change, how shared terminology can lead to informed conversations about monetizing data, and how creating roles and processes for evaluating data’s role in the organization can ensure that data is used in meaningful and ethical ways.

Assess employee reviews, and prioritize what drives satisfaction

Health care is facing a staffing crisis, with job dissatisfaction and burnout poised to leave the industry with a shortfall of 450,000 nurses in just two years. In response, many organizations have sought to address the issue with higher salaries, more on-the-job perks, and an emphasis on nursing’s mission-driven work.

Truly solving the nursing crisis, though, requires an understanding of what causes dissatisfaction and burnout in the first place. MIT Sloan senior lecturer and CultureX co-founder Charles Sull and his co-authors analyzed more than 150,000 Glassdoor reviews written by nurses since the beginning of the pandemic. They found toxic culture and lack of support from leadership to be more predictive of dissatisfaction than compensation or workload. (To take a deep dive into the data, visit the Nursing Satisfaction Index, also published by MIT Sloan Management Review.)

The next step is using this data to identify areas of improvement and prioritize which actions to take. The authors recommend learning from nursing staffing agencies, which earned many more positive reviews than hospitals and health care systems. Here, the data showed that travel nurses employed by staffing agencies benefited from a “safe space” to provide feedback, quick responses to complaints, transparent decision-making, and flexibility in scheduling.

It’s imperative that organizations act consistently on these types of insights, the authors write. Leading health care organizations must identify problems within individual business units, test solutions using evidence-based methods, and implement them more broadly if they’re deemed a success.

Read: The real issues driving the nursing crisis

Improve conversations about monetizing data

For many business leaders, discussions about whether to generate value from data descend into a debate about terminology — and reach an impasse. In the new book “Data Is Everybody’s Business,” the MIT Center for Information Systems Research’s Barbara H. Wixom, Cynthia M. Beath, and Leslie Owens offer two frameworks for understanding data products and going beyond talking about monetizing data to actually doing it.

The first framework identifies three approaches to monetizing data:

  • Improve work tasks by cutting costs or increasing efficiency.

  • Create new, wraparound products or services for existing products.

  • Sell data assets that have tangible value to customers.

The second framework explains the three stages of creating value:

  • Put data into someone’s hands.

  • Offer insight.

  • Provide recommendations, which could include automatically completing tasks.

Key to the process, the authors argue, is periodically reviewing data monetization strategy in the context of these frameworks. If an organization focuses heavily on process improvement or data products, then it might be time to evolve further. On the other hand, if certain types of initiatives are blocked — such as, say, wraparound products that provide recommendations — then it’s worth exploring what needs to happen to enable these opportunities.
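Taken together, the two frameworks form a grid of approach-by-stage combinations, and the periodic review the authors describe amounts to asking which cells of that grid an organization occupies. As a minimal sketch (the labels and data structure here are our own shorthand, not from the book), a team could tally its initiatives against that grid to spot concentration in one area or empty cells worth exploring:

```python
# Hypothetical sketch: cross the book's two frameworks into a grid and
# count initiatives per cell. All names below are illustrative.
from collections import Counter

APPROACHES = ("improve", "wrap", "sell")      # improve work tasks / wraparound offerings / sell data assets
STAGES = ("deliver", "insight", "recommend")  # put data in hands / offer insight / provide recommendations

def portfolio_gaps(initiatives):
    """Count initiatives per (approach, stage) cell and list empty cells."""
    counts = Counter((i["approach"], i["stage"]) for i in initiatives)
    empty = [(a, s) for a in APPROACHES for s in STAGES if counts[(a, s)] == 0]
    return counts, empty

initiatives = [
    {"name": "Invoice automation", "approach": "improve", "stage": "recommend"},
    {"name": "Usage dashboard",    "approach": "wrap",    "stage": "insight"},
]
counts, empty = portfolio_gaps(initiatives)
print(empty)  # empty cells: either blocked initiatives or unexplored opportunities
```

An empty cell such as wraparound products that provide recommendations would, per the authors, prompt the question of what needs to happen to enable that opportunity.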

Read: How to have better strategy conversations about monetizing data

Bridge gaps between data science and operations

Efforts to apply data and analytics to business operations and decision-making often hit a roadblock. Line managers want predictability and control, while data scientists tend to provoke disruption. These conflicts aren’t trivial, and they threaten an organization’s ability to deploy useful data science models.

As a remedy, Thomas H. Davenport, of the MIT Initiative on the Digital Economy, and Data Quality Solutions president Thomas C. Redman propose a connector role to bridge gaps between data and business departments. To ensure that the data connector fulfills a strategic role and isn’t simply solving one tactical problem after another, organizations should approach this role in a disciplined way and follow three steps:

  • Define the project process and the people involved. Recognize that each phase of the project — framing the problem, preparing data, developing the data model, deploying the model, and so on — will require contributions from different parts of the business.

  • Evaluate the connector’s role. This often involves a mix of framing the problem in data science terms, “translating” between business and technical teams, ensuring data quality, and tracking progress.

  • Clarify the role for connectors. While this is straightforward for individual projects, clarity and context help data science teams define their responsibilities and establish their value for the entire organization.

Read: The rise of connector roles in data science

Ensure ethical and effective use of artificial intelligence

Though nearly 75% of senior executives believe that ethical guidelines for using AI are important, only 6% of organizations have developed them. In large part, this is because they don’t have enough AI projects, processes, or systems in place to gauge whether they’ve met their standards for AI ethics.


To help these organizations, Davenport and Randy Bean, an innovation fellow at Wavestone, traced the evolution of AI ethics at Unilever.

The company began its AI ethics work by establishing simple policies: A human should make any decision that will significantly impact an individual, a Unilever employee must be accountable for any decision the AI model generates, and so on.

Leaders quickly learned that policies alone weren’t enough and subsequently took two important steps. One was creating a compliance assessment process for each AI system, including those developed by external partners. The other was assessing effectiveness as well as ethics, given that using an ineffective AI model carries significant risk as well.

Now, AI products are rated for risk on a red, yellow, and green scale at three stages: initial triage, further analysis, and final mitigation. Based on this scale, a “red” product shouldn’t be deployed at all; a “yellow” product, meanwhile, comes with acceptable risk and can be used, but the business owner is responsible for the product and its outputs.
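The staged rating can be read as a simple deployment gate. As a minimal sketch (assuming, as our own simplification rather than anything the article states, that the worst rating across the three stages governs the outcome):

```python
# Hypothetical sketch of a staged red/yellow/green review like the one
# described at Unilever. Stage names and the worst-rating rule are assumptions.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def deployment_decision(ratings):
    """ratings: dict mapping stage name -> 'red' | 'yellow' | 'green'."""
    worst = max(ratings.values(), key=SEVERITY.__getitem__)
    if worst == "red":
        return "do not deploy"
    if worst == "yellow":
        return "deploy; business owner accountable for outputs"
    return "deploy"

decision = deployment_decision(
    {"triage": "green", "analysis": "yellow", "mitigation": "yellow"}
)
print(decision)  # deploy; business owner accountable for outputs
```

The point of the sketch is the asymmetry the article describes: a red rating blocks deployment outright, while a yellow rating permits it but shifts accountability for the product and its outputs to the business owner.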

Read: AI ethics at Unilever — From policy to process

For more information, contact Zach Church, Editorial & Digital Media Director, (617) 324-0804.