
A framework for assessing AI risk

By Sara Brown

While it’s easy to get swept up in the many promises of artificial intelligence, there are mounting concerns about its safety and bias. Medical algorithms have exhibited bias that impacts disadvantaged populations, as have AI-enabled recruiting tools. AI facial recognition has led to wrongful arrests, and generative AI has the tendency to “hallucinate,” or make things up.

These scenarios highlight the importance of taking a proactive approach to AI governance to set the stage for positive results. While some technologies can be implemented and revisited periodically, responsible AI requires a more hands-on governance approach, according to Dominique Shelton Leipzig, a partner at Mayer Brown, who leads the law firm’s global data innovation practice. AI governance should start at the earliest stages of development and be reinforced with constant tending and evaluation.

“The promise of AI is super amazing, but in order to get there, there’s going to need to be some hovering,” Shelton Leipzig said at the recent EmTech MIT conference, hosted by MIT Technology Review. “The adoption of AI governance early ensures you can catch things like AI not identifying dark skin or AI ushering in cyberattacks. In that way, you protect your brand and have the opportunity to establish trust with your customers, employees, and business partners.” 

Shelton Leipzig, the author of the new book “Trust: Responsible AI, Innovation, Privacy and Data Leadership,” outlined a framework for assessing and addressing AI risk that is based on early drafts of proposed legislation around the world.

Red light, yellow light, and green light guideposts

Governments in 78 countries across six continents have worked with research scientists and others to develop draft legislation aimed at making AI safe, though the work is still evolving, Shelton Leipzig said. Nevertheless, as companies move ahead with AI initiatives, proper governance requires categorizing the risk level of each intended AI use case. She proposed a red light, yellow light, and green light framework, based on the proposed legislation, to help companies streamline AI governance and decision-making.

Red-light use cases (prohibited). Legal frameworks have identified 15 cases in which AI should be prohibited. For example, AI should not play a role in surveillance related to the exercise of democratic values such as voting, or in continuous surveillance of public spaces. Remote biometric monitoring is also barred, as is social scoring, whereby social media activity could factor into decisions about a loan or insurance, for example. “[Governments] don’t want private companies doing this because there’s a risk of too much harm,” Shelton Leipzig said.

Green-light use cases (low risk). These cases, such as AI’s use in chatbots, general customer service, product recommendations, or video games, are generally considered fair game and low risk for bias or other safety concerns, Shelton Leipzig said. Many of these applications have been used safely for several years.

Yellow-light use cases (high risk). Most types of AI fall into this category; it is where most companies are exposed and where governance is put to the test. Shelton Leipzig said there are nearly 140 examples of yellow-light AI cases, including uses of AI in HR applications, family planning and care, surveillance, democracy, and manufacturing. Evaluating creditworthiness, managing investment portfolios, and underwriting financial instruments are just a few examples of high-risk AI use in financial applications.
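To make the triage concrete, here is a minimal Python sketch of how a company might encode the red, yellow, and green tiers as a first gate before approving an AI project. The use-case names and tier assignments are illustrative, drawn loosely from the examples above rather than from the actual legislative lists.

```python
from enum import Enum

class RiskTier(Enum):
    RED = "prohibited"     # e.g., social scoring, continuous public surveillance
    YELLOW = "high_risk"   # e.g., HR screening, credit underwriting
    GREEN = "low_risk"     # e.g., chatbots, product recommendations

# Illustrative tier assignments only -- the authoritative red/yellow/green lists
# come from the draft legislation Shelton Leipzig references, not this sketch.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.RED,
    "remote_biometric_monitoring": RiskTier.RED,
    "resume_screening": RiskTier.YELLOW,
    "credit_underwriting": RiskTier.YELLOW,
    "customer_service_chatbot": RiskTier.GREEN,
    "product_recommendations": RiskTier.GREEN,
}

def required_governance(use_case: str) -> str:
    """Return the governance posture the framework implies for a use case."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is RiskTier.RED:
        return "Do not deploy."
    if tier is RiskTier.YELLOW:
        return ("Deploy only with high-quality data, continuous testing, "
                "human oversight, and a fail-safe.")
    if tier is RiskTier.GREEN:
        return "Deploy with standard review."
    return "Unclassified -- triage before any deployment."
```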

How to navigate high-risk AI 

Once a use case is determined to be in the high-risk/yellow-light category, companies should take the following precautions, which are drawn from the European Union Artificial Intelligence Act and the technical companion of the White House’s “Blueprint for an AI Bill of Rights”:

Ensure high-quality, accurate data. The data must be accurate and relevant, and the organization must have the rights to use it.

Embrace continuous testing. Organizations need to commit to continuous pre- and post-deployment testing for algorithmic bias and accuracy to confirm safety, prevent privacy or cybersecurity breaches, and maintain compliance. “AI needs to be watched because it can drift or hallucinate,” Shelton Leipzig said. “You don’t want to wait for a headline to appear and your company is besmirched by AI efforts. We can get ahead of this by simply having continuous testing, monitoring, and auditing.”

Allow for human oversight. If earlier steps reveal deviations from expectations, enlist humans to correct the model and mitigate risk.

Create fail-safes. Companies need to make clear that an AI use case will be halted if deviations can’t be effectively corrected, as sketched in the example below.
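As a rough illustration of how the testing, oversight, and fail-safe precautions above could fit together in practice, here is a minimal Python monitoring sketch. The demographic-parity metric, accuracy floor, and bias ceiling are illustrative assumptions, not thresholds taken from the EU AI Act or the White House blueprint.

```python
# Minimal sketch of a post-deployment monitoring cycle: continuous testing,
# escalation to human oversight, and a fail-safe halt. All metrics and
# thresholds below are illustrative assumptions, not regulatory requirements.

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Largest difference in positive-outcome rates across groups (0 = parity)."""
    rates = [sum(v) / len(v) for v in outcomes_by_group.values() if v]
    return max(rates) - min(rates) if rates else 0.0

def monitor_cycle(accuracy: float, outcomes_by_group: dict[str, list[int]],
                  accuracy_floor: float = 0.90, bias_ceiling: float = 0.10) -> str:
    """Decide what one monitoring cycle implies for the deployment."""
    gap = demographic_parity_gap(outcomes_by_group)
    if accuracy < accuracy_floor or gap > bias_ceiling:
        # Human oversight: deviations are routed to people, not silently patched.
        return "escalate_to_human_review"
    return "continue"

def apply_failsafe(review_outcome: str) -> str:
    """Fail-safe: halt the use case if reviewers cannot correct the deviation."""
    return "halt_use_case" if review_outcome == "uncorrectable" else "resume"

# Example: low accuracy and a large parity gap both trigger escalation.
action = monitor_cycle(0.87, {"group_a": [1, 1, 0, 1], "group_b": [0, 0, 1, 0]})
# action == "escalate_to_human_review"
```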

Even though legislation for AI safeguards is still in flux, Shelton Leipzig cautioned companies not to hold off on adopting these governance steps. AI governance is a team sport: the right stakeholders and team members need to be involved, and the board of directors, general counsel, and CEO must be kept informed at every step.

“Rather than wait until the laws are final, which will probably be a couple of years from now, there’s no need to build AI without these guardrails,” Shelton Leipzig said. The guardrails give companies visibility into what’s going on and ensure that their AI efforts live up to expectations without fines, brand damage, or worse, she added.
