Third-party AI tools pose increasing risks for organizations

By Sara Brown

As artificial intelligence becomes more powerful and widespread, the accompanying risks are also more real and more abundant. Errors or misuse could lead to reputational damage and loss of customer trust, financial losses, regulatory penalties, and even litigation.

To guard against these risks, companies need to develop a responsible AI framework to ensure that AI systems are built and used legally and ethically, in service of the good of individuals and society.

A particular concern is third-party AI — tools or algorithms designed by another company that an organization buys, licenses, or accesses, according to a new research report by MIT Sloan Management Review and Boston Consulting Group.

The report, based on a survey of more than 1,240 executives representing companies in 59 industries and 87 countries, revealed that 78% of organizations use third-party AI tools, and more than half use such tools exclusively. This is a concern, because the report also found that more than half (55%) of all AI failures come from third-party tools. Company leadership might not even know about all the AI tools in use throughout their organizations, a phenomenon known as “shadow AI.”

The report outlined five ways companies can reduce risk from AI, particularly from third-party tools:

1. Move quickly to expand responsible AI programs. The gap is widening between companies that lead in implementing responsible AI programs and those that lag behind. To keep up, organizations should broaden the scale and scope of these programs and ensure they are implemented consistently across the organization rather than on an ad hoc basis.

2. Properly evaluate third-party tools. The use of third-party tools is likely to grow. While there isn’t an easy way to mitigate the risks posed by these tools, organizations should continue to evaluate their use of third-party AI using a variety of methods, including assessing a vendor’s responsible AI practices and adherence to regulatory requirements. The more evaluation methods an organization uses, the more effective its efforts are likely to be: the researchers found that organizations using seven different methods to evaluate third-party tools are more than twice as likely to uncover AI failures as those using only three.

3. Prepare for regulation. Organizations in highly regulated industries appear to have better practices around risk management, which could contribute to better responsible AI outcomes and greater business benefits. More regulation could be on the way, too, as new rules are drafted and begin to take effect at local and national levels. According to the research report, all organizations can benefit from the structured risk management approach of a responsible AI program, particularly when using or integrating third-party AI tools.

4. Engage CEOs in responsible AI efforts. CEO engagement can boost the benefits of responsible AI programs and thus help mitigate the risks of AI use. The research found that when CEOs play an active role in responsible AI through hiring, target setting, or product-level discussions, their companies see 58% more business benefits than organizations with less-involved CEOs.

5. Double down and invest in responsible AI. Most of all, now is not the time to cut back on resources or teams devoted to ethical or responsible AI, or even to merely sustain those efforts at previous levels. AI adoption has soared, and so have the risks associated with the technology.

“In this climate, not investing in [responsible AI] is tantamount to falling behind and exposing your organization to material risk,” the authors write.

Read the 2023 Responsible AI report 
