MIT Sloan Health Systems Initiative

The Quest for Responsible AI in Healthcare: Key Insights from The Coalition for Health AI (CHAI)

On December 2, 2024, HSI, in collaboration with the Martin Trust Center, presented the next seminar in its Sparking the Data Revolution in Healthcare series. Speaker Brian Anderson, MD, CEO of the Coalition for Health AI (CHAI), cast a spotlight on the urgent need for responsible artificial intelligence development and implementation in the healthcare sector. Anderson’s wide-ranging talk touched on crucial aspects such as data diversity, the role of quality assurance labs, the importance of transparency through model cards, and the imperative of training healthcare professionals.

The Evolving AI Landscape: From Predictive to Generative
The seminar underscored the distinction between traditional (predictive) AI and the burgeoning field of generative AI. Traditional AI applications encompass computer vision for cancer detection and risk classification for conditions like sepsis. Generative AI, on the other hand, is making inroads into areas such as back-office optimization, AI scribes for documentation, and streamlining payer-provider engagement. With more than 800 AI-enabled medical devices already approved by the FDA since the 1990s, the integration of AI in healthcare is not new, but it is certainly gaining momentum with the rise of generative AI. This evolution brings both opportunities and challenges that the industry must address.

Addressing the Equity Gap in AI Deployment
A central theme of the discussion revolved around the equitable deployment of AI, encapsulated by Ed Yong’s observation: “Technological solutions tend to rise into society's penthouses, while epidemics seep into its cracks.” Anderson emphasized the risk of AI benefits being concentrated in well-resourced health systems, potentially leaving underserved populations behind. The issue of data bias was also raised, highlighting the current skew toward urban, educated, and Caucasian populations in AI training datasets. Overcoming this bias requires activating diverse datasets from under-resourced health systems, a task made difficult by the lack of data teams and specialized expertise in those areas.

CHAI's Mission: Fostering Responsible AI Practices
The Coalition for Health AI (CHAI) is at the forefront of establishing consensus on responsible AI practices throughout the AI model lifecycle, including development, deployment, and governance/monitoring. Originating during the pandemic to tackle data analytics for monoclonal antibodies and vaccines, CHAI now boasts nearly 4,000 member organizations. These include big tech companies, startups (accounting for 48% of the membership), health systems, payers, life science companies, and patient advocates. CHAI's key focus areas are:

  • Developing technical best practices aligned with the principles of responsible AI: fair, appropriate, valid, effective, and safe (“FAVES”).
  • Establishing a network of Quality Assurance (QA) Labs.
  • Promoting transparency through model cards and a registry platform.
  • Training healthcare professionals on the responsible use of AI tools.

To achieve these goals, CHAI has launched several working groups focused on responsible AI principles, generating open-source technical documents.

The Role of Quality Assurance (QA) Labs
Consumers have come to expect quality assurance and ratings for products in some industries, such as automotive. Drawing on that inspiration, CHAI is championing the creation of Quality Assurance (QA) labs to provide independent testing and validation of AI models. CHAI is in the process of certifying a federated network of more than 32 health systems across the nation to serve as QA labs. The geographic distribution of these labs is intended to ensure representative testing of AI models across diverse patient populations. The certification process emphasizes trustworthiness, competency, and the absence of conflicts of interest, with key criteria including independence from commercial entanglement, skilled personnel, and access to high-quality, regulatory-grade data.

Championing Transparency Through Model Cards
Acknowledging the widespread public distrust of AI, Anderson emphasized transparency as a crucial solution. CHAI is actively working to standardize model cards, akin to nutrition labels, to offer essential information about AI models to potential customers such as health systems and payers. These model cards are set to become a requirement for EHR vendors deploying AI tools, thanks to the HTI-1 rule.
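As a rough illustration of the kind of information a model card conveys, the sketch below represents a minimal, hypothetical card as a Python data structure. The field names and values are illustrative assumptions only; they are not CHAI's published schema or any vendor's actual disclosure.

    # Hypothetical sketch of a model card for an AI tool, loosely modeled on a
    # nutrition label. Field names and example values are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class ModelCard:
        name: str                      # what the model is called
        intended_use: str              # the clinical or operational task it supports
        training_data_summary: str     # where the training data came from, at a high level
        populations_evaluated: list[str] = field(default_factory=list)  # groups included in validation
        known_limitations: list[str] = field(default_factory=list)      # settings where performance may degrade
        performance_summary: str = ""  # headline validation results

    example_card = ModelCard(
        name="Example sepsis risk classifier",
        intended_use="Flag inpatients at elevated risk of sepsis for clinician review",
        training_data_summary="Retrospective EHR data from a small number of academic medical centers",
        populations_evaluated=["Adult inpatients at the developing institutions"],
        known_limitations=["Not validated on pediatric or rural community-hospital populations"],
        performance_summary="Reported AUROC on a held-out internal test set",
    )

A real card would carry considerably more detail, such as bias and fairness testing or maintenance and monitoring plans, but even a skeleton like this makes the "nutrition label" analogy concrete for health system and payer buyers.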

Empowering Healthcare Professionals Through Training
The seminar highlighted the critical need to equip healthcare professionals with the knowledge and skills to use AI tools responsibly and effectively. Training programs should emphasize responsible AI principles and foster the ability to critically evaluate AI tools. CHAI provides a responsible AI guide on its website.

Navigating the Regulatory Landscape
CHAI is adopting a private sector-led approach while maintaining open channels for collaboration with federal and state officials. This collaborative approach recognizes that ensuring safe and effective AI is a bipartisan concern. Recent guidance from the Office for Civil Rights (OCR) has clarified that healthcare providers bear responsibility for discriminatory outcomes resulting from the use of biased AI tools. As of late 2024, the FDA, with its long history of regulating software as a medical device, had been adapting to the dynamic nature of AI models through predetermined change control plans.

Addressing Key Challenges and Questions
Despite the progress being made, the seminar identified several remaining challenges:

  • Lack of Consensus on Metrics: The absence of widespread agreement on how to measure bias or performance for generative AI models.
  • Evaluation Science: The need for further research and development of evaluation science to rigorously assess the quality and impact of AI models.
  • Transparency vs. IP Protection: Striking a balance between the need for transparency and the protection of AI vendors' intellectual property.
  • Aligning with Values: As AI agents become increasingly integrated into daily life, it will be crucial to ensure that they align with individuals' values and contribute to "human flourishing."

The seminar underscored the multifaceted challenges and opportunities in the realm of responsible AI in healthcare. By fostering collaboration, promoting transparency, and prioritizing ethical considerations, stakeholders can pave the way for AI solutions that benefit all members of society.