
Biden's executive order on AI gets provider and payer commitment

Some 28 provider and payer organizations have made a voluntary commitment to move toward the safe, secure and trustworthy purchasing and use of AI.

Susan Morse, Executive Editor


Twenty-eight provider and payer organizations have made voluntary commitments to move toward the safe, secure and trustworthy purchasing and use of AI technology, as outlined in President Biden's Executive Order issued in October, according to the Department of Health and Human Services.

The 28 providers and payers are: Allina Health, Bassett Healthcare Network, Boston Children's Hospital, Curai Health, CVS Health, Devoted Health, Duke Health, Emory Healthcare, Endeavor Health, Fairview Health Systems, Geisinger, Hackensack Meridian, HealthFirst (Florida), Houston Methodist, John Muir Health, Keck Medicine, Main Line Health, Mass General Brigham, Medical University of South Carolina Health, Oscar, OSF HealthCare, Premera Blue Cross, Rush University System for Health, Sanford Health, Tufts Medicine, UC San Diego Health, UC Davis Health and WellSpan Health.

According to HHS, these companies are committing to: 

  • Vigorously developing AI solutions to optimize healthcare delivery and payment by advancing health equity, expanding access, making healthcare more affordable, improving outcomes through more coordinated care, improving patient experience, and reducing clinician burnout.
  • Working with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe (FAVES) AI principles, as established and referenced in HHS' Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule.
  • Deploying trust mechanisms that inform users if content is largely AI-generated and not reviewed or edited by a human.
  • Adhering to a risk management framework that includes comprehensive tracking of applications powered by frontier models and an accounting for potential harms and steps to mitigate them.
  • Researching, investigating, and developing AI swiftly but responsibly.

WHY THIS MATTERS

President Biden's Executive Order on AI outlined dozens of actions, many of them the responsibility of the U.S. Department of Health and Human Services, to ensure what the administration called the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence."

This voluntary commitment helps ensure that AI is deployed safely and responsibly in healthcare, the administration said.

Previous voluntary commitments by private companies had mostly focused on responsible AI among technology developers on the "supply side" of the equation. This included the use of AI in foundation models, medical devices, software applications and electronic health records.

The commitments announced today are from entities on the "demand side": healthcare providers and payers that develop, purchase and implement AI-enabled technology for their own use in healthcare activities. President Biden has previously secured commitments from companies to help advance AI-related goals. In July 2023, 15 companies responsible for many of the most cutting-edge AI models committed to a series of actions designed to promote safety, security and trust – three principles fundamental to the future of AI.

THE LARGER TREND

President Biden's executive order on AI directs HHS to put a mechanism in place to collect reports of "harms or unsafe healthcare practices," act to remedy them, establish a safety program and plan for "safe, secure and trustworthy" artificial intelligence.

Twitter: @SusanJMorse
Email the writer: SMorse@himss.org