AI IN HEALTHCARE LAW CENTER

Artificial Intelligence (AI) is not new to healthcare, but emerging generative AI tools present a host of novel opportunities and challenges for the healthcare sector. For years, our global, cross-practice team and health law leaders have guided innovative companies to use AI and machine learning (ML) in the delivery and support of patient care and the provision of cutting-edge tools in the health and life sciences industries. We continue to closely monitor the state of AI in healthcare to help you maximize the value of data and AI with an eye toward compliance in this rapidly evolving regulatory landscape.

We’ve curated links to key AI-related resources from legislative and executive bodies, government agencies and industry stakeholders around the world in one convenient place so you can stay current on these important issues and actively shape the AI policy landscape. This resource center will be updated with the latest developments, as well as insights and analyses from our team.

Subscribe now to receive updates, and please get in touch with us to discuss how your organization is developing or deploying AI/ML solutions.

OPPORTUNITIES FOR PUBLIC COMMENT

  • February 6, 2025 | NSF Request for Information on the Development of an AI Action Plan
    • Summary: On February 6, 2025, the National Science Foundation (NSF) published a request for information (RFI) on the development of an artificial intelligence (AI) action plan, as directed by President Trump’s executive order, “Removing Barriers to American Leadership in Artificial Intelligence.” The plan must define the priority policy actions needed to sustain and enhance America’s AI dominance and to ensure that unnecessary requirements do not hamper private sector innovation. The RFI seeks input on priority actions to be included in the plan. Responses can address any relevant AI policy topic and are encouraged to include concrete policy options.
  • January 7, 2025 | FDA Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations
    • Summary: FDA issued draft guidance providing lifecycle management and marketing submission recommendations for AI-enabled devices, including software functions. Comments are due by April 7, 2025. See the full summary under Healthcare-Specific Policy and Regulatory Initiatives below.

  • January 7, 2025 | FDA Draft Guidance: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products
    • Summary: FDA issued draft guidance providing recommendations to sponsors on the use of AI to produce information or data intended to support applications or submissions for drugs and biological products. Comments are due by April 7, 2025. See the full summary under Healthcare-Specific Policy and Regulatory Initiatives below.

GOVERNMENT RESOURCES

United States

  • November 8, 2023 | FDA, Health Canada and MHRA Announce Guiding Principles for Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices
    • Summary: FDA, Health Canada and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) announced five jointly identified guiding principles for predetermined change control plans (PCCPs) for machine learning-enabled medical devices. The term PCCP describes a plan, proposed by a manufacturer, that specifies certain planned modifications to a device, the protocol for implementing and controlling such modifications, and the assessment of impacts from such modifications. The document is intended to outline foundational considerations that highlight the characteristics of robust PCCPs and encourage international harmonization. The five guiding principles are: (1) focused and bounded; (2) risk-based; (3) evidence-based; (4) transparent; and (5) total product lifecycle perspective. The principles draw upon the 10 Good Machine Learning Practice (GMLP) guiding principles developed by FDA, Health Canada and MHRA in 2021. Stakeholders can provide feedback through the FDA public docket (FDA-2019-N-1185).
  • Digital Health Center of Excellence
    • Summary: The Digital Health Center of Excellence (DHCoE) was created by FDA to empower stakeholders to advance healthcare by fostering responsible, high-quality digital health innovation. DHCoE is responsible for aligning and coordinating digital health work across FDA. This site provides links to a variety of resources, draft guidance and events related to AI/ML.
  • Artificial Intelligence at CMS
    • Summary: This site offers a starting point for stakeholders interested in any aspect of AI at CMS. It provides links to foundational governance documents on AI, noting that those who wish to engage in AI-related activities, whether as a CMS employee, partner or vendor, should be aware of federal policies regarding the application of AI. The site also provides details on AI programs and initiatives at CMS.
  • Blog Series: Artificial Intelligence & Machine Learning
    • Summary: This blog series explores current and potential uses of AI, predictive models and machine learning algorithms in healthcare, and the role that the Office of the National Coordinator for Health Information Technology (ONC) can play in shaping their development and use. Topics covered include increasing the transparency and trustworthiness of AI in healthcare, weighing the risks and rewards of machine learning, and the risks these technologies pose to patients.
  • NTIA.gov
    • Summary: This site serves as a starting point for stakeholders interested in the policymaking activities of the National Telecommunications and Information Administration (NTIA), which principally focus on developing the policies needed to verify that AI systems work as they claim.
  • Trustworthy & Responsible AI Resource Center
    • Summary: This platform supports and operationalizes the National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) and its accompanying playbook. It provides a repository of foundational content, technical documents and AI toolkits, including standards, measurement methods and metrics, and data sets. Over time, it will provide an interactive space that enables stakeholders to share AI RMF case studies and profiles, educational materials and technical guidance related to AI risk management.
  • January 26, 2023 | NIST AI Risk Management Framework
    • Summary: On January 26, 2023, NIST released its Artificial Intelligence Risk Management Framework (AI RMF). The framework is designed to equip stakeholders with approaches that increase the trustworthiness of AI systems and to help foster the responsible design, development, deployment and use of AI systems over time. The AI RMF is intended to be a living document, reviewed frequently and updated as necessary. An accompanying playbook provides suggested actions for achieving the outcomes laid out in the framework.

HEALTHCARE-SPECIFIC POLICY AND REGULATORY INITIATIVES

  • January 7, 2025 | Draft Guidance: Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations
    • Summary: The Food and Drug Administration (FDA) issued draft guidance providing lifecycle management and marketing submission recommendations for AI-enabled devices, including software functions, consistent with its long-promoted total product life cycle (TPLC) approach to medical device oversight. The draft guidance recommends that submissions for AI-enabled devices include the following information:
      • Description of the device and its user interface to support FDA’s understanding of intended use, expected clinical workflow, use environment, features and design;
      • A comprehensive risk assessment to ensure safety and efficacy;
      • A clear explanation of data management practices, including training sets and methodology for mitigating AI bias;
      • Information regarding the model design, including its biases and limitations;
      • A performance validation to ensure predictable and reliable function;
      • A post-market performance monitoring plan to identify and respond to changes;
      • Elements and explanations of cybersecurity risk management and testing; and
      • A public submission summary to enable transparency to the public via a “Model Card.”
    • Comments are due by April 7, 2025. For more information regarding this draft guidance, read our On the Subject.
  • January 7, 2025 | Draft Guidance: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products
    • Summary: The Food and Drug Administration (FDA) issued draft guidance providing recommendations to sponsors on the use of AI to produce information or data intended to support applications or submissions for drugs and biological products. The draft guidance outlines a high-level, seven-step, risk-based process for sponsors to establish and assess the credibility of an AI model’s output. The draft guidance does not provide extensive details on each step; rather, it states that the agency envisions interactive feedback and communication with interested parties will shape the framework. Accordingly, FDA strongly encourages sponsors to engage with the agency early in the drug development process to set expectations and identify potential challenges. Comments are due by April 7, 2025.
  • May 10, 2023 | CDER/CBER/CDRH Paper on Using Artificial Intelligence and Machine Learning (AI/ML) in the Development of Drugs and Biologics
    • Summary: This discussion paper was released to help inform potential future rulemaking on the use of AI/ML in drug development. The paper describes current and potential uses of AI/ML in drug discovery, clinical and non-clinical research, postmarket surveillance and advanced pharmaceutical manufacturing, and raises several questions for stakeholder input. The Federal Register notice can be found here. Comments were due August 9, 2023.
  • April 3, 2023 | Draft Guidance Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions
    • Summary: This draft guidance includes recommendations on the information to be included in a predetermined change control plan (PCCP) submitted in marketing submissions for AI/ML-enabled devices. The purpose of a PCCP is to account for certain planned or expected device modifications that would otherwise require a premarket approval supplement, a new de novo submission or a new 510(k) under applicable regulations. Comments were due July 3, 2023. See our summary of the draft guidance here.
  • December 13, 2023 | ONC Finalizes HTI-1 Rule
    • Summary: On December 13, 2023, the US Department of Health and Human Services (HHS) Office of the National Coordinator for Health Information Technology (ONC) issued the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule to update ONC Health IT Certification Program requirements and amend the information blocking regulations that ONC issued under the 21st Century Cures Act (Cures Act). The HTI-1 final rule substantially finalizes policies ONC set forth in the HTI-1 proposed rule but does not finalize the controversial proposal on patient-requested restrictions for certain data uses and disclosures (sometimes referred to as data segmentation). Read our +Insight here.
  • April 11, 2023 | Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Proposed Rule
    • Summary: This proposed rule includes proposals to “promote greater trust in the predictive decision support interventions (DSIs) used in healthcare to…enable users to determine whether predictive DSI is fair, appropriate, valid, effective and safe.” The proposed transparency, documentation and risk management requirements impact developers that participate in the ONC Health IT Certification Program and those that create predictive DSIs that are enabled by or interface with certified Health IT Modules. The proposed rule was published in the Federal Register on April 18, 2023. Comments were due on June 20, 2023.
  • January 2022 | AI Showcase: Seizing the Opportunities and Managing the Risks of Use of AI in Health IT
    • Summary: This showcase spotlights how federal agencies and industry partners are championing the design, development and deployment of responsible, trustworthy AI in health IT. Speakers included representatives from numerous federal agencies, Congress, the American Medical Association, academic health and research centers, and industry. The agenda, presentation slides and event recording are available to the public.
  • January 18, 2024 | WHO Releases AI Ethics and Governance Guidance for LMMs for Health
    • Summary: The World Health Organization (WHO) released new guidance on the ethics and governance of large multi-modal models (LMMs) used in healthcare. As LMMs are starting to be used for more specific health-related purposes, WHO outlined a variety of risks, such as inaccurate or biased information, models trained on poor-quality or biased data, privacy risks, inaccessibility and unaffordability, automation bias, and cybersecurity risks, among others. Accordingly, the guidance outlines more than 40 recommendations for consideration by governments, technology companies and healthcare providers to promote the appropriate use of LMMs to protect population health.
  • October 19, 2023 | WHO Outlines Considerations for Regulation of AI for Health
    • Summary: WHO released a new publication listing key regulatory considerations for the use of AI for health, responding to the growing need to responsibly manage proliferating AI technologies in healthcare. It highlights 18 considerations across six categories: (1) documentation and transparency; (2) risk management and an AI systems development lifecycle approach; (3) intended use and analytical and clinical validation; (4) data quality; (5) privacy and data protection; and (6) engagement and collaboration. The publication emphasizes the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders. It is intended as a resource for all relevant stakeholders in the medical device ecosystem, including developers exploring and developing AI systems, regulators identifying approaches to manage and facilitate AI systems, manufacturers designing and developing AI-embedded medical devices, and the health practitioners who deploy and use such devices and AI systems. The full publication can be accessed here.
  • May 16, 2023 | WHO Calls for Safe and Ethical AI for Health
    • Summary: WHO released a statement calling for caution in the use of AI large language model (LLM) tools, such as ChatGPT, for health-related purposes. WHO expressed concern that the caution that would normally be exercised for any new technology is not being exercised consistently with LLM tools, including adherence to the key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation, and enumerated several areas of concern with the use of such technology. WHO proposed that these concerns be addressed, and clear evidence of benefit be measured, before the widespread use of LLM tools in routine healthcare and medicine, whether by individuals, care providers, or health system administrators and policymakers.
  • June 28, 2021 | Report: Ethics and Governance of Artificial Intelligence for Health
    • Summary: The report identifies the ethical challenges and risks with the use of AI in healthcare, as well as six consensus principles to ensure AI works to the public benefit of all countries. The report is the product of input from experts in ethics, digital technology, law, human rights and health ministries. The WHO report includes a set of recommendations to governments and developers for oversight of AI in the delivery of healthcare, seeking to hold all stakeholders – in the public and private sectors – accountable and responsive to the healthcare workers who will rely on AI and the communities and individuals whose health will be affected by its use.

INDUSTRY-AGNOSTIC POLICY INITIATIVES

United States

  • January 13, 2025 | California Attorney General Issues Legal Advisories on AI
    • Summary: On January 13, 2025, California Attorney General (AG) Rob Bonta issued two legal advisories: one for businesses generally (General Advisory) and one specific to healthcare entities (Health Advisory). These advisories identify existing California laws that already apply to artificial intelligence (AI). They demonstrate the AG’s willingness to push the limits of applying existing law to AI and emphasize the need for businesses to follow several key principles:
      • Use AI responsibly, ethically, and safely.
      • Understand how AI systems are trained, what data they use, how they generate outputs, and the risks to individuals, the environment, competition, and the public.
      • Be transparent about when AI is used and when data is used to train AI.
      • Test, validate, and audit AI systems.
    • Read our On the Subject for the full breakdown of these advisories.
  • January 23, 2025 | Executive Order: Removing Barriers to American Leadership in Artificial Intelligence
    • Summary: The Trump White House issued an executive order (EO), “Removing Barriers to American Leadership in Artificial Intelligence,” revoking former President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO focuses on several key points:
      • Sustaining and Enhancing AI Dominance: The plan aims to maintain and boost the United States’ leadership in AI to promote human flourishing, economic competitiveness, and national security.
      • Eliminating Ideological Bias: It emphasizes the need for AI systems to be free from ideological bias and engineered social agendas.
      • Revising Policies: Departments and agencies are instructed to revise or rescind policies, directives, regulations, and orders that hinder AI innovation.
      • Developing an AI Action Plan: Within 180 days, an AI Action Plan will be developed to outline specific actions and strategies to achieve these goals. The following parties were identified to lead development of the action plan: Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies as the APST and APNSA deem relevant.
  • July 26, 2024 | Department of Commerce Releases New Guidance and Software Following President Biden’s Executive Order on AI
    • Summary: Following President Biden’s Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Department of Commerce released new guidance and software to help improve the safety, security and trustworthiness of AI systems, including the following:
      • The Department’s National Institute of Standards and Technology (NIST) released two new items:
          1. NIST’s AI Safety Institute released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outline voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety and national security. The guidelines offer seven key approaches for mitigating the risk that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. Comments are being accepted until September 9, 2024, at 11:59 p.m. at NISTAI800-1@nist.gov.
          2. A testing platform designed to help AI system users and developers measure how certain types of attacks can degrade the performance of an AI system. The open-source software is available for free download.
      • NIST also released three finalized guidance documents that were first released in April 2024 for public comment:
          1. The AI RMF Generative AI Profile (NIST AI 600-1), which can help organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. It is intended to be a companion resource for users of NIST’s AI RMF.
          2. The Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), designed to be used alongside the Secure Software Development Framework (SP 800-218) to address the training and use of AI systems.
          3. A Plan for Global Engagement on AI Standards (NIST AI 100-5), designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
      • The Department’s U.S. Patent and Trademark Office (USPTO) issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI.

INDUSTRY COMMENTARY

    • Summary: The Coalition for Health AI, an alliance of major health systems and tech companies, has released a blueprint for building trust in the use of artificial intelligence in healthcare. The blueprint focuses on framing risks, measuring impacts, allocating risk resources and strong governance.
    • Summary: The Connected Health Initiative’s Health AI Task Force released recommended principles to guide policymakers in taking action on AI. The principles cover areas such as research, quality assurance and oversight, access and affordability, bias, education, collaboration and interoperability, workforce issues, ethics, and privacy and security.

PAST EVENTS

The first installment of McDermott’s webinar series exploring tech trends impacting healthcare addresses how healthcare organizations can overcome uncertainty when navigating AI innovation, and how best to implement and govern AI while remaining sensitive to the associated risks. Moving beyond general calls for AI governance, the panel provides practical steps organizations can take to assess AI tech opportunities, implement ongoing tech evaluation and quality control programs, build integrated compliance and engineering teams, and begin building a scalable, built-for-growth AI compliance framework.

During this webinar, McDermott and experts from McKinsey & Company explored the latest healthcare AI developments impacting investors and portfolio companies. We discussed a range of topics on AI, including the regulatory landscape and policy outlook, how investors can think about the next phase of investing in AI, and working through legal pain points. McKinsey speakers highlighted AI applications, use cases, and trends in the healthcare industry. The broader discussion also included considerations for AI governance and data rights, as well as value creation and value preservation in AI from a risk management standpoint.

2024 is shaping up to be the year that many hospitals and health systems turn their artificial intelligence (AI) planning into action, making it a pivotal time to establish effective AI governance. During this webinar, we discuss considerations for hospitals and health systems implementing AI tools, spanning governance, research, data use and employment.

Healthcare policy leaders from McDermott+Consulting share their insights on the state of generative AI oversight by Congress and federal agencies and how companies can actively participate in the burgeoning AI policymaking process and the development of regulations governing AI in healthcare. They also provide tips on securing Medicare coverage for such innovative offerings.

Learn how healthcare providers and health services companies can seize the opportunities presented by generative AI and large language models while navigating the industry’s fragmented and complex regulatory landscape. We explore which regulations from the US, EU and UK you should be watching, examine the types of liability raised by health AI, and offer practical steps your organization can take to develop, implement and acquire AI solutions.

As AI’s potential role in healthcare expands at an unprecedented rate, it is increasingly clear that no one-size-fits-all policy can address the complexities of this dynamic space. During this webinar, digital health strategists and algorithmic auditors from O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), an algorithmic risk consulting firm for AI and automated systems, unravel the challenges and provide the tools you need to conceptualize and manage these risks effectively.

AI IN HEALTHCARE RESOURCE CENTER ARCHIVES

GET IN TOUCH