Overview
On March 15, 2024, four of the US Food and Drug Administration's (FDA) medical products centers released a joint paper, titled "Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together," outlining how the FDA intends to approach the use of artificial intelligence (AI) across the medical product life cycle. The paper was issued by the FDA's Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH) and the Office of Combination Products (OCP) (collectively, the Centers). In it, the Centers discuss plans for forthcoming FDA actions and guidance that the Centers believe balance safeguarding public health with promoting innovation.
Building on our previous coverage of the FDA’s draft predetermined change control plan for machine-learning (ML)-enabled device software functions and Good Machine Learning Practice Guiding Principles, this On The Subject highlights key FDA actions announced in the joint paper and shares insights on what the joint paper reveals about the Centers’ plans for regulating AI.
In Depth
In the joint paper, the Centers highlight the need for greater regulatory clarity and collaboration across the sector on standards, best practices and tools for evaluating AI used in medical products. The Centers point out that risk management and monitoring of AI technologies must occur throughout the product life cycle to be effective and acknowledge the importance of risk-based approaches to regulation.
The Centers home in on four focus areas for the development and use of AI across the medical product life cycle:
1. Foster Collaboration To Safeguard Public Health
The Centers note the need to collaborate and coordinate with stakeholders to develop consistent regulatory approaches and standards for the use of AI in medical products. The Centers promise to work closely with international regulators and a wide range of stakeholders when developing standards for AI evaluation and monitoring. The Centers also seek to promote educational initiatives that support parties across the medical product life cycle who are trying to navigate the responsible use of AI in medical products.
2. Advance the Development of Regulatory Approaches That Support Innovation
The Centers promise to monitor trends and identify knowledge gaps with the goal of providing timely, clear guidance on the use of AI in medical products and supporting efforts to develop regulatory science methodologies for evaluating AI systems. This includes methods for identifying and mitigating bias and ensuring AI algorithms remain robust in response to changing clinical inputs and conditions. The Centers also intend to issue additional guidance on AI-enabled medical products, including:
- Final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions (the draft guidance was issued on April 3, 2023)
- Draft guidance on life cycle management considerations and premarket submission recommendations for AI-enabled device software functions
- Draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products
3. Promote the Development of Standards, Guidelines, Best Practices and Tools for the Medical Product Life Cycle
The Centers intend to build on the previously issued Good Machine Learning Practice Guiding Principles to further refine and develop standards, best practices and tools for medical products using AI in their development life cycle. This will include refining metrics for AI transparency, safety and cybersecurity. Developing best practices for ensuring the data used to train AI systems is fit for use (e.g., representative of the target population) is another focus for the Centers. Finally, the Centers intend to identify best practices and develop frameworks for long-term, real-world monitoring and quality assurance of AI-enabled medical products and tools.
4. Support Research Related to the Evaluation and Monitoring of AI Performance
Finally, the Centers will support demonstration projects related to AI monitoring and evaluation. The Centers highlighted projects that identify points where bias can be introduced in the AI life cycle and tools for addressing it; projects targeting sources of health inequity related to AI and the representativeness of data used in AI systems; and projects that develop quality assurance and ongoing monitoring mechanisms for AI tools.
Key Takeaways
The joint paper demonstrates that the FDA recognizes the need to provide additional guidance on the use of AI in medical products. The FDA has already released a discussion paper on AI in drug manufacturing, the AI/ML SaMD Action Plan and the Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles. The joint paper, however, clearly signals that more guidance is coming. Notably, the inclusion of the OCP in the paper suggests the FDA is cognizant that effective regulation of AI will require coordination across the jurisdictions of CDER, CBER and CDRH.
At the same time, the Centers acknowledge that this requires further development of the standards and tools used to evaluate AI systems and measure their performance. The joint paper suggests the FDA will provide the opportunity for stakeholders across the medical product life cycle to participate in shaping best practices and guidelines for monitoring AI-enabled medical products.
The joint paper is also part of broader federal action related to AI. It comes at a time of increased interest (and scrutiny) from US Congress on how federal agencies will balance AI regulation with continuing to spur innovation, a tension that is acutely felt in the healthcare space. As policymakers continue to grapple with the appropriate level of regulation, and as federal agencies continue to work on implementing the October 30, 2023, executive order on AI, stakeholders should analyze the implications for their operations. With respect to this joint paper specifically, developers, investors and users of AI-enabled medical products should begin considering their approach to AI to best position themselves to participate in shaping the FDA's evolving framework for regulating AI in medical products and to comply with guidance as it develops.
For more information on recent FDA guidance on AI in medical products, contact the authors of this article or any other member of McDermott’s Food, Drug & Medical Device Regulatory Group. Additional resources are also available on McDermott’s AI in Healthcare resource center.