AI In Healthcare Resource Center - McDermott Will & Emery

AI IN HEALTHCARE

LAW CENTER

Artificial Intelligence (AI) is not new to healthcare, but emerging generative AI tools present a host of novel opportunities and challenges for the healthcare sector. For years, our global, cross-practice team and health law leaders have guided innovative companies to use AI and machine learning (ML) in the delivery and support of patient care and the provision of cutting-edge tools in the health and life sciences industries. We continue to closely monitor the state of AI in healthcare to help you maximize the value of data and AI with an eye toward compliance in this rapidly evolving regulatory landscape.

We’ve curated links to key AI-related resources from legislative and executive bodies, government agencies and industry stakeholders around the world in one convenient place so you can stay current on these important issues and actively shape the AI policy landscape. This resource center will be updated with the latest developments, as well as insights and analyses from our team.

Subscribe now to receive updates, and please get in touch with us to discuss how your organization is developing or deploying AI/ML solutions.

GOVERNMENT RESOURCES

United States

  • HHS Chief AI Officer Website
    • Summary: The US Department of Health and Human Services (HHS) Office of the Chief Artificial Intelligence Officer (OCAIO) aims to facilitate effective collaboration on AI efforts across HHS agencies and offices. The site outlines HHS’s AI strategy, highlights AI accomplishments and priorities at HHS, and provides an inventory of HHS AI use cases. The site also provides a collection of AI-focused laws, regulations, executive orders and memoranda driving HHS’s AI efforts.
  • August 2023 | HHS Releases Information on Growing Artificial Intelligence Use Case Inventory
    • Summary: Executive Order 13960, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” requires agencies to prepare an inventory of non-classified and non-sensitive current and planned AI use cases. The agency’s updated FY 2023 AI use case inventory shows 163 instances of the technology being operated, implemented, or developed and acquired by the agency. For example, CMS is in the “initiation” stage for a drug cost anomaly detection tool and the “operation and maintenance” stage for a Risk Adjustment Payment Integrity Determination System; the FDA is in the “implementation” stage for an AI-based natural language processing tool for FDA labeling documents. A breakdown of the number of AI tools used by HHS, by agency, is provided below. A downloadable CSV file with the full inventory of tools is available here.
  • September 2021 | HHS Trustworthy AI Playbook
    • Summary: Published by the HHS OCAIO in September 2021, the Trustworthy AI Playbook includes HHS-specific guidance on major trustworthy AI concepts and how HHS leaders can confidently develop, use and deploy AI solutions. The playbook also lists the current statutory authorities that HHS believes it can use to regulate AI in healthcare (see Appendix III in the linked playbook). OCAIO also released an executive summary of the playbook. Of particular interest is the below graphic showing HHS’s directive to staff on how to regulate AI.
      HHS Regulatory Considerations
  • November 8, 2023: FDA, Health Canada and MHRA Announce Guiding Principles for Predetermined Change Control Plans for Machine-Learning Enabled Medical Devices
    • Summary: FDA, Health Canada, and the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) announced the five jointly identified guiding principles for predetermined change control plans (PCCPs) for machine-learning enabled devices. The term PCCP describes a plan, proposed by a manufacturer, that specifies certain planned modifications to a device, the protocol for implementing and controlling such modifications, and the assessment of impacts from such modifications. The document is intended to outline foundational considerations that highlight the characteristics of robust PCCPs and encourage international harmonization. The five guiding principles are: (1) focused and bounded; (2) risk-based; (3) evidence-based; (4) transparent; and (5) total product lifecycle perspective. The five guiding principles for PCCPs draw upon the 10 Good Machine Learning Practice (GMLP) guiding principles developed by FDA, Health Canada, and MHRA in 2021. Stakeholders can provide feedback through the FDA public docket (FDA-2019-N-1185).
  • Digital Health Center of Excellence
    • Summary: The Digital Health Center of Excellence (DHCoE) was created by FDA to empower stakeholders to advance healthcare by fostering responsible and high-quality digital health innovation. DHCoE is responsible for aligning and coordinating digital health work across the FDA. This site provides links to a variety of resources, draft guidance and events related to AI/ML.
  • Artificial Intelligence at CMS
    • Summary: This site offers a starting point for stakeholders interested in any aspect of AI at CMS. It provides links to foundational governance documents on AI, noting that those who wish to engage in AI-related activities, whether as a CMS employee, partner or vendor, should be aware of federal policies regarding the application of AI. The site also provides details on AI programs and initiatives at CMS.
  • Blog Series: Artificial Intelligence & Machine Learning
    • Summary: This blog series explores current and potential uses of AI, predictive models and machine learning algorithms in healthcare, and the role that ONC can play in shaping their development and use. Topics covered include increasing transparency and trustworthiness of AI in healthcare, minimizing risks and rewards of machine learning, and risks posed to patients by these technologies.
  • January 25, 2024 | FTC Hosts Virtual Summit on Artificial Intelligence
    • Summary: The FTC Office of Technology (OT) hosted a Tech Summit to bring together a diverse set of perspectives across academia, industry, civil society organizations and government agencies for a series of conversations on AI. The summit featured remarks from FTC regulators and panels on chips and cloud infrastructure, data and models, and consumer applications. Key takeaways include:
      • FTC announced that it is issuing compulsory orders to five major technology companies requiring them to provide information about their AI partnerships.
      • Speakers were concerned about consolidation in the chips and cloud infrastructure markets.
      • Data privacy and model transparency were emphasized as important antitrust enforcement areas.
      • Regulators tied market consolidation to lower quality and higher cost products, privacy violations and discrimination.

    Contact our team for a complete summary of the virtual summit.

  • January 9, 2024 Technology Blog | AI Companies: Uphold Your Privacy and Confidentiality Commitments
    • Summary: FTC discusses “model-as-a-service” companies, which develop and host AI models and make them available to third parties via an end-user interface or an application programming interface. FTC stated in the blog post that, like most AI companies, model-as-a-service companies have a continuous appetite for data to develop new or customer-specific models or refine existing ones. FTC warns that this appetite can be at odds with a company’s obligation to protect users’ data, undermining people’s privacy or resulting in the appropriation of a firm’s competitively significant data. The blog warns that model-as-a-service companies that fail to abide by their privacy commitments to their users and customers (regardless of how or where such commitments were made), or that engage in misrepresentations, material omissions or misuse of data, may be liable under the laws enforced by FTC. The blog provides several examples of FTC enforcement actions stemming from such conduct.
  • May 1, 2023 Business Blog | The Luring Test: AI and the Engineering of Consumer Trust
    • Summary: FTC provides tips for mitigating risk of FTC Act violations arising from unfairness in the development or use of AI tools.
  • March 20, 2023 Business Blog | Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale
    • Summary: FTC discusses deception in generative AI tools and key questions developers should ask.
  • February 27, 2023 Business Blog | Keep Your AI Claims in Check
    • Summary: FTC offers insight into its thought process when evaluating whether AI marketing claims are deceptive.
  • June 16, 2022 Report to Congress | Combatting Online Harms Through Innovation
    • Summary: On June 16, 2022, the FTC submitted its report to Congress required by the 2021 Appropriations Act discussing whether and how AI could be used to identify, remove or take other appropriate action to address online harms. The report details seven areas within the FTC’s jurisdiction in which AI could be useful in combatting online harms and provides recommendations on reasonable policies, practices and procedures for such uses and for any legislation that may advance them.
  • April 19, 2021 Business Blog | Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI
    • Summary: FTC highlights key pillars in managing consumer protection risks associated with AI and algorithms, including transparency, explainability, fairness and accountability.
  • April 8, 2020 Business Blog | Using Artificial Intelligence and Algorithms
    • Summary: FTC offers tips for the truthful, fair and equitable development and use of AI and identifies Section 5 of the FTC Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act as statutory authorities for FTC enforcement activity.
  • NTIA.gov
    • Summary: This site serves as a starting point for stakeholders interested in NTIA policymaking activities, which are principally focused on helping to develop policies necessary to verify that AI systems work as they claim.
  • NIST.gov
    • Summary: This site serves as a landing page for stakeholders interested in NIST’s research and standards development activities regarding AI.
    • February 8, 2024 | Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety
      • Summary: On February 8, 2024, U.S. Secretary of Commerce Gina Raimondo announced the creation of the U.S. AI Safety Institute Consortium (AISIC). Housed under the NIST U.S. AI Safety Institute (USAISI), AISIC seeks to advance the safe and trustworthy development and deployment of artificial intelligence (AI) as outlined in President Biden’s landmark Executive Order. According to the announcement, AISIC will be involved in developing “guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.” AISIC’s membership includes over 200 member companies and organizations, including state and local governments and non-profit organizations.
    • December 19, 2023 | NIST Issues Request for Information to assist in the Safe, Secure, and Trustworthy Development and Use of AI
      • Summary: On December 19, 2023, NIST issued a Request for Information (RFI) to assist in its efforts to carry out several of its responsibilities under the Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. According to the release, the responses to the RFI will support NIST’s efforts to evaluate capabilities relating to AI technologies and develop a variety of guidelines called for in the Executive Order. The RFI specifically calls for information related to generative AI risk management, AI evaluation, red-teaming, reducing the risk of synthetic content, and the development and implementation of AI-related consensus standards, cooperation and coordination. Comments may be submitted via the Federal e-Rulemaking Portal, email or mail. Responses are due February 2, 2024.
    • NIST’s Responsibilities Under the October 30, 2023 Executive Order
      • Summary: This site provides an overview of NIST’s responsibilities under President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order directs NIST to develop guidelines and best practices to establish consensus-based industry standards that promote the development and deployment of safe, secure, and trustworthy AI systems, which NIST will carry out through several different activities as detailed on the site. NIST will release more information about its role under the Executive Order on November 9, 2023.
      • In addition to working with government agencies, NIST intends to engage with the private sector, academia, and civil society as it produces the guidance called for by the Executive Order. These activities will build and expand on existing NIST efforts in several of the areas covered in the Executive Order, such as generative AI and AI risk management. As part of this engagement, NIST is calling for participants in a new consortium supporting AI safety. The Artificial Intelligence Safety Institute Consortium will help develop tools to measure and improve AI safety and trustworthiness. Interested organizations with relevant technical capabilities should submit a letter of interest by December 2, 2023. More details on NIST’s request for collaborators are available in the Federal Register. NIST plans to host a workshop on November 17, 2023, for those interested in learning more about the consortium and engaging in the conversation about AI safety.
    • June 22, 2023, National Institute of Standards and Technology | Generative AI Public Working Group
      • Summary: U.S. Secretary of Commerce Gina Raimondo announced that the National Institute of Standards and Technology (NIST) is launching a new public working group on AI to help NIST develop key guidance to support organizations in addressing the special risks, opportunities and challenges associated with generative AI technologies. The public working group will comprise volunteers from the private and public sectors. Those interested in joining the NIST Generative AI Public Working Group should complete this form. Registration closes July 9, 2023.
    • Trustworthy & Responsible AI Resource Center
      • Summary: This platform supports and operationalizes the NIST AI Risk Management Framework (AI RMF) and accompanying playbook. It provides a repository of foundational content, technical documents, and AI toolkits, including standards, measurement methods and metrics, and data sets. Over time, it will provide an interactive space that enables stakeholders to share AI RMF case studies and profiles, educational materials and technical guidance related to AI risk management.
    • January 26, 2023 | NIST AI Risk Management Framework
      • Summary: On January 26, 2023, NIST released its “Artificial Intelligence Risk Management Framework (AI RMF)”. The framework is designed to equip stakeholders with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment and use of AI systems over time. The AI RMF is intended to be a living document that is to be reviewed frequently and updated as necessary. The framework has an accompanying playbook that provides suggested actions for achieving the outcomes laid out in the framework.

HEALTHCARE-SPECIFIC POLICY AND REGULATORY INITIATIVES

  • March 1, 2023 | CDER Paper/RFI on Artificial Intelligence in Drug Manufacturing (FR Notice)
    • Summary: On March 1, 2023, CDER issued a discussion paper soliciting public comments on areas for consideration and policy development associated with the application of AI to pharmaceutical manufacturing. The discussion paper includes a series of questions to stimulate feedback from the public, including CDER and CBER stakeholders. The comment period closed on May 1, 2023.
  • September 28, 2022 | Final Guidance “Clinical Decision Support Software: Guidance for Industry and Food and Drug Administration Staff”
    • Summary: This final guidance provides clarity on FDA’s oversight and regulatory approach regarding clinical decision support (CDS) software intended for healthcare professionals and the types of CDS functions that do not meet the definition of a device as amended by the 21st Century Cures Act. See our summary of the draft guidance here.
  • April 11, 2023 | Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Proposed Rule
    • Summary: This proposed rule includes proposals to “promote greater trust in the predictive decision support interventions (DSIs) used in healthcare to…enable users to determine whether predictive DSI is fair, appropriate, valid, effective and safe.” The proposed transparency, documentation and risk management requirements impact developers that participate in the ONC Health IT Certification Program and those that create predictive DSIs that are enabled by or interface with certified Health IT Modules. The proposed rule was published in the Federal Register on April 18, 2023. Comments were due on June 20, 2023.
  • January 2022 | AI Showcase: Seizing the Opportunities and Managing the Risks of Use of AI in Health IT
    • Summary: This showcase spotlights how federal agencies and industry partners are championing the design, development and deployment of responsible, trustworthy AI in health IT. Speakers included representatives from numerous federal agencies, Congress, the American Medical Association, academic health and research centers, and industry. The agenda, presentation slides and event recording are available to the public.
  • January 18, 2024 | WHO Releases AI Ethics and Governance Guidance for LMMs for Health
    • Summary: WHO released new guidance on the ethics and governance of large multimodal models (LMMs) used in healthcare. As LMMs are starting to be used for more specific health-related purposes, WHO outlined a variety of risks, such as inaccurate or biased information, models trained on poor-quality or biased data, privacy risks, inaccessibility and unaffordability, automation bias, and cybersecurity risks, among others. Accordingly, the guidance outlines more than 40 recommendations for consideration by governments, technology companies and healthcare providers to promote the appropriate use of LMMs and protect population health.
  • October 19, 2023 | WHO Outlines Considerations for Regulation of AI for Health
    • Summary: WHO released a new publication listing key regulatory considerations for the use of AI for health in response to the growing need to responsibly manage the proliferating AI technologies in healthcare. It highlights 18 considerations across the following six categories: (1) documentation and transparency; (2) risk management and AI systems development lifecycle approach; (3) intended use and analytical and clinical validation; (4) data quality; (5) privacy and data protection; and (6) engagement and collaboration. The publication emphasizes the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders. The publication is intended as a listing of key regulatory considerations and a resource for all relevant stakeholders in the medical device ecosystem, including developers who are exploring and developing AI systems, regulators who might be in the process of identifying approaches to manage and facilitate AI systems, manufacturers who design and develop AI-embedded medical devices, health practitioners who deploy and use such medical devices and AI systems, and others working in these areas. The full publication can be accessed here.
  • May 16, 2023 | WHO Calls for Safe and Ethical AI for Health
    • Summary: The WHO released a statement calling for caution in using large language model (LLM) tools such as ChatGPT for health-related purposes. The WHO expressed concern that the caution that would normally be exercised with any new technology is not being exercised consistently with LLM tools, including adherence to the key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation, and enumerated several areas of concern with the use of such technology. The WHO proposed that these concerns be addressed, and clear evidence of benefit be demonstrated, before such tools see widespread use in routine healthcare and medicine, whether by individuals, care providers or health system administrators and policymakers.
  • June 28, 2021 | Report: Ethics and Governance of Artificial Intelligence for Health
    • Summary: The report identifies the ethical challenges and risks with the use of AI in healthcare, as well as six consensus principles to ensure AI works to the public benefit of all countries. The report is the product of input from experts in ethics, digital technology, law, human rights and health ministries. The WHO report includes a set of recommendations to governments and developers for oversight of AI in the delivery of healthcare, seeking to hold all stakeholders – in the public and private sectors – accountable and responsive to the healthcare workers who will rely on AI and the communities and individuals whose health will be affected by its use.

INDUSTRY AGNOSTIC POLICY INITIATIVES

United States

    • Summary: Following President Biden’s Executive Order (EO) on the Safe, Secure and Trustworthy Development of AI, the Department of Commerce released new guidance and software to help improve the safety, security and trustworthiness of AI systems, including the following:
      • The Department’s National Institute of Standards and Technology (NIST) released two new items:
          1. NIST’s AI Safety Institute has released the initial public draft of its guidelines on Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), which outlines voluntary best practices for how foundation model developers can protect their systems from being misused to cause deliberate harm to individuals, public safety and national security. The guidelines offer seven key approaches for mitigating risks that models will be misused, along with recommendations for how to implement them and how to be transparent about their implementation. Comments are being accepted until Sept. 9, 2024, at 11:59 p.m. to NISTAI800-1@nist.gov.
          2. A testing platform designed to help AI system users and developers measure how certain types of attacks can degrade the performance of an AI system. The open-source software is available for free download.
      • NIST also released three finalized guidance documents that were first released in April for public comment:
          1. The AI RMF Generative AI Profile (NIST AI 600-1), which can help organizations identify unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. This is intended to be a companion resource for users of NIST’s AI RMF.
          2. The Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), designed to be used alongside the Secure Software Development Framework (SP 800-218) to address the training and use of AI systems.
          3. A Plan for Global Engagement on AI Standards (NIST AI 100-5), designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing.
      • The Department’s U.S. Patent and Trademark Office (USPTO) issued a guidance update on patent subject matter eligibility to address innovation in critical and emerging technologies, including AI.
    • Summary: The US Department of Commerce announced three actions it is taking to implement President Biden’s Executive Order on the Safe, Secure and Trustworthy Development of AI (EO). First, the department’s National Institute of Standards and Technology (NIST) released four draft publications, including draft guidance documents, intended to improve the safety, security and trustworthiness of AI systems. Second, NIST launched NIST GenAI, a challenge series to support development of methods to distinguish between human-produced and AI-produced content. Third, the department’s US Patent and Trademark Office (USPTO) published a request for public comment on how AI could affect whether an invention is patentable under US law, given changes in how the level of ordinary skill in the art is evaluated. The full Department of Commerce announcement is available here.
    • Summary: The White House announced that federal agencies completed all 180-day actions mandated by President Biden’s October 2023 Executive Order (EO) on schedule. The announcement included an overview of key agency actions undertaken in three major categories: managing risks to safety and security; standing up for workers, consumers and civil rights; and harnessing AI for good. Notable actions include the publication of guidance by the US Department of Health and Human Services (HHS) establishing guardrails for the responsible use of AI in its benefits programs; promulgation of a final rule clarifying that nondiscrimination requirements in health programs apply to the use of AI; and the development of a strategy outlining the safety and effectiveness of AI for the healthcare sector. The full White House announcement is available here.
    • Summary: On February 13, 2024, the United States Patent and Trademark Office (USPTO) announced new guidance on the inventorship of AI-assisted inventions. This guidance is in fulfillment of the first of three directives placed on the USPTO by President Biden’s landmark Executive Order. The guidance serves to help stakeholders determine whether there is significant enough human contribution to an invention to which AI also contributed to qualify for a patent and provides instructions on the appropriate naming of inventor(s). The guidance provides that at least one human must make a significant enough contribution to be named as an inventor to qualify for a patent. The Under Secretary of Commerce for Intellectual Property and Director of the USPTO, Kathi Vidal, clarified that the question of inventorship will only be raised under examination if an examiner determines from the filed record or extrinsic evidence that one or more of the named inventors may not have invented the claimed subject matter. Comments are due by May 13, 2024.
    • Summary: The White House released a fact sheet detailing actions taken by the Biden-Harris administration to strengthen AI safety and security, consistent with several directives in the administration’s AI Executive Order (EO) issued in October 2023. The fact sheet highlights that the administration has completed all of the 90-day actions outlined in the EO and advanced other directives that the EO tasked over a longer timeframe. The progress update highlights that the administration has used Defense Production Act authorities to compel developers of certain AI systems to report information, including AI safety test results, to the Department of Commerce. In addition, the Department of Commerce has proposed a draft rule that would compel US cloud companies that provide computing power for foreign AI training to report these activities. The administration has also completed risk assessments covering AI’s use in every critical infrastructure sector. The White House AI Council, consisting of top officials from a wide range of federal departments and agencies, has been convened to oversee these efforts. You can read more about the AI EO’s impact on healthcare and future implementation deadlines in our On the Subject.
    • Summary: The White House announced a voluntary “commitment” from 28 healthcare provider and payor organizations that develop, purchase and implement AI-enabled technology for their own use in healthcare activities to ensure AI is deployed safely and responsibly in healthcare. The fact sheet is available here. Specifically, these companies are committing to:
        1. Developing AI solutions to optimize healthcare delivery and payment by advancing health equity, expanding access, making healthcare more affordable, improving outcomes through more coordinated care, improving patient experience and reducing clinician burnout.
        2. Working with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective and safe (FAVES) AI principles, as established and referenced in HHS’s Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) rule.
        3. Deploying trust mechanisms that inform users if content is largely AI-generated and not reviewed or edited by a human.
        4. Adhering to a risk management framework for applications powered by foundation models.
        5. Researching, investigating and developing AI swiftly but responsibly.

    • Summary: The White House announced a voluntary “commitment” from seven leading AI companies to help advance the safe, secure and transparent development of AI technology. Specifically, the companies are committing to:
      1. Internal and external security testing of their AI systems prior to release
      2. Sharing information on managing AI risks with industry, government, civil society and academia
      3. Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights
      4. Facilitating third-party discovery and reporting of vulnerabilities in their AI systems
      5. Developing robust technical mechanisms, such as watermarking systems, to ensure that users know when content is AI generated
      6. Publicly reporting the capabilities, limitations and areas of appropriate and inappropriate use of their AI systems
      7. Prioritizing research on the societal risks posed by AI technology, including avoiding harmful bias and discrimination, and protecting privacy
      8. Developing and deploying advanced AI systems to help address important societal challenges.

      The White House also announced that the Office of Management and Budget (OMB) will soon release draft policy guidance for federal agencies to ensure that the development, procurement and use of AI systems centers around safeguarding the American people’s rights and safety.

    • Summary: Representatives Ted Lieu (D-CA), Ken Buck (R-CO) and Anna Eshoo (D-CA), along with Senator Brian Schatz (D-HI), introduced a bill that would create a National Commission on Artificial Intelligence (AI). The National AI Commission Act would establish a national commission focused on the question of regulating AI, tasked with reviewing the United States’ current approach to AI regulation, making recommendations on any new office or governmental structure that may be necessary, and developing a risk-based framework for AI. The commission would bring together experts from civil society, government, industry and labor, along with those with technical expertise, to develop a comprehensive framework for AI regulation, and would include 20 commissioners, 10 appointed by Democrats and 10 by Republicans. A one-pager is here. The full bill text is here. The bill would also require three reports:
      • Interim report: At the six-month mark, the commission will submit to Congress and the President an interim report, which will include proposals for any urgent regulatory or enforcement actions.
      • Final report: At the year mark, the commission will submit to Congress and the President its final report, which will include findings and recommendations for a comprehensive, binding regulatory framework.
      • Follow-up report: One year after the final report, the commission will submit to Congress and the President a follow-up report, which will include any new findings and revised recommendations. The report will include necessary adjustments pertaining to further developments since the final report’s publication.
    • Summary: The White House announced new actions to advance the research, development and deployment of responsible AI. The announced actions include an updated National AI Research and Development Strategic Plan, updated for the first time since 2019 to provide a roadmap that outlines key priorities and goals for federal investments in AI research and development. The announcements also include a new Request for Information (RFI) on National Priorities for Artificial Intelligence to inform the Administration’s ongoing AI efforts, as well as a new report from the US Department of Education’s Office of Educational Technology on AI and the Future of Teaching and Learning: Insights and Recommendations, summarizing the risks (including algorithmic bias) and opportunities related to AI in teaching, learning, research and assessment. The White House Office of Science and Technology Policy (OSTP) sought public comment on a variety of topics in the RFI, including questions relating to protecting rights, safety, and national security; advancing equity and strengthening civil rights; bolstering democracy and civic participation; promoting economic growth and jobs; and innovating in public services. Comments were due on June 7, 2023.
    • Summary: The President’s Council of Advisors on Science and Technology launched a working group on generative AI to help assess key opportunities and risks, and provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly, and safely as possible. The working group, which held its most recent public meeting on Friday, May 19, 2023, invites submissions from the public on how to identify and promote the beneficial deployment of generative AI, and on how best to mitigate risks. The call for submissions outlines five specific questions for which the working group is seeking responses. Submissions were due August 1, 2023.
    • Summary: The White House announced new actions to further promote responsible American innovation in AI and protect people’s rights and safety. The actions include announcing $140 million in funding to launch seven new National AI Research Institutes, an independent commitment from leading AI developers to participate in a public evaluation of AI systems and draft policy guidance by the Office on Management and Budget on the use of AI systems by the US government for public comment. The White House noted that these steps build on the administration’s previous efforts to promote responsible innovation, including the Blueprint for an AI Bill of Rights and related executive actions announced in Fall 2022, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier in 2023.
    • Summary: The Civil Rights Division of the United States Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC) and the US Equal Employment Opportunity Commission (EEOC) released their “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” which reiterates each agency’s commitment to applying existing legal authorities to the use of automated systems and innovative new technologies.
    • Summary: NTIA issued a request for comment on an AI accountability ecosystem. Comments were due on June 12, 2023. Specifically, NTIA sought feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.
    • Summary: This document establishes five principles and associated practices to support the development of policies and procedures to protect civil rights and promote democratic values in the design, use and deployment of AI systems.

LEGISLATIVE ACTIVITY

United States

  • Summary: Members of Congress, including Elizabeth Warren (D-MA), Judy Chu (D-CA), Jerrold Nadler (D-NY), and several others sent a letter to the Centers for Medicare & Medicaid Services (CMS) Administrator, Chiquita Brooks-LaSure, expressing concern about how Medicare Advantage (MA) plans use AI to guide coverage decisions, arguing that such plans use AI tools to wrongly deny care and contradict provider evaluations. Because of this, the lawmakers believe more guidance is needed to protect access to care for Medicare beneficiaries, and they outlined several specific measures CMS should implement to ensure recent CMS regulations and policies align with this goal.

    • Summary: On February 20, 2024, Speaker of the House Mike Johnson (R-LA) and House Democratic Leader Hakeem Jeffries (D-NY) announced the creation of a bipartisan Task Force on Artificial Intelligence (AI). The task force consists of 24 members representing key committees of jurisdiction, with 12 appointed by each party leader. It will be jointly led by Chair Jay Obernolte (R-CA) and Co-Chair Ted Lieu (D-CA) and, according to the announcement, is tasked with producing a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction. Obernolte also specified in the announcement that the report is expected to detail the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI. The timing of the report was not specified.
    • Summary: The White House released a fact sheet detailing actions taken by the Biden-Harris administration to strengthen AI safety and security, consistent with several directives in the administration’s AI Executive Order (EO) issued in October 2023. The fact sheet highlights that the administration has completed all of the 90-day actions outlined in the EO and advanced other directives that the EO assigned over a longer timeframe. The progress update highlights that the administration has used Defense Production Act authorities to compel some AI system developers to report information, including AI safety test results, to the Department of Commerce. In addition, the Department of Commerce has proposed a draft rule that would require US cloud companies that provide computing power for foreign AI training to report these activities. The administration has also completed risk assessments covering AI’s use in every critical infrastructure sector. The White House AI Council, consisting of top officials from a wide range of federal departments and agencies, has been convened to oversee these efforts. You can read more about the AI EO’s impact on healthcare and future implementation deadlines in our On the Subject.
    • Summary: Senators Elizabeth Warren (D-MA), Michael Bennet (D-CO), Lindsey Graham (R-SC), and Peter Welch (D-VT) sent a letter to Senate Majority Leader Chuck Schumer (D-NY) expressing support for a new independent federal agency to oversee and regulate large technology firms. The letter comes on the heels of the Schumer-led AI Insight Forum series, where the Senators say participants made clear that Congress must regulate AI. Further, the letter cites the need for a single agency acting across sectors, as opposed to a potentially fragmented approach across numerous federal agencies.
      As mentioned in the press release, Warren, Bennet, Graham, and Welch have all introduced legislation to create a dedicated agency to regulate dominant digital platforms on behalf of the American people. Last year, Graham and Warren introduced the Digital Consumer Protection Commission Act to establish a new commission to regulate online platforms, promote competition, protect privacy, protect consumers and strengthen national security. In 2022, Bennet and Welch introduced the Digital Platform Commission Act to create an expert federal agency able to regulate digital platforms to protect consumers, promote competition and defend the public interest.
    • Summary: Rep. Greg Murphy (R-NC), co-chair of the GOP Doctors Caucus, released a letter to the FDA Commissioner with various questions regarding FDA’s oversight of products that include AI. The letter cites the progress that has been made and the ongoing debate on AI regulatory policy in the European Union, and a desire to ensure FDA has the authority and tools it needs to cultivate an environment that advances innovation in the development and use of AI in healthcare. The letter includes questions on topics such as FDA’s plans for reviewing 510(k) and PMA requests for ML and AI devices, and FDA’s views on voluntary alternative pathways for approval of AI products and integrated devices, a liability safe harbor for AI-enabled devices and physicians using such devices, and the role that stakeholders such as medical licensing bodies, credentialing bodies and medical societies should play in establishing standards for use of AI.
    • Summary: The House Energy and Commerce Committee held a hearing titled, “Leveraging Agency Expertise to Foster American AI Leadership and Innovation,” to explore concerns and opportunities related to the development and use of AI, emphasizing the need for federal oversight and safeguards. Members discussed the implications of AI deployment in various sectors, including healthcare, privacy, and national security. The hearing focused on evaluating existing executive orders, legislative proposals, and strategies to strike a balance between harnessing AI’s transformative power and ensuring responsible, secure, and ethical AI development and implementation. Key health-related issues discussed included the importance of a comprehensive data privacy standard that addresses health information outside of traditional health records protected by HIPAA; concerns regarding the integration of AI in healthcare; and FDA’s active role in approving AI applications in healthcare, including FDA’s close coordination with ONC.
    • Summary: On November 15, 2023, Senators John Thune (R-SD), Amy Klobuchar (D-MN), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM), members of the Senate Committee on Commerce, Science, and Transportation, introduced the Artificial Intelligence (AI) Research, Innovation, and Accountability Act of 2023. The bill requires the National Institute of Standards and Technology (NIST) to facilitate new standards development and develop recommendations for risk-based guardrails. The bill also provides for increased transparency notifications regarding generative AI to users by large internet platforms, the performance of detailed risk assessments by companies deploying critical-impact AI, and certain certification frameworks for critical-impact AI systems. The bill also requires the Commerce Department to establish a working group to provide recommendations for the development of voluntary, industry-led consumer education efforts for AI systems. Additional information on the bill is available here.
    • Summary: Senate Finance Committee leaders Ron Wyden (D-OR) and Mike Crapo (R-ID) released the text of letters that the senators sent to three agencies, including the US Department of Health and Human Services (HHS), asking for information on how the agencies are currently utilizing AI and what steps they are taking to ensure AI is used appropriately. The letter to HHS highlights the benefits and risks associated with the use of AI tools, including predictive algorithms, rules-based automated systems and more advanced AI systems, such as generative AI. It also includes 20 questions in the following categories: clinically appropriate adoption and deployment of AI tools; deploying AI to ensure program integrity; AI as a tool to improve outcomes for families; utilization of AI by state Medicaid agencies; utilization of AI by Medicare Advantage plans; and cross-agency collaboration and leveraging private-public partnerships. The letter requests one or more staff briefings beginning on November 2, 2023, to further the senators’ understanding of how HHS and its sub-agencies employ AI-enhanced tools, as well as how federal government programs approach coverage, reimbursement, and regulation of AI tools and the services they facilitate.
    • Summary: In response to President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, OMB announced it is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The draft policy focuses on three main pillars: (1) strengthening AI governance structures in federal agencies; (2) advancing responsible AI innovation; and (3) managing risks from the use of AI by directing agencies to adopt mandatory safeguards for the development and use of AI that impacts the rights and safety of the public. The comment period closed on December 5, 2023.
    • Summary: Led by Senator Ed Markey and Representative Pramila Jayapal, 15 lawmakers released a letter urging President Biden to incorporate the White House Blueprint for an AI Bill of Rights into his upcoming AI Executive Order or subsequent executive orders. The letter states: “By turning the AI Bill of Rights from a non-binding statement of principles into federal policy, your Administration would send a clear message to both private actors and federal regulators: AI systems must be developed with guardrails. Doing so would also strengthen your Administration’s efforts to advance racial equity and support underserved communities, building on important work from previous executive orders. As a substantial purchaser, user, and regulator of AI tools, as well as a significant funder of state-level programs, the federal government’s commitment to the AI Bill of Rights would show that fundamental rights will not take a back seat in the AI era. Finally, implementing these principles will not only protect communities harmed by these technologies, it will also help inform ongoing policy conversations in Congress and show clear leadership on the global stage.”
    • Summary: Senator Ron Wyden, chair of the Senate Finance Committee, Senator Cory Booker, and Representative Yvette Clarke introduced the Algorithmic Accountability Act of 2023, which regulates AI systems making “critical decisions,” defined to include decisions regarding health care. The bill requires companies to conduct impact assessments for effectiveness, bias and other factors when using AI to make critical decisions, and to report select impact assessment documentation to the FTC. The bill also tasks the FTC with creating regulations that provide assessment instructions and procedures for ongoing evaluation, requires the FTC to annually publish an anonymized report to create a public repository of automated critical decision data, and provides the FTC with resources to hire 75 staff and establish a Bureau of Technology to enforce the law. The sponsors stated the bill is intended to be “a targeted response to problems already being created by AI and automated systems.” A summary of the bill is here. The full bill text is here.
    • Summary: On September 12, 2023, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled “Oversight of A.I.: Legislating on Artificial Intelligence.” The purpose of the hearing was to discuss oversight of AI, including the harms and benefits of AI both domestically and internationally. Committee members examined potential safeguards and oversight efforts that will balance innovation and privacy protections. Contact us to request a full summary of the hearing. Key takeaways include:
      1. AI is expanding in use and, with that, poses a potential harm to national security.
      2. Concerns were expressed about AI misinformation in the upcoming 2024 election, the negative impacts of AI on children, and the harm of data collection and AI use overseas with different regulations than in the U.S.
      3. Concerns were expressed regarding loss of jobs to AI, though there was also recognition of the economic benefits of AI.
      4. Committee members emphasized the need for bipartisan support for carefully crafted legislation.
    • Summary: Senate HELP Ranking Member Bill Cassidy (R-LA) released a request for information on ways to improve privacy protections of health data to safeguard sensitive information, including questions related to the impact of AI on health data. Specifically, questions relate to: (1) the privacy challenges and benefits AI poses for entities that collect, maintain, or disclose health care data, whether within the HIPAA framework or outside of it; (2) how AI-enabled software and applications implement privacy by design, and what can be done to mitigate privacy vulnerabilities when developing algorithms for health care purposes; and (3) the extent to which patients should be able to opt out of datasets used to inform algorithmic development, including how opt-out mechanisms should be structured. Senator Cassidy intends to use stakeholder feedback to help inform modernizing the Health Insurance Portability and Accountability Act (HIPAA). Comments on the RFI were due on September 28, 2023.
    • Summary: Senate HELP Ranking Member Bill Cassidy (R-LA) released a white paper entitled “Exploring Congress’ Framework for the Future of AI: The Oversight and Legislative Role of Congress Over the Integration of Artificial Intelligence in Health, Education, and Labor,” which examines considerations for Congress to take into account for potential regulation of AI. The white paper includes background information on how AI is used today in healthcare, education, and labor, and potential risks as AI continues to rapidly evolve, as well as the need for a flexible regulatory framework that allows for regulating AI under specific use cases, rather than a one-size-fits-all approach. The white paper also seeks stakeholder feedback on a series of questions for AI, including the use of AI in health care related to supporting medical innovation, as well as medical ethics and protecting patients. Comments on the white paper were due on September 22, 2023.
    • Summary: Senate Majority Leader Chuck Schumer released a Dear Colleague Letter on September 1, 2023, which addressed upcoming policy priorities, including AI. The letter notes that in September, Leader Schumer will build on the Senate’s bipartisan efforts around AI. On September 13, Senator Schumer will convene the first in a series of bipartisan AI Insight Forums to give all Senators the opportunity to learn, expand, and build upon their knowledge of AI and stay ahead of AI’s rapid development. These forums will convene AI developers, civil rights and worker advocates, researchers, and other key thinkers to lay a foundation for action on AI, with the letter noting the bipartisan interest in developing a comprehensive AI framework to bolster AI innovation in a safe and responsible way. The letter notes that the Senate must “treat AI with the same level of seriousness as national security, job creation, and our civil liberties.”
    • Summary: Senate Majority Leader Chuck Schumer launched his SAFE Innovation Framework to regulate AI in a speech on June 21, 2023. The SAFE Innovation Framework has five central pillars:
      1. Security: Protecting national security, as well as economic security for workers by mitigating and responding to job loss
      2. Accountability: Supporting the deployment of responsible systems to address concerns around misinformation and bias, as well as addressing concerns around intellectual property, copyright and liability
      3. Foundations: Requiring that AI systems align with democratic values, promoting AI’s societal benefits while avoiding the potential harms, and ensuring that the American government is a leader in writing the rules of the road on AI
      4. Explain: Determining what information the federal government needs from AI developers and deployers to be a better steward of the public good, and what information the public needs to know about an AI system, data or content
      5. Innovation: Supporting US-led innovation in AI, focusing on technologies that unlock the potential of AI and maintaining US leadership in the technology

      The speech also emphasized the need for Congress to address AI. Senator Schumer stated he would launch a series of “AI Insight Forums” in Fall 2023 featuring top AI developers, executives, scientists, community leaders, workers, national security experts and others, which will form the foundation for more detailed policy proposals for Congress. In addition, Senator Schumer has committees developing bipartisan legislation and a bipartisan gang of non-committee chairs working to further develop the Senate’s policy response. A one-page summary of the SAFE Innovation Framework is available here.

    • Summary: Representatives Ted Lieu (D-CA), Ken Buck (R-CO), and Anna Eshoo (D-CA), along with Senator Brian Schatz (D-HI), introduced a bill that would create a National Commission on Artificial Intelligence (AI). The National AI Commission Act would create a national commission to focus on the question of regulating AI and will be tasked with reviewing the United States’ current approach to AI regulation, making recommendations on any new office or governmental structure that may be necessary, and developing a risk-based framework for AI. The group would be comprised of experts from civil society, government, industry and labor, and those with technical expertise coming together to develop a comprehensive framework for AI regulation, and include 20 commissioners, of whom 10 will be appointed by Democrats and 10 by Republicans. A one-pager is here. The full bill text is here. The bill would also require three reports:
      • Interim report: At the six-month mark, the commission will submit to Congress and the President an interim report, which will include proposals for any urgent regulatory or enforcement actions.
      • Final report: At the year mark, the commission will submit to Congress and the President its final report, which will include findings and recommendations for a comprehensive, binding regulatory framework.
      • Follow-up report: One year after the final report, the commission will submit to Congress and the President a follow-up report, which will include any new findings and revised recommendations. The report will include necessary adjustments pertaining to further developments since the final report’s publication.
    • Summary: The Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing entitled “Oversight of A.I.: Rules for Artificial Intelligence.” The witnesses for the hearing were Sam Altman, CEO of OpenAI; Christina Montgomery, Chief Privacy and Trust Officer at IBM; and Gary Marcus, Professor Emeritus at New York University. The hearing focused on the explosion of AI, especially with the release of ChatGPT (developed by OpenAI). Senators on both sides of the aisle expressed concerns about the lack of regulatory oversight of the development and deployment of AI across various sectors, including financial services and healthcare. Witnesses were in agreement that regulation is needed, although they did not necessarily agree on what approach should be taken. Lawmakers compared the dramatic increase in use of ChatGPT (and comparable services) to the proliferation of social media and expressed concerns about delaying regulation until it had already caused too much harm.
    • Summary: Senator Schumer announced a collaboration with stakeholders to develop a legislative framework for regulating AI. This effort is expected to involve multiple congressional committees and will focus on four key areas of concern: “Who,” “Where,” “How” and “Protect.” The first three areas aim to inform users, provide the government with the necessary data to regulate AI technology and minimize potential harm. The final area, Protect, is dedicated to aligning these systems with American values and ensuring that AI developers fulfill their promise of creating a better world.
    • Summary: This resource tracks 2023 state legislation related to general AI issues, providing information and summaries of each bill. The National Conference of State Legislatures also has resources tracking state legislation related to AI from prior years, beginning in 2019.

INDUSTRY COMMENTARY

    • Summary: The Coalition for Health AI, an alliance of major health systems and tech companies, has released a blueprint to build trust in artificial intelligence’s use in healthcare. The blueprint focuses on framing risks, measuring impacts, allocating risk resources and strong governance.
    • Summary: The US Chamber of Commerce’s Artificial Intelligence Commission on Competitiveness, Inclusion, and Innovation released a comprehensive report on the promise of artificial intelligence, while calling for a risk-based regulatory framework that will allow for its responsible and ethical deployment. Notably, the Chamber’s release makes clear that this framework is a preferred alternative to the White House Office of Science and Technology Policy’s recent Blueprint on Artificial Intelligence. The report outlined several major findings from the commission’s year-long deep dive on AI.
    • Summary: The Connected Health Initiative’s Health AI Task Force released recommended principles to guide policymakers in taking action on AI. The principles cover areas such as research, quality assurance and oversight, access and affordability, bias, education, collaboration and interoperability, workforce issues, ethics, and privacy and security.

PAST EVENTS

The first installment of McDermott’s webinar series exploring tech trends impacting healthcare addresses how healthcare organizations can overcome uncertainty when navigating AI innovation and how best to implement and govern AI while remaining sensitive to associated risk management. Moving beyond general calls for AI governance, the panel provides practical steps organizations can take to assess AI tech opportunities, implement ongoing tech evaluation and quality control programs, build integrated compliance and engineering teams, and begin building a scalable, built-for-growth AI compliance framework.

During this webinar, McDermott and experts from McKinsey & Company explored the latest healthcare AI developments impacting investors and portfolio companies. We discussed a range of topics on AI, including the regulatory landscape and policy outlook, how investors can think about the next phase of investing in AI, and working through legal pain points. McKinsey speakers highlighted AI applications, use cases, and trends in the healthcare industry. The broader discussion also included considerations for AI governance and data rights, as well as value creation and value preservation in AI from a risk management standpoint.

2024 is shaping up to be the year that many hospitals and health systems turn their artificial intelligence (AI) planning into action, making it a pivotal time to establish effective AI governance. During this webinar, we discuss considerations for hospitals and health systems implementing AI tools, spanning governance, research, data use and employment.

Healthcare policy leaders from McDermott+Consulting share their insights on the state of generative AI oversight by Congress and federal agencies and how companies can actively participate in the burgeoning AI policymaking process and the development of regulations governing AI in healthcare. They also provide tips on securing Medicare coverage for such innovative offerings.

Learn how healthcare providers and health services companies can seize the opportunities presented by generative AI and large language models while navigating the industry’s fragmented and complex regulatory landscape. We’ll explore which regulations from the US, EU and UK you should be watching and the types of liability raised by health AI, and offer practical steps your organization can take to develop, implement and acquire AI solutions.

As AI’s potential role in healthcare expands at an unprecedented rate, it is increasingly clear that no one-size-fits-all policy can address the complexities of this dynamic space. During this webinar, digital health strategists and algorithmic auditors from O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), an algorithmic risks consulting company for AI and automated systems, unravel the challenges and provide the tools you need to conceptualize and manage these risks effectively.

GET IN TOUCH