Overview
During this webinar, Jennifer Geetter, Sharon Lamb and Alya Sulaiman discussed opportunities and challenges at the intersection of artificial intelligence (AI) and healthcare administration and delivery. Their observations focused on how different types of healthcare stakeholders (providers, plans, technology developers, life sciences companies, etc.) can think about AI risks and the steps to manage those risks.
Top takeaways included:
- Rate your risks: Risk management frameworks for AI in healthcare should be calibrated appropriately to the level of risk posed by the particular technology and its context of use. AI is being deployed in a variety of ways across the health ecosystem (e.g., diagnostic AI for radiology and imaging, clinical decision support, chatbots for patient engagement and education, ambient intelligence to streamline documentation, task and patient management, laboratory analysis and claims management). Not all AI risks are created equal across these use cases.
- AI can fail for different reasons, with different impacts and different legal effects: In navigating how and when to deploy AI, it is critical to define the problem you want to solve, what it looks like to solve that problem successfully and safely, and how the solution could fail and for whom (e.g., patients, health plan members, clinicians or employees relying on the tools). There are a variety of significant legal issues associated with AI and machine learning, along with many forms of liability for AI used in healthcare.
- Governance and compliance frameworks matter: Developing a board governance and sub-board compliance framework for AI is important to manage risk and create business opportunities. A common feature of prominent AI governance frameworks is that leadership and safeguards start at the top – the board cannot (and should not) do it all, but it does need to establish responsible AI principles consistent with its fiduciary obligations. Another common feature is the need to minimize risks – but how to do that is challenging. A first step is to itemize risks in terms of AI use cases, technological capabilities and limitations, and deployment models. A second step is to calibrate risks and benefits to determine where risks occur and which risks are acceptable.
- Go for the (early) win: As organizations begin to tackle AI, they can identify early “wins” – e.g., administrative, non-clinical use cases that are high reward but low risk – to build trust in AI across their organization and with their stakeholders. High-impact, low-risk use cases have a secondary benefit of creating an opportunity to refine and implement AI governance principles and frameworks.
- Generative AI technologies are increasingly available across healthcare: Although healthcare businesses have used AI technology for many years, generative AI is introducing new capabilities across healthcare at an unmatched pace and scale. Many generative AI models are actively being used to change the way electronic health records and administrative healthcare processes work. Generative AI refers to deep learning models that can generate text, images and other content based on their training data. Generative AI poses unique challenges in healthcare use cases due to risks of algorithmic bias, unexplainable outputs, the introduction of new security risk vectors and safety concerns.
- Location, location, location: Regulatory frameworks that impact the development, deployment and ongoing use of AI can differ across borders and, in the United States, across states. For example, the question of when AI or other software is a medical device is answered similarly but with certain key differences under the regulatory frameworks of the US Food and Drug Administration and the UK and EU Medical Device Regulations. Organizations should be aware of the evolving global policy and regulatory environment for AI, including the soon-to-be finalized EU AI Act.
- Where to start: There are many steps organizations can consider when planning to invest in, procure, deploy and manage AI. For example, the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0) provides a series of questions and standards organizations can use to develop their own AI risk framework. More broadly, organizations can invest in AI literacy and cross-functional education to empower employees to feel more comfortable with AI. This could take the form of developing a common vocabulary within the organization for key AI concepts; creating ongoing awareness of key AI technologies and authorized use cases; empowering internal institutional review boards (IRBs) and ethics committees to assess AI use cases and ask important questions; or drawing on external IRB expertise to pressure-test the applicability of research regulations and norms to the training and development of AI tools.
- Organizations should develop a diligence checklist of questions to ask when procuring health AI tools and systems. Examples include:
- Does the tool truly incorporate AI, and if so, what kind? Do we actually need an AI-enabled tool to solve our problem?
- Do we have the rights to use and share data to develop, train, deploy and enhance AI models? How will AI tool developers use our data?
- Is our AI use case/context regulated?
- Do we have an AI governance program in place? What about acceptable use standards?
- Who owns AI inputs, outputs or algorithmic model training weights? Who can actually protect them?
- Does our license agreement or contract with AI vendors include appropriate/sufficient rights, transparency requirements and related commitments (e.g., indemnification obligations)?
For more information on AI developments in healthcare, visit McDermott’s AI in Healthcare Resource Center.