Overview
2024 is shaping up to be the year in which many hospitals and health systems turn their artificial intelligence (AI) planning into action, making it a pivotal time to establish effective AI governance. During this webinar, presenters Jiayan Chen, Jennifer Geetter, Abigail Kagan and Alya Sulaiman discussed considerations for hospitals and health systems implementing AI tools, spanning governance, research, data use and employment.
Top takeaways included:
- AI can respond to some of the most pressing challenges facing hospitals and health systems, including by streamlining administrative workflows, improving patient experience and engagement, and addressing provider burnout and empathy fatigue. Organizations should be prepared to embrace and steer the use of AI rather than attempt to stop it.
- The risks of AI are both universal and healthcare-specific. Some risks, such as model drift, are not unique to healthcare, while others, such as automation bias, are especially consequential in the healthcare context. Regulators are increasingly concerned about inadequate oversight of AI tools in healthcare, including insufficient transparency and risk management.
- An AI governance framework enforces organizational priorities through standardized requirements and processes. AI governance is more than a one-size-fits-all policy: a well-designed framework takes a comprehensive approach to AI risk management when evaluating, developing, procuring and deploying AI.
- Key ingredients for an AI governance framework include a cross-functional, multi-stakeholder governance committee; an AI acceptable use standard or policy to clarify how personnel can use AI in their day-to-day work; AI training to upskill stakeholders and create AI literacy; and plans for ongoing AI oversight and monitoring to identify and help mitigate AI-related risks.
- The challenges related to the use of health information to develop or train AI are not entirely novel; providers already contend with similar issues, such as obtaining necessary consents, meeting compliance obligations and allocating risk. However, the use of health information for AI-related purposes raises heightened concern because of the size of the data sets involved and the ongoing, iterative nature of AI training, development and model improvement.
- There is a common fallacy that “teaching” an AI model is simply teaching a computer and therefore does not constitute research; in fact, developing an AI model with identifiable data can constitute research even before safety testing begins. A variety of tools can assist organizations in making this assessment when AI projects fall on the line between research and healthcare operations (as defined under HIPAA). Critical to this issue is investment in AI literacy for institutional review boards (IRBs): empowering internal IRBs to understand AI terminology and the important questions to ask, and pressure-testing external IRBs’ expertise in AI development.
- The use of AI in employment law and human resources (HR) processes necessarily involves legal hazards related to “black-box” decision-making on sensitive topics and the “Big Brother” aspects of electronic monitoring. To mitigate these risks, HR teams can take steps such as addressing the HR team’s own potential biases through training; assessing the quality of the information entered into AI systems and other “black-box” decision-makers; evaluating chatbot accessibility; and focusing tools on increasing the diversity of applicant pools rather than on hiring quotas. Organizations should also actively communicate with their employees about acceptable uses of AI tools.
- Thoughtfulness is key to addressing the challenges and risks presented by AI. With AI, everyone is learning on the job, and organizations and their leaders should be prepared to pause and change course.