Key Takeaways | AI and the Next Frontier: Understanding What’s Ahead in Board Oversight and Corporate Governance

Overview

During this webinar, Healthcare partners Jennifer Geetter and Michael Peregrine discuss why corporate governing boards and general counsel should design their own frameworks for responsible oversight of artificial intelligence (AI) development and deployment by their organizations. To do so, they can leverage emerging AI guidance and regulatory frameworks.

Top takeaways included:

  1. Boards of directors have an important role in helping institutions craft comprehensive and scalable approaches to the responsible development and deployment of AI tools. Although there is currently no single standard to guide boards on implementing effective AI policies, they can look to existing corporate governance approaches for direction. Boards should be involved in the strategic application of AI from the beginning, including coordinating the internal roles of appropriate compliance officers, focusing on trust and safety concerns, and engaging proactively in mitigation if there are missteps. Clear communication, risk-tolerance setting, and values establishment at the board level provide a comprehensive, consensus-seeking method for guiding the company’s direction.
  2. Boards can start engaging with AI by identifying primary areas of focus for AI investment and deployment. They can also consider creating a specialized board committee with delegated powers to help integrate AI tools in a safe, successful way and begin an ongoing process of AI governance.
  3. Effective leadership starts, but does not end, with the board. Legal, compliance, risk management and other normative thought leaders need to implement scalable compliance strategies that make sense for the organization’s specific interactions with AI. This does not mean starting from scratch: An effective AI-centered compliance program should integrate with the organization’s broader compliance program, which addresses adjacent issues (e.g., privacy and data strategies, FDA requirements, marketing and other advertising- and claims-related considerations, research and development requirements, due diligence, and contracting and other legal liability considerations, among many others). While generative AI presents certain new ethical and legal questions, many AI issues can be effectively managed within or in concert with current compliance regimes.
  4. Establishing appropriate AI use standards is an important board responsibility. Compliance efforts should address when AI tools, rather than non-AI-enabled tools, are appropriate for a particular task. A sense of “AI FOMO” can push companies to adopt AI solutions, or to brand solutions as “AI powered” (or similar representations), so as not to miss the current AI moment. But succumbing to that pressure creates avoidable compliance headaches, and a sober, deliberate strategy for the adoption of AI tools is essential.
  5. Companies should start small in this developing space. If adopting or developing AI tools is appropriate, companies may want to consider beginning with low-risk/high-reward AI implementations, such as back-office support tools, which allow the workforce to use the technology while the compliance team begins creating an effective regulatory structure. We are all experiencing just the first chapter of AI, and creating scalable compliance programs will pay dividends.
  6. Company-wide AI literacy is essential. Creating common understandings, establishing key distinguishing definitions, and implementing a corporate glossary to help workforce members feel more comfortable with AI terms are necessary steps in creating an effective compliance structure and helping define which tools and processes are (and are not) appropriate for the business. AI goes far beyond generative AI, and the compliance structure should reflect AI’s diversity.
  7. Companies should self-regulate during today’s nascent AI regulation phase. While AI regulation efforts are underway at both the state and federal levels, these efforts are in their infancy. For the foreseeable future, it is important for companies to self-regulate, both to protect themselves from unforeseen risks and to build trust with customers. This includes creating measurable policies that appropriately govern the use of AI, allowing the organization to react and adapt as the technology and related laws progress. This is work of which lawyers and compliance teams can be proud.

Explore AI’s implications in more industries and service areas, and how you can seize opportunities and mitigate risk across the legal, business and operational areas of your organization, through our webinar series.

Webinar top takeaways prepared by Jacob Weissman, associate in McDermott’s Washington, DC office.
