
Key Takeaways | AI and the Next Frontier: Understanding What’s Ahead in Privacy and Cybersecurity

Overview



During this webinar, Global Privacy & Cybersecurity Partners Kathryn Linsky, Romain Perray and Stephen Reynolds shared best practices to help businesses prepare for the rapidly evolving risks and challenges posed by artificial intelligence (AI).

Top takeaways included:

  1. Understanding business goals: Companies should identify the AI tools they currently use or plan to use. As part of this assessment, companies should also consider the types of decisions the AI makes in order to determine the level of risk it presents. Once the assessment is complete, they should develop governance frameworks and policies to comply with applicable laws and regulations.
  2. Being transparent: Although comprehensive legislation has been slow to develop in the United States, the patchwork of existing and newly introduced federal and state laws focuses on transparency. Companies using AI should prepare to disclose, in their privacy policies or other external-facing documents, how they are using AI, what personal information they are using to train their AI, where this personal information comes from and what decisions the AI is making.
  3. Correcting biases and avoiding hallucinations and injection attacks: AI is susceptible to bias and to hallucinations, generating output based on unreliable learned patterns rather than accurate data and presenting misinformation as fact. AI is also vulnerable to prompt injection attacks designed to manipulate its output. When training and using AI, be wary of treating the information it provides as a source of truth or presenting it as such.
  4. Conducting risk/impact assessments: US states such as California and the European Union are imposing obligations on companies that engage with AI to conduct privacy risk assessments. The EU’s draft AI Act requires companies to perform an impact assessment that includes monitoring to reduce the risk of bias, while California recently released draft regulations on these assessments that define AI broadly. To comply with regulations in both the US and the EU, companies should prepare to provide nonprivileged responses that can be shared with regulators regarding their training, use and processing of personal information for AI.
  5. Ensuring the highest level of protection in the EU: The EU’s approach to regulating AI is consistent with its general privacy-by-design approach. In addition to the safeguards for automated decision-making in the General Data Protection Regulation, the EU appears ready to adopt the AI Act by the end of this year or the beginning of next year, with an effective date in 2026. If adopted, the AI Act will apply to companies both in and outside the EU and will impose varying levels of transparency and accountability obligations depending on the risk the AI presents. Companies operating in the EU should begin assessing the risk level of their AI systems to prepare for the AI Act.

Explore AI’s implications in more industries and service areas, and learn how you can seize opportunities and mitigate risk across the legal, business and operational areas of your organization, through our webinar series.

