Overview
On January 13, 2025, California Attorney General (AG) Rob Bonta issued two legal advisories: one for businesses generally (General Advisory) and one specific to healthcare entities (Health Advisory). These advisories identify existing California laws that already apply to artificial intelligence (AI). The advisories demonstrate the AG’s willingness to push the limits of applying existing law to AI and emphasize the need for businesses to follow several key principles:
- Use AI responsibly, ethically, and safely.
- Understand how AI systems are trained, what data they use, how they generate outputs, and the risks to individuals, the environment, competition, and the public.
- Be transparent about when AI is used and when data is used to train AI.
- Test, validate, and audit AI systems.
In Depth
PART ONE: GENERAL AI GUIDANCE
The first advisory describes how existing laws apply to AI, including laws related to consumer protection, unfair and fraudulent business practices, competition, discrimination and bias, and abuse of data.
California’s Unfair Competition Law
The AG asserted that AI can violate the Unfair Competition Law when used to deceive or harm consumers. Relying on the Unfair Competition Law’s “intentionally broad and sweeping language,” the AG provided several examples:
- False Advertising: Falsely advertising the accuracy, quality, availability, or utility of AI systems (e.g., misrepresenting the level of human involvement, or claiming that the AI performs better than a human or is free from bias). The AG claims these practices can also violate California’s False Advertising Law. AI deployers and developers should ensure that advertisements relating to an AI product accurately represent the risks connected with the AI product.
- Deception: Using AI to foster or advance deception, such as by creating deepfakes, chatbots, or voice clones, or failing to disclose that AI was used to create media.
- Impersonation: Using AI to impersonate someone without consent, such as by using a person’s name, voice, signature, photograph, or likeness.
- Other Unfair Practices: Using AI in a manner that is unfair, such as “using AI in a manner that results in negative impacts that outweigh its utility, or in a manner that offends public policy, is immoral, unethical, oppressive, or unscrupulous, or causes substantial injury.”
- Using AI in a manner that violates other laws.
California’s Competition Laws
The AG suggested AI can violate competition laws when, for example, AI systems set pricing or when a dominant AI company takes anticompetitive actions.
California’s Civil Rights Laws
The AG indicated that AI can violate California’s Unruh Civil Rights Act and Fair Employment and Housing Act if the AI discriminates based on protected characteristics (like sex, race, religion, disability), including actions that have an adverse or disproportionate impact regardless of intent. AI developers also may have difficulty complying with laws that require explaining adverse actions, such as the federal Fair Credit Reporting Act and Equal Credit Opportunity Act, as well as the California Consumer Credit Reporting Agencies Act.
California’s Election Misinformation Prevention Laws
The AG’s office also warned against using AI for unlawful election-related purposes, such as using undeclared chatbots to influence votes and using deepfakes to impersonate candidates.
Data Protection Laws
The AG’s office highlighted privacy-related rules. For example, AI must comply with the California Consumer Privacy Act (CCPA) requirements for transparency, honoring individual rights, and limiting processing to what is “reasonably necessary and proportionate.” The AG also suggests that AI can violate the California Invasion of Privacy Act (CIPA) where the AI is trained by recording or listening to private electronic communications or where, without consent, the AI system examines or records voiceprints to determine the truth or falsity of the statements. As we’ve previously noted, there has already been an explosion of CIPA private litigation seeking $5,000 per violation based on cookies and other online tracking technologies. Adding AI-related claims on top would only add fuel to the fire.
New California AI Laws
Finally, the AG’s office summarized several new AI laws that took effect on January 1, 2025. These laws include disclosure requirements regarding AI training datasets and the use of AI in telemarketing. Other rules require AI developers to make free and accessible tools available to detect whether generative AI created certain content, impose more specific contract terms when creating likenesses with AI, expand prohibitions and reporting requirements related to exploitative uses of AI, and require supervision of AI tools in healthcare settings.
PART TWO: APPLICATION OF EXISTING CALIFORNIA LAW TO AI IN HEALTHCARE
The Health Advisory copies many of the same principles and warnings as the General Advisory, along with additional guidance on unlawful practices under health consumer protection laws, discrimination, and patient privacy and autonomy laws.
Unlawful Uses of AI in Healthcare
The AG identified several potential unlawful uses of AI, such as:
- Denying health insurance claims using AI or other automated decision-making systems in a manner that overrides doctors’ views about necessary treatment.
- Using generative AI or other automated decision-making tools to draft patient notes, communications, or medical orders that include erroneous or misleading information, including information based on stereotypes relating to race or other protected classifications.
- Determining patient access to healthcare using AI or other automated decision-making systems that make predictions based on patients’ past healthcare claims data, with the result that patients or groups with a history of limited access to healthcare are denied services on that basis, while patients or groups with robust past access receive enhanced services.
- Double-booking a patient’s appointment, or creating other administrative barriers, because AI or other automated decision-making systems predict that the patient is the “type of person” more likely to miss an appointment.
- Conducting cost/benefit analysis of medical treatments for patients with disabilities using AI or other automated decision-making systems that are based on stereotypes that undervalue the lives of people with disabilities.
California Professional Licensing Laws
While several state legislatures have enacted or proposed legislation to limit the use of AI to make clinical decisions, California has not yet done so. In the absence of such a law, the AG’s office explains that using AI or other automated decision tools to make decisions about patients’ medical treatment, or to override providers’ determinations about a patient’s medical needs, may violate California’s ban on the practice of medicine by corporations and other “artificial legal entities” (Bus. & Prof. Code, § 2400 et seq.). California’s corporate practice of medicine doctrine is one of the most comprehensive in the country, expressly prohibiting unlicensed persons and entities from holding themselves out as practicing medicine or controlling the practice of medicine. The applicability of the corporate practice of medicine doctrine to the use of AI raises interesting questions regarding the extent to which healthcare entities may rely upon AI, but the AG’s office does not believe these tools should be used to override physician decision-making.
S.B. 1120
The AG’s office highlighted California’s S.B. 1120, passed in the 2023-2024 legislative session, which limits healthcare service plans’ ability to use AI or other automated decision systems to deny coverage. This law emphasizes the need for human oversight over coverage decisions and places clear boundaries around the technology’s use by healthcare service plans.
Anti-Discrimination Laws
The AG’s office stressed that California’s anti-discrimination laws prohibit discrimination by an entity receiving “any state support,” including entities that provide healthcare (Gov. Code § 11135). The AG’s office explained that these laws prohibit discriminatory practices likely to be caused by AI. Importantly, the AG’s office noted that even if AI is applied consistently to all patients, if the results are discriminatory, the AI technology will be considered discriminatory. For example, the AG’s office explained that it may be unlawful in California to determine patient access to healthcare using AI that makes predictions based on patients’ past healthcare claims data, where such data reflects historical disparities in access to care and therefore causes the AI to perpetuate such disparities.
Patient Privacy and Autonomy Laws
Finally, the AG’s office explained that it will use California’s state medical privacy and other laws to protect the privacy and autonomy of Californians. For example, the California Confidentiality of Medical Information Act (CMIA) requires regulated entities to preserve the confidentiality of patients’ medical information. The legal advisory notes that the CMIA is, in some respects, more stringent than federal health privacy laws, such as the Health Insurance Portability and Accountability Act. The AG’s office highlighted the importance of ensuring compliance with CMIA and limiting access and improper use of sensitive information, which may include the use of patient data to train AI.
The legal advisory also notes that certain California laws require physicians to provide information that a reasonable person in the patient’s position would need in order to provide informed consent to any proposed treatment (the laws cited in the legal advisory seem to relate only to mental health rehabilitation centers and acute care hospitals). Accordingly, providers subject to such laws should assess whether they should disclose the use of any particular AI tools to patients. In the context of research, the AG’s office does not go so far as to state that the use of AI in connection with any “medical experiment” subject to California’s Protection of Human Subjects in Medical Experimentation Act must always be disclosed. However, the legal advisory reminds stakeholders of the requirement under such legislation to provide the “Experimental Subject’s Bill of Rights” to participants in any such medical experiment, including information regarding study procedures and any drugs and devices used.
***
Our cross-practice team continues to closely monitor developments in AI. Reach out to one of the authors of this client alert or your regular McDermott lawyer to discuss the potential legal implications for your business.