Overview
Hailed as the driver of the Fourth Industrial Revolution, artificial intelligence (AI) has emerged as the latest watershed technological development of the 21st century, as evidenced by its near ubiquity in discussions of everything from how the information economy functions to how art is made. The rapid development and deployment of AI technologies took society and governments by surprise.
The US government has since taken a variety of largely aspirational steps, but federal legislative activity has been absent. Recent actions by federal and state regulators offer insights into the priorities surrounding AI enforcement. Although these priorities, as well as the nature and scale of enforcement, may shift under the new Trump administration, recent enforcement actions demonstrate the tools available for AI enforcement.
In Depth
In the wake of the explosive emergence of ChatGPT, the Biden administration took a number of executive actions addressing AI, largely focused on policies and positions related to responsible AI use. In October 2023, President Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which underscored the administration’s commitment to the research, development, and deployment of responsible uses of AI. The executive order instructed the US Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content and required developers of AI models that implicate national security, national economic security, or public health and safety to share their safety test results and other critical information with the US government.
President Biden also formed the AI Safety Institute at the National Institute of Standards and Technology, which is part of the Commerce Department. Before the election, then-candidate Trump pledged to repeal the AI executive order, stating that it “hinders AI Innovation” and that “Republicans support AI Development rooted in Free Speech and Human Flourishing.” Although it remains to be seen how the Trump administration will differ in its treatment of AI, enforcement actions based on standard causes of action such as fraud and deceptive trade practices are likely to continue, even in the absence of new or more comprehensive federal regulation.
In Congress, Senator Chuck Schumer has led calls for a comprehensive legislative AI framework, although to date those calls have produced only legislative working groups and roadmap documents. In May 2024, the Bipartisan Senate AI Working Group released a “[r]oadmap for artificial intelligence policy in the United States Senate,” stating that “existing laws, including related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users.” Yet there has been no federal legislation.
There has, however, been substantial legislation passed at the state level. In June 2024, Colorado passed its Artificial Intelligence Act, the first comprehensive state framework addressing the development and deployment of AI, which requires developers of “high-risk artificial intelligence” systems to use reasonable care to protect consumers from the risks of “algorithmic discrimination.” In September 2024, California passed more than a dozen AI-related bills, including measures imposing disclosure requirements on AI companies regarding the datasets used to train their models, requiring healthcare providers to disclose when they use generative AI to communicate with patients, and limiting how healthcare providers and insurers can automate their services. Since 2023, Indiana has created an AI task force, and Illinois, Louisiana, Texas, and West Virginia have established committees on AI. Additionally, New Hampshire has made it a crime to fraudulently use deepfakes and has created a corresponding civil cause of action.
In the absence of federal legislation and regulation specifically targeting AI, agency officials and prosecutors are regulating AI through the enforcement of existing laws in areas such as consumer protection, financial services, privacy, and civil rights. Enforcement activity thus far has been limited and consistently focused on the impact of AI on individuals, including abuses of AI to perpetrate fraud or deceive consumers.
In the past year, prosecutors have targeted crimes committed through or with the assistance of AI, rather than scrutinizing the development of the technologies themselves. There has been a particular emphasis on the ways in which AI has been implicated in unfair or deceptive practices and the types of fraudulent schemes commonly prosecuted by both local and federal law enforcement.
FTC ACTIONS
On September 25, 2024, the US Federal Trade Commission (FTC) announced a new law enforcement sweep called Operation AI Comply. Lina M. Khan, chair of the FTC, emphasized that “[u]sing AI tools to trick, mislead, or defraud people is illegal” and “there is no AI exemption from the laws on the books.” The initiative specifically targeted operations that either used AI or sold AI technology that could be used in deceptive and unfair ways, as evidenced by the enforcement actions included in the initial rollout of the initiative:
- DoNotPay, an online subscription service offering assistance with commercial and legal issues, held itself out to be “the world’s first robot lawyer” and settled charges for false or unsubstantiated performance claims. The settlement required the company to pay $193,000 and issue a notice to consumers who subscribed to the service between 2021 and 2023 warning them about the limitations of the law-related features of the service.
- Ascend Ecom, Ecommerce Empire Builders, and FBA Machine – three separate companies operating online business opportunity schemes – were charged by the FTC for making representations about their AI-powered business models as part of deceptive earnings claims. As a result of these complaints, federal courts temporarily halted each scheme and put the entities under the control of a receiver.
- Rytr, which marketed and sold an AI writing assistant that produced AI-generated reviews, settled charges for providing its customers with the means to generate false and deceptive written content for consumer reviews.
In November 2024, the FTC charged Sitejabber, a company offering an AI-enabled consumer review platform, with having “deceived consumers by misrepresenting that ratings and reviews it published came from customers who experienced the reviewed product or service, artificially inflating average ratings and review counts.” More recently, the FTC has negotiated settlements with two companies involved in security systems – Evolv Technologies and IntelliVision Technologies Corp. Evolv was alleged to have made false representations about its AI-powered security screening system, claiming that it could detect weapons and ignore harmless personal items, including in sports stadiums, hospitals, and schools. Samuel Levine, the director of the FTC’s Bureau of Consumer Protection, underscored that representations about a technology’s abilities need to be especially accurate when they concern the safety of children. IntelliVision Technologies was alleged to have made deceptive claims about its AI-powered facial recognition software – specifically, that the software was free of gender and racial bias and that it had one of the highest accuracy rates on the market.
SEC ACTIONS
The FTC is not alone in its recently expressed commitment to investigating and applying existing regulatory frameworks to products and services involving or purporting to involve AI. On March 18, 2024, the US Securities and Exchange Commission (SEC) announced that it had settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their purported use of AI. The firms agreed to settle the SEC’s charges and pay $400,000 in collective civil penalties. The SEC found that Delphia did not have the AI and machine learning capabilities that it claimed. It also found that Global Predictions falsely claimed to be the “first regulated AI financial advisor” and misrepresented that its platform provided “[e]xpert AI-driven forecasts.”
Gary Gensler, chair of the SEC, explained that the two companies had marketed that they were using AI in certain ways when they were not. Gensler referred to this type of misleading claim as “AI washing” and cautioned that companies are capitalizing on the buzz created by the new technologies and purporting to use them to lure and mislead investors and customers. On January 25, 2024, the SEC issued an Investor Alert “to make investors aware of the increase of investment frauds involving the purported use of artificial intelligence (AI) and other emerging technologies.”
STATE REGULATOR ACTIONS
Some state regulators have also addressed AI technology through enforcement actions. Most recently, the Texas Attorney General’s Office (AGO) announced a “first-of-its-kind” settlement with a healthcare generative AI company, Pieces Technologies. The settlement resolved allegations that the company had made a series of false and misleading statements about the accuracy and safety of its AI products. According to the Texas AGO, the settlement “highlights the potential for enforcement against AI companies under existing laws that are not specific to AI,” and the office emphasized the importance of “exercising caution in developing claims about an AI product’s efficacy or performance.” This Texas case can also be read as a warning to companies to understand what their vendors and contractors are doing with respect to AI. Although the Texas AGO targeted the technology provider itself – not the four major Texas hospitals that deployed the technology – the action remains significant for those who use advanced technologies.
Other state attorneys general have also announced or demonstrated a commitment to act on AI, especially using traditional consumer protection theories. In 2023, 23 state attorneys general, including both Democrats and Republicans, responded to the National Telecommunications and Information Administration (NTIA) request for comment on AI policies and urged NTIA to ensure that “AI systems are valid and reliable, safe, secure, and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair.”
AREAS OF FUTURE ENFORCEMENT
Regulatory activity by the FTC and other agencies also highlights areas of interest for future enforcement. Specifically, the FTC has issued information-seeking orders to eight companies offering surveillance pricing products and services that incorporate data about consumers’ characteristics and behavior. Surveillance pricing refers to the “opaque market” for products sold by third-party intermediaries that “claim to use advanced algorithms, artificial intelligence, and other technologies, along with personal information about consumers – such as their location, demographics, credit history, and browsing or shopping history – to categorize individuals and set a targeted price for a product or service.” The FTC has also expressed interest in AI investments and the partnerships between AI companies, and it has commenced a study of the industry, including information requests to a number of major technology companies.
Finally, the FTC has expressed interest in the risks of generative AI, which can include chatbots, deepfakes, and voice clones. The agency has urged individuals and companies to consider:
- Whether they should be making or selling a synthetic media or generative AI product at all, given the potential opportunities for its misuse.
- Whether all reasonable precautions to mitigate the risks of the product are being taken.
- Whether the technology or product is misleading people about what they are seeing, hearing, or reading, particularly in the context of advertising.
- Whether the burden is on the consumer to detect AI-generated content.
Individuals and companies operating in this space should carefully weigh the FTC’s non-exhaustive list of concerns presented by both the creation and implementation of AI technologies.
As of now, federal regulations aimed at AI have yet to be issued, but the enforcement actions described above and the interests telegraphed by the FTC and other regulators confirm that companies need to remain apprised of and vigilant about the risks posed by AI technologies, even as they capitalize on their innovative qualities. While the impact of a new but slim Republican majority in Congress remains unclear, the Trump administration is expected to approach regulatory enforcement of AI with a lighter touch, particularly given its announcement of tech investor David Sacks as the White House “AI & Crypto Czar.” Nevertheless, bipartisan interest in the AI space and recent enforcement trends indicate that there will continue to be a focus on AI abuses under fraud and deceptive trade practices theories.