By Pillsbury Winthrop Shaw Pittman LLP • March 4, 2025
California leads the way with new AI laws promoting transparency, privacy and ethical practices across various industries.

January 1, 2025, marked the start of a series of significant AI laws going into effect in California. California's 18 new AI laws represent a significant step toward regulating this space, establishing requirements regarding deepfake technology, AI transparency, data privacy and the use of AI in the health care arena. These laws reinforce the state's desire to be a pioneer in this space. This article provides a detailed look at the enacted legislation, addresses compliance timelines and serves as a guide for businesses navigating California's evolving AI landscape.

New California Laws Enacted to Keep AI in Check

California's new laws seek to keep AI in check across a wide range of industries, including social media, entertainment, health care, elections and more. Of the 38 AI bills sent to California Governor Gavin Newsom, 18 were signed into law. In this article, we highlight both broad and industry-specific laws that may impact businesses, such as mandates for AI transparency, consumer data protections, safeguards against misuse of AI in media and health care, and the establishment of mechanisms to address emerging concerns such as neural data privacy and deceptive content in elections.

These laws, including amendments to the California Consumer Privacy Act (CCPA) and specific requirements for AI training data, generative AI (gen AI) disclosures and content labeling, impose new compliance obligations that span a range of sectors and demand significant operational and technological adjustments. Generally, enforcement from state agencies will come in the form of informal inquiries and formal enforcement actions seeking injunctive relief, fines and, in some cases, criminal penalties. While some laws permit a private right of action, most focus on state oversight to keep AI transparent and protect the public from misuse. Of the 18 laws signed, SB 926, AB 1836, AB 2655 and AB 2839 include a private right of action, whereas the remaining laws are either silent on this issue or explicitly prohibit it.

In this update, we break down California's latest AI laws, covering each law's key details, effective dates and must-know deadlines, along with actionable steps to help with your compliance program. We also spotlight notable bills that Gov. Newsom vetoed.

Legislation Enacted into Law

General Enacted AI Legislation

Two critical general AI laws were enacted in this session: AB 2885 and AB 2013. AB 2885 establishes a standard definition of AI, while AB 2013 requires documentation regarding the data developers use to train a gen AI system or service. Under AB 2885, "artificial intelligence" is defined as an "engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments." AB 2013 imposes a host of new compliance obligations on developers, requiring them to post on their websites a high-level summary of the datasets used to train the generative AI system or service.

Social Media, Politics and Entertainment AI Legislation

Eight new laws fall into this category.
They encompass laws intended to protect performers' rights, prohibit non-consensual deepfake pornography, extend the laws addressing child sexual abuse material to AI-generated material, and address the use of deceptive AI-generated content in the political context.

AB 1831, effective January 1, 2025, expands the scope of existing child pornography laws to include content that is digitally altered or generated by AI systems. This law overlaps with SB 926.

SB 926, effective January 1, 2025, criminalizes the creation and distribution of non-consensual deepfake pornography in California. It specifically prohibits distributing realistic deepfake intimate images without consent if the distributor knew or should have known the distribution would cause serious emotional distress. It applies to individuals and businesses in California involved in distributing such images, except those under 18. Victims have a private right of action, allowing them to sue for damages if their images are shared without consent. The law is enforced by the California Attorney General, and violations can result in civil penalties, fines or criminal charges, depending on the offense's severity.

SB 981, effective January 1, 2025, requires social media platforms in California to establish reporting tools for users to report cases of sexually explicit digital identity theft. It defines "sexually explicit digital identity theft" as unauthorized, digitally altered images or videos of a person that depict intimate acts or body parts in a way that appears authentic. Platforms must temporarily hide reported content from public view, confirm receipt of the report within 48 hours and provide a status update within seven days. Platforms must complete their assessment within 30 days, extendable to 60 days in certain cases. While the law does not specify penalties or civil liability for noncompliance, failure to meet its requirements could result in legal challenges.

AB 2602, effective January 1, 2025, protects individuals from unauthorized use of their digital replicas in personal or professional service contracts. The law applies to contract provisions for new performances, fixed on or after January 1, 2025, that allow the use of digital replicas of a person's voice or likeness. Such provisions are unenforceable if the replica would replace a performance the individual would otherwise have given in person, the provision lacks a reasonably specific description of the intended uses, and the individual was not represented by legal counsel or a union. Enforcement will be handled by the Division of Labor Standards Enforcement within the Department of Industrial Relations, under the direction of the Labor Commissioner.

AB 1836, effective January 1, 2025, restricts the use of digital replicas of deceased personalities for commercial purposes without prior consent from their estate, with protections applying retroactively. Violators may be liable for $10,000 or the actual damages suffered, and enforcement is handled through civil litigation. Exceptions to the consent requirement include uses for news, satire, scholarship, documentaries, fleeting appearances or in specific advertisements. The law provides a private right of action, allowing estates to protect a deceased individual's likeness for up to 70 years after death.

AB 2655, effective January 1, 2025, mandates that large online platforms block or label "materially deceptive" election-related content, particularly deepfakes that could harm a candidate's reputation or election chances.
The law applies to platforms with over one million California users and requires removal of flagged deceptive content within 72 hours, along with labeling tools for identifying false or inauthentic content leading up to elections. Exemptions apply to satire, parody and certain media publications. Candidates, officials and California's Attorney General may seek injunctive relief, and candidates depicted in deceptive content can file lawsuits for damages. This law, like AB 2839, has been challenged in court and is currently subject to a stipulated stay of enforcement.

AB 2839, effective September 17, 2024, regulates deceptive AI-generated content in election advertisements in California. It prohibits distributing "materially deceptive" content likely to harm a candidate's reputation or electoral chances, with exceptions for satire, parody and candidates portraying themselves if properly disclosed. The prohibition applies within 120 days before an election in California and, in specified cases, within 60 days after an election. Candidates and election officials can file for injunctions and seek equitable relief against those distributing misleading content, though the law does not provide for damages. Following a legal challenge, a federal judge substantially limited the law's scope, allowing only the audio message disclosure requirement to stand while blocking other provisions due to First Amendment concerns.

AB 2355, effective January 1, 2025, mandates clear disclosures on political ads generated or significantly altered by AI, aiming to prevent undisclosed AI use that could mislead voters. This law applies to a specific subset of political ads involving AI-generated or modified images, audio or video, covering ads related to federal, state or local candidates and ballot measures. Disclosures must state, "Ad generated or substantially altered using artificial intelligence," in a clear format appropriate to the ad's medium. The law is enforced by the Fair Political Practices Commission, which can impose fines of up to $5,000 per violation, though it does not grant a private right of action. Exemptions are provided for genuine news organizations, satire, parody and live news coverage with proper disclosure.

Health Care Services AI Legislation

Three new laws regulate the use of AI in connection with health care services: communicating with patients, making medical decisions, and protecting neural and biological data privacy.

AB 3030, effective January 1, 2025, requires health care providers using gen AI for patient communications to include a disclaimer indicating AI involvement and instructions for contacting a human health care provider. This law applies to hospitals, clinics and physician offices that use AI to communicate clinical information. Exemptions apply to communications reviewed by a licensed human health care provider. Enforcement falls under the Medical Board of California and the Osteopathic Medical Board, with no private right of action specified.

SB 1120, effective January 1, 2025, mandates that only physicians, not AI systems, can make final decisions regarding medical necessity in health insurance utilization reviews. While AI can support administrative tasks, it cannot independently determine medical necessity. Health insurers and health care plans must also disclose when AI is involved in these processes. The California Department of Managed Health Care enforces the law, with penalties for noncompliance, but there is no private right of action.
SB 1223, effective January 1, 2025, amends the CCPA to categorize neural data as sensitive personal information, with new limits on its use. As neurotechnology advances, both California and Colorado have introduced laws to protect neural and biological data privacy; Colorado's law, effective August 2024, adds protections for "biological" and "neural" data under the Colorado Privacy Act, applying to businesses with large-scale data processing in that state. Both laws require businesses to obtain consent before processing neural data and to provide opt-out options for consumers, especially for advertising or profiling purposes. Enforcement will be handled by each state's attorney general, with penalties for noncompliance, though neither law provides a private right of action for individuals.

Data Privacy AI Legislation

Three other laws address the intersection of AI and data privacy, clarifying that AI-generated data is treated as personal information, requiring disclosures about AI-generated content, and regulating calls involving AI.

AB 1008, effective January 1, 2025, updates the CCPA to clarify that AI-generated data is treated as personal information. This law applies to businesses using AI systems capable of generating or processing personal data, requiring them to give consumers the same rights for AI-generated data as for other personal information. AB 1008 acknowledges that AI can create personal data by learning from existing information, mentioning real people in its responses, or inferring details about them, and it ensures that any personal data AI generates is protected just like other personal information. The California Attorney General and the California Privacy Protection Agency will enforce the law, with penalties including civil fines based on violation severity. Although AB 1008 does not grant a separate private right of action, consumers can still sue for data breaches under existing CCPA protections.

SB 942, effective January 1, 2026, requires "covered providers," as defined, to give users clear and conspicuous tools to identify AI-generated content. To comply, covered providers must offer a free AI detection tool that allows users to assess whether an image, video, audio recording or a mix of these was made or changed by the provider's AI system. The law also requires companies to give users the option to add a clear and noticeable label to images, videos, audio or any mix of these if they were created or altered by the company's AI; the label must be easy to understand and suited to the type of content. Violations can incur civil penalties of $5,000 per violation, enforceable by the Attorney General or local authorities, although there is no private right of action.

AB 2905, effective January 1, 2025, regulates the use of automatic dialing-announcing devices with artificial voices in California. It applies to telecommunications companies and any entity using prerecorded messages generated or altered by AI for phone calls. To comply, businesses must first notify the recipient with a natural voice that an artificial voice will follow, along with details on the call's nature, and provide contact information. Consent is required before playing the message. The California Public Utilities Commission enforces the law, with criminal penalties for violations, though there is no private right of action.

Government and Education AI Legislation

Three laws address the use of AI by government and in schools.
SB 896, also known as the "Generative Artificial Intelligence Accountability Act," regulates California state agencies' use of gen AI. It requires the Office of Emergency Services to assess gen AI's risks to critical infrastructure, including potential mass casualty events, with annual reports to the Legislature. Agencies must disclose AI-generated communications and provide human contact options. The Department of Technology must annually update the Governor on gen AI developments to ensure transparency, accountability and public safety.

AB 2876 tasks the Instructional Quality Commission with incorporating model library standards, including media literacy and AI literacy, into California's K-12 curriculum during its next revision after January 1, 2024. The Commission must also consider including media literacy content in mathematics, science and history-social science instructional materials when those materials are adopted after January 1, 2025.

SB 1288 requires the Superintendent of Public Instruction to establish a working group focused on the safe and effective use of AI in public schools. The group must develop guidance by January 1, 2026, and model policies by July 1, 2026, addressing academic integrity, data privacy and equity. It is also tasked with evaluating current and future AI developments in education, and a final report with findings and recommendations is due to the Legislature by January 1, 2027.

Vetoed AI Legislation

AB 1949, which was vetoed on November 28, 2024, aimed to restrict the use of minors' data by requiring parental or self-consent for data processing and would have empowered the California Privacy Protection Agency (CPPA) to regulate such data use. The bill would have strengthened privacy protections for anyone under 18 by preventing businesses from collecting, using, sharing or selling their personal data without permission. Teens aged 13 to 17 would have been required to give their own consent, while children under 13 would have required consent from a parent or guardian.

Another bill, SB 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was also vetoed. If enacted, SB 1047 would have held tech companies legally responsible for harms caused by their AI models and would have required them to implement a "kill switch" to disable AI systems if they were misused or became uncontrollable. Both bills were designed to enhance protections in emerging technology fields, particularly around data privacy and the safe use of AI.

Conclusion/Next Steps

California's new AI laws set regulations for artificial intelligence use across industries, potentially setting a precedent for other states. These enacted laws aim to protect consumers and strengthen data privacy measures, and they further emphasize California's commitment to safeguarding consumer interests. With compliance obligations having commenced in January 2025, companies operating in California must act quickly to ensure that appropriate compliance controls are implemented. Companies should also anticipate that this is only the beginning: subsequent legislative sessions, both within and outside California, will likely spawn even more AI legislation.