Introduction
California is arguably at the epicenter of the artificial intelligence (AI) revolution. In 2023, more than half of all global venture funding for AI startups went to companies in California. With deep ties to leading AI companies, research institutions, and policy labs, California has not only incubated the technology but is also actively shaping how it is governed. From regulating health care algorithms to combating sexually explicit deepfakes, California has spearheaded a legislative agenda for AI that other states may look to as they craft their own AI governance approaches.
Given California’s leadership on AI and its potential to influence the AI priorities of its peers, we wanted to explore how California’s approach to AI governance has evolved over time and what AI-related topics have recently gained traction in the California legislature. We find that California enacted a total of 18 AI-related laws in 2024, covering topics that range from the protection of individuals’ digital likeness to guidance on the safe use of AI in education. California’s 2024 AI laws build upon the state’s longer history of regulating technology. In 2025, Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (Senate Bill 53), which establishes transparency requirements for advanced AI models. The California legislature also lined up several other AI-related bills, setting the stage for California to expand its patchwork of AI-related laws that govern specific sectors and issues.
See our accompanying CSET Emerging Technology Observatory (ETO) blog, which explores eight of these 18 laws in AGORA, CSET’s living collection of AI-related laws, regulations, standards, and governance documents. Using AGORA’s thematic tags, we find that these laws emphasize themes related to accessing and exchanging information about AI systems, themes also present in the California bills that are currently moving through the legislature.
AI Laws Abound in 2024
In 2024, California enacted a total of 18 laws that govern AI. These laws cover six primary topics as shown below (Table 1).
Table 1. AI Laws Enacted in California in 2024
| Law | Date Approved | Date Effective | Topic |
|---|---|---|---|
| AB 2602 | September 17, 2024 | January 1, 2025 | Protections of individuals’ digital likeness and personal information |
| AB 1836 | September 17, 2024 | January 1, 2025 | Protections of individuals’ digital likeness and personal information |
| AB 2655 | September 17, 2024 | January 1, 2025 | Restrictions on AI-generated election content and sexually explicit material |
| AB 2839 | September 17, 2024 | September 17, 2024 | Restrictions on AI-generated election content and sexually explicit material |
| AB 2355 | September 17, 2024 | January 1, 2025 | Restrictions on AI-generated election content and sexually explicit material |
| SB 926 | September 19, 2024 | January 1, 2025 | Restrictions on AI-generated election content and sexually explicit material |
| SB 981 | September 19, 2024 | January 1, 2025 | Restrictions on AI-generated election content and sexually explicit material |
| SB 942 | September 19, 2024 | January 1, 2026 | Disclosures of AI use cases, training datasets, and AI-generated content |
| AB 2905 | September 20, 2024 | January 1, 2025 | Disclosures of AI use cases, training datasets, and AI-generated content |
| AB 2885 | September 24, 2024 | January 1, 2025 | Meaning of AI |
| AB 2013 | September 28, 2024 | January 1, 2025 | Disclosures of AI use cases, training datasets, and AI-generated content |
| AB 3030 | September 28, 2024 | January 1, 2025 | Checks on AI used for patient communication and other healthcare purposes |
| SB 1120 | September 28, 2024 | January 1, 2025 | Checks on AI used for patient communication and other healthcare purposes |
| AB 1008 | September 28, 2024 | January 1, 2025 | Protections of individuals’ digital likeness and personal information |
| SB 1288 | September 28, 2024 | January 1, 2025 | Guidance on the safe use of AI in education |
| AB 1831 | September 29, 2024 | January 1, 2025 | Restrictions on AI-generated election content and sexually explicit material |
| SB 896 | September 29, 2024 | January 1, 2025 | Disclosures of AI use cases, training datasets, and AI-generated content |
| AB 2876 | September 29, 2024 | January 1, 2025 | Guidance on the safe use of AI in education |
Some of these laws introduce new governance measures for AI, such as SB 1288, which requires the creation of new AI-related educational materials. Others amend or clarify the scope of existing legislation, such as AB 1008’s extension of enshrined privacy protections to cover AI systems (Box 1: Spotlighting AB 1008).
Box 1: Spotlighting AB 1008
| Summary |
|---|
| AB 1008 extends California’s enshrined privacy protections to AI systems. AI systems may be trained on and therefore divulge personal information, such as email addresses or health information, in response to user queries. AB 1008 affirms that such personal data is protected just like all other personal data under existing California law. |

| Protection offered by AB 1008 | Challenge raised by AB 1008 |
|---|---|
| Provides consumers with protections against large language models mining personal data for training | May require developers to alter the architecture of AI systems every time an individual requests their personal information to be scrubbed from an AI system |
As of writing, eight of these laws (SB 1120, AB 2885, AB 1831, SB 1288, SB 1381, AB 2013, AB 3030, and SB 896) have been annotated, or assigned thematic tags, in AGORA. In [ETO blog title and link] we highlight how these eight laws emphasize transparency themes and leverage disclosures as a governing strategy. The presence of these themes indicates that the eight laws can help the California government and populace better understand how AI is being developed and used.
| 👉 AGORA is a collection of AI-related laws, regulations, standards, and similar documents. Each document in AGORA is either an entire law or a thematically distinct, AI-focused portion of a longer text. An AGORA document includes metadata, summaries, and thematic codes developed through rigorous annotation and validation processes. Thematic codes are organized under a taxonomy that consists of several dimensions, including risk factors and governance strategies. |
|---|
In addition to the AI-related laws that were enacted in 2024, one California bill made headlines that same year despite failing to become law. SB 1047 passed both the California Assembly and Senate but was ultimately vetoed by the governor. This bill would have placed significant responsibilities on developers of powerful AI models, including:
- Adopting comprehensive safety protocols
- Assessing potential risks
- Undergoing independent audits
- Reporting safety incidents
Failure to meet these responsibilities would have led to substantial fines for AI developers. In addition, SB 1047 would have established a Board of Frontier Models to provide oversight of and guide regulations for powerful AI models. Further, it created a consortium to develop a framework for CalCompute, a publicly funded cloud computing platform that would have expanded compute access for research and academic communities to support safe and equitable AI development. Finally, the bill would have granted the attorney general authority to enforce its provisions and protect whistleblowers working in the AI industry.
Most AI companies that would have been subject to the bill, including OpenAI, had major reservations regarding its provisions. They worried that the law would inhibit innovation, especially in the open-source AI community, by making it nearly impossible to comply once models are publicly released. On the other hand, some prominent AI researchers, academics, and tech workers supported the bill’s attempts to address catastrophic risks from AI systems, while other observers argued that the bill did not go far enough. Governor Newsom’s public rationale for vetoing the bill was that its focus on regulating large-scale, expensive AI models could create a false sense of security, leaving smaller but potentially dangerous models unregulated. However, the governor pledged to continue regulating AI. Senator Scott Wiener, the bill’s author, called the veto a setback for AI accountability and warned that the decision left powerful developers essentially self-regulating in the absence of binding rules for AI.
Early Days of Technology Regulation
The batch of 2024 AI-related bills and laws did not emerge from a vacuum but instead built upon California’s long history of regulating emerging technologies, which dates back to the state’s founding in the 19th century.
In 1862, the state passed a law making it illegal to tap or intercept telegraph messages, with the goal of protecting the integrity and privacy of communications. This law is one of the earliest examples of technological regulation in California. Throughout the late 19th and early 20th centuries, California continued to pass laws regulating new scientific inventions, from hydraulic mining equipment during the gold rush to anti-smog devices in cars. By the 1970s, California had added rules to the penal code aimed at stopping the “proliferation of computer crime.” Around the same time, regulatory attention expanded to include consumer protection, electronic commerce, and digital privacy.
In the 21st century, AI has offered California its latest opportunity to push the regulatory frontier by passing targeted laws. For example, SB 1298, passed in 2012, permitted the testing of autonomous vehicles on public roads, as long as a licensed operator was present to assume control if needed. In 2018, SB 1001 amended the Business and Professions Code to prohibit the use of bots that intentionally mislead people about their artificial identity online. That same year, the California legislature recognized the growing importance of AI by including the term “artificial intelligence” in a resolution expressing the legislature’s support for the 23 Asilomar AI Principles, a set of ethical guidelines for the research and development of AI.
Although AI was officially recognized by name in California legislation in 2018, the technology-related issue that gained the most momentum that year was privacy. The 2010s witnessed a number of high-profile cyberattacks and data breaches. A pair of Californians, inspired by their own experience of having their tax filings copied and fraudulently claimed by someone else, began a ballot initiative movement for a law to protect consumer privacy. They easily collected enough signatures to qualify the measure for a vote but ultimately opted to have the legislature pass it as a bill instead. The California Consumer Privacy Act (CCPA) was enacted in 2018 and gave California consumers greater agency over how their personal information was used.
The CCPA primarily gives consumers the right to:
- Request information about data that a business has collected about them
- Request that this data be deleted
- Opt out of businesses selling their personal data
The CCPA also requires businesses to inform people about their data collection and implement reasonable measures to protect consumer data. By 2019, industry groups began lobbying for exemptions to key provisions of the law. Meanwhile, privacy advocates pushed for stronger protections, leading to the 2020 California Privacy Rights Act (CPRA) ballot initiative. This initiative passed with about 56 percent of the vote. It amended the CCPA by giving consumers greater control over their personal information and established a new enforcement body to uphold its consumer protections. Furthermore, the CPRA ensured that the CCPA could not be weakened by amendments from the legislature.
As it became apparent that AI systems would be trained on troves of data (some of it personal), the CCPA provided a readily adaptable foundation that could help address the privacy concerns that AI systems introduced. Although AI was not explicitly referenced in the CCPA, its privacy protections ultimately led to a California law enacted in 2024 that amended the CCPA to cover AI systems (Box 1: Spotlighting AB 1008).
AI Ascends
In September 2023, nearly one year after the launch of ChatGPT spurred discussions about AI’s new capabilities and risks, Governor Newsom signed Executive Order N-12-23 to examine the development, use, and risks of AI throughout the state and to create a principled process for integrating generative AI into the state government. The order asserts that “thoughtful, responsive governance at the beginning of a technology’s lifecycle can maximize equitable distribution of the benefits, minimize adverse impacts and abuse by bad actors, and reduce barriers to entry into emerging markets,” illustrating the delicate balance between staying at the forefront of new AI developments while minimizing risks. The executive order contains seven directives that instruct state agencies to study the use and impact of generative AI, set up AI risk analysis procedures, and outline pilot generative AI use cases.
For example, the first directive requires the Government Operations Agency, the California Department of Technology, the Office of Data and Innovation, and the Governor’s Office of Business and Economic Development to submit a report to the governor that identifies promising generative AI use cases in the state government and analyzes associated risks. The order marked a turning point, signaling California’s commitment to managing AI use in the state government.
During this time, the California legislature also began to take steps to ensure the safety of the state’s AI systems. In October 2023, the legislature passed one of the first California AI-related laws, AB 302, which requires the Department of Technology to inventory high-risk automated decision systems used by state agencies.
California’s AI-related legislative activity in 2024 therefore builds upon the state’s long history of regulating emerging technology issues, which spans from governing telegraph messages to enshrining consumer privacy protections and studying the use of generative AI in the state government.
What Comes Next
This year, a number of AI-related bills have worked their way through the California legislature. We highlight three of these 2025 bills that vary in scope and topic: SB 53, AB 412, and AB 1064. In October, SB 53 became law and AB 1064 was vetoed.
Passed in September, SB 53 is a pared-down version of SB 1047. After vetoing SB 1047, Governor Newsom convened a policy working group on AI frontier models, and the main differences between SB 53 and SB 1047 derive partly from that group’s report. SB 53 shifts focus from strict safety requirements to greater transparency (such as requiring developers to publicly share their safety protocols). See Table 3 in the accompanying CSET ETO blog for more.
Proposed bill AB 412 also addresses transparency measures, but in the context of empowering copyright owners harmed by AI infringement. It requires developers of generative AI models to track, document, and disclose whether any “covered material” was used in training their models, to respond to copyright owners’ information requests, and to support fingerprinting tools that identify potential infringement. AB 412 has been stalled until 2026 (see Table 4 in the accompanying CSET ETO blog for more details).
Proposed bill AB 1064 is more narrowly scoped than SB 53 or AB 412, responding to reports of teenage suicides linked to chatbots. It would prohibit developers from creating AI chatbots intended for use by children if they pose risks of emotional or psychological harm, and it would restrict the use of children’s personal data in training AI models without proper consent.
While these bills moved through the legislature, the California Civil Rights Council finalized new regulations on the use of AI and automated decision-making systems in employment in June 2025. Under the regulations, employers must:
- Retain AI-related employment records for four years
- Demonstrate that their tools do not produce discriminatory outcomes
- Ensure that assessments do not unlawfully reveal medical or disability-related information
- Remain accountable when using third-party AI tools that affect employment decisions
Earlier this year, California’s attorney general also provided clarity on how current laws apply to AI. In January, the attorney general issued a legal advisory that provided an overview of existing consumer protection, civil rights, competition, and data privacy laws that apply to entities that develop, sell, or use AI (such as the CCPA, which AB 1008 amended to cover AI systems). Shortly after, he issued a second advisory that focused on AI in healthcare.
Big Picture
California has a long history of regulating new technologies, from governing telegraph messages to enshrining consumer privacy protections. As states attempt to establish guardrails for AI absent clear federal guidance, California has again positioned itself at the forefront. As more bills move through the California legislature in 2025, a more coherent picture of how to effectively harness AI may emerge from California’s AI governance experiment, providing a potential model for other states and the federal government.