Chapter 25.1 - Transparency in Frontier Artificial Intelligence Act

California Business and Professions Code — §§ 22757.10-22757.16

Amended by Stats. 2025, Ch. 674, Sec. 1. (AB 853) Effective January 1, 2026. Operative August 2, 2026, pursuant to Section 22757.6.

As used in this chapter:

(a)“Artificial intelligence” or “AI” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(b)“Capture device” means a device that can record photographs, audio, or video content, including, but not limited to, video and still photography cameras, mobile phones with built-in cameras or microphones, and voice recorders.
(c)(1) “Capture device manufacturer” means a person who produces a capture device for sale in the state.

(2)“Capture device manufacturer” does not include a person exclusively engaged in the assembly of a capture device.
(d)“Covered provider” means a person that creates, codes, or otherwise produces a generative artificial intelligence system that has over 1,000,000 monthly visitors or users and is publicly accessible within the geographic boundaries of the state.
(e)“Digital signature” means a cryptography-based method that identifies the user or entity that attests to the information provided in the signed section.
(f)“Generative artificial intelligence system” or “GenAI system” means an artificial intelligence that can generate derived synthetic content, including text, images, video, and audio, that emulates the structure and characteristics of the system’s training data.

(g)“GenAI hosting platform” means an internet website or application that makes available for download the source code or model weights of a generative artificial intelligence system for use by a resident of the state, regardless of whether the terms of that use include compensation.
(h)(1) “Large online platform” means a public-facing social media platform, file-sharing platform, mass messaging platform, or stand-alone search engine that distributes content to users who did not create or collaborate in creating the content and that exceeded 2,000,000 unique monthly users during the preceding 12 months.

(2)“Large online platform” does not include either of the following:
(A)A broadband internet access service, as defined in Section 3100 of the Civil Code.
(B)A telecommunications service, as defined in Section 153 of Title 47 of the United States Code.
(i)“Latent” means present but not manifest.
(j)“Manifest” means easily perceived, understood, or recognized by a natural person.
(k)“Mass messaging platform” means a direct messaging platform that allows users to distribute content to more than 100 users simultaneously.
(l)“Metadata” means structural or descriptive information about data.
(m)“Personal information” has the same meaning as defined in Section 1798.140 of the Civil Code.
(n)(1) “Personal provenance data” means provenance data that contains either of the following:

(A) Personal information.

(B) Unique device, system, or service information that is reasonably capable of being associated with a particular user.

(2)“Personal provenance data” does not include information contained within a digital signature.
(o)“Provenance data” means data that is embedded into digital content, or that is included in the digital content’s metadata, for the purpose of verifying the digital content’s authenticity, origin, or history of modification.

(p)“System provenance data” means provenance data that is not reasonably capable of being associated with a particular user and that contains either of the following:
(1)Information regarding the type of device, system, or service that was used to generate a piece of digital content.
(2)Information related to content authenticity.
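
The split between subdivisions (n) and (p), provenance data reasonably tied to a particular user versus data about the device or the content itself, can be sketched in a few lines. This is an illustrative sketch only; the field names and classification sets below are hypothetical and do not appear in the statute.

```python
# Illustrative sketch of the personal vs. system provenance-data split in
# subdivisions (n) and (p). All field names here are hypothetical.

# Fields reasonably capable of being associated with a particular user
# fall under "personal provenance data" (subd. (n)).
PERSONAL_FIELDS = {"user_name", "account_id", "device_serial_number"}

# Fields describing the type of device, system, or service, or content
# authenticity, not tied to a user, fall under "system provenance data"
# (subd. (p)).
SYSTEM_FIELDS = {"device_model", "generator_name", "content_hash"}

def classify_provenance(metadata: dict) -> dict:
    """Split a metadata record into personal and system provenance data."""
    return {
        "personal": {k: v for k, v in metadata.items() if k in PERSONAL_FIELDS},
        "system": {k: v for k, v in metadata.items() if k in SYSTEM_FIELDS},
    }

record = {
    "device_model": "CameraX-100",  # type of device -> system
    "content_hash": "ab3f09...",    # content authenticity -> system
    "account_id": "u-29481",        # tied to a particular user -> personal
}
result = classify_provenance(record)
```

Note that under subdivision (n)(2), information contained within a digital signature is carved out of personal provenance data even when it identifies the signer.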

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

This chapter shall be known as the Transparency in Frontier Artificial Intelligence Act.

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

For purposes of this chapter:

(a)“Affiliate” means a person controlling, controlled by, or under common control with a specified person, directly or indirectly, through one or more intermediaries.
(b)“Artificial intelligence model” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.

(c)(1) “Catastrophic risk” means a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage to, or loss of, property arising from a single incident involving a frontier model doing any of the following:

(A) Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon.

(B) Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if the conduct had been committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense.

(C) Evading the control of its frontier developer or user.

(2)“Catastrophic risk” does not include a foreseeable and material risk from any of the following:
(A)Information that a frontier model outputs if the information is otherwise publicly accessible in a substantially similar form from a source other than a foundation model.
(B)Lawful activity of the federal government.
(C)Harm caused by a frontier model in combination with other software if the frontier model did not materially contribute to the harm.
(d)“Critical safety incident” means any of the following:
(1)Unauthorized access to, modification of, or exfiltration of, the model weights of a frontier model that results in death or bodily injury.
(2)Harm resulting from the materialization of a catastrophic risk.
(3)Loss of control of a frontier model causing death or bodily injury.

(4)A frontier model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk.

(e)(1) “Deploy” means to make a frontier model available to a third party for use, modification, copying, or combination with other software.

(2)“Deploy” does not include making a frontier model available to a third party for the primary purpose of developing or evaluating the frontier model.
(f)“Foundation model” means an artificial intelligence model that is all of the following:
(1)Trained on a broad data set.
(2)Designed for generality of output.
(3)Adaptable to a wide range of distinctive tasks.
(g)“Frontier AI framework” means documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.
(h)“Frontier developer” means a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power to train the frontier model as would meet the technical specifications found in subdivision (i).
(i)(1) “Frontier model” means a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations.
(2)The quantity of computing power described in paragraph (1) shall include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model.
(j)“Large frontier developer” means a frontier developer that together with its affiliates collectively had annual gross revenues in excess of five hundred million dollars ($500,000,000) in the preceding calendar year.

(k)“Model weight” means a numerical parameter in a frontier model that is adjusted through training and that helps determine how inputs are transformed into outputs.
(l)“Property” means tangible or intangible property.
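
The two numeric thresholds in subdivisions (i) and (j) can be checked mechanically. A minimal sketch follows; all compute and revenue figures in it are hypothetical examples, not data about any real developer.

```python
# Illustrative sketch of the thresholds in subdivisions (i) and (j).
# All figures below are hypothetical examples.

FRONTIER_COMPUTE_THRESHOLD = 10**26    # integer or floating-point ops, subd. (i)(1)
LARGE_DEVELOPER_REVENUE = 500_000_000  # annual gross revenue in dollars, subd. (j)

def is_frontier_model(training_ops: float, modification_ops: float) -> bool:
    """Subd. (i)(2): the original training run plus any subsequent
    fine-tuning, reinforcement learning, or other material modifications
    all count toward the threshold."""
    return training_ops + modification_ops > FRONTIER_COMPUTE_THRESHOLD

def is_large_frontier_developer(revenue_with_affiliates: float) -> bool:
    """Subd. (j): revenue is aggregated across affiliates for the
    preceding calendar year."""
    return revenue_with_affiliates > LARGE_DEVELOPER_REVENUE

# A pretraining run of 8e25 ops plus 3e25 ops of fine-tuning crosses the
# 10^26 line, even though neither stage does on its own.
print(is_frontier_model(8e25, 3e25))       # True
print(is_frontier_model(8e25, 1e25))       # False
print(is_large_frontier_developer(6.2e8))  # True
```

Note that subdivision (i)(2) makes the compute test cumulative: a model whose original training run falls below 10^26 operations can still become a frontier model after sufficient subsequent modification.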

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

(a)A large frontier developer shall write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer’s frontier models and describes how the large frontier developer approaches all of the following:

(1)Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework.
(2)Defining and assessing thresholds used by the large frontier developer to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds.
(3)Applying mitigations to address the potential for catastrophic risks based on the results of assessments undertaken pursuant to paragraph (2).
(4)Reviewing assessments and adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally.
(5)Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks.
(6)Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures pursuant to subdivision (c).

(7)Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties.
(8)Identifying and responding to critical safety incidents.
(9)Instituting internal governance practices to ensure implementation of these processes.
(10)Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms.
(b)(1) A large frontier developer shall review and, as appropriate, update its frontier AI framework at least once per year.
(2)If a large frontier developer makes a material modification to its frontier AI framework, the large frontier developer shall clearly and conspicuously publish the modified frontier AI framework and a justification for that modification within 30 days.

(c)(1) Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer shall clearly and conspicuously publish on its internet website a transparency report containing all of the following:

(A) The internet website of the frontier developer.

(B) A mechanism that enables a natural person to communicate with the frontier developer.

(C) The release date of the frontier model.

(D) The languages supported by the frontier model.

(E) The modalities of output supported by the frontier model.

(F) The intended uses of the frontier model.

(G) Any generally applicable restrictions or conditions on uses of the frontier model.

(2)Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer shall include in the transparency report required by paragraph (1) summaries of all of the following:

(A)Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer’s frontier AI framework.
(B)The results of those assessments.
(C)The extent to which third-party evaluators were involved.
(D)Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model.
(3)A frontier developer that publishes the information described in paragraph (1) or (2) as part of a larger document, including a system card or model card, shall be deemed in compliance with the applicable paragraph.

(4)A frontier developer is encouraged, but not required, to make disclosures described in this subdivision that are consistent with, or superior to, industry best practices.
(d)A large frontier developer shall transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of its frontier models every three months or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services, with written updates, as appropriate.

(e)(1) (A) A frontier developer shall not make a materially false or misleading statement about catastrophic risk from its frontier models or its management of catastrophic risk.

(B) A large frontier developer shall not make a materially false or misleading statement about its implementation of, or compliance with, its frontier AI framework.

(2)This subdivision does not apply to a statement that was made in good faith and was reasonable under the circumstances.

(f)(1) When a frontier developer publishes documents to comply with this section, the frontier developer may make redactions to those documents that are necessary to protect the frontier developer’s trade secrets, the frontier developer’s cybersecurity, public safety, or the national security of the United States or to comply with any federal or state law.

(2)If a frontier developer redacts information in a document pursuant to this subdivision, the frontier developer shall describe the character and justification of the redaction in any published version of the document to the extent permitted by the concerns that justify redaction and shall retain the unredacted information for five years.
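
The items a transparency report must contain under subdivision (c)(1)(A) through (G) map naturally onto a simple record type. The structure and example values below are an illustrative sketch only, not a format prescribed by the statute.

```python
from dataclasses import dataclass

# Illustrative sketch of the transparency-report contents required by
# subdivision (c)(1)(A)-(G). All example values are hypothetical.

@dataclass
class TransparencyReport:
    developer_website: str        # (A) internet website of the frontier developer
    contact_mechanism: str        # (B) way for a natural person to communicate
    release_date: str             # (C) release date of the frontier model
    languages_supported: list     # (D) languages supported
    output_modalities: list       # (E) modalities of output supported
    intended_uses: str            # (F) intended uses of the frontier model
    use_restrictions: str         # (G) generally applicable restrictions

report = TransparencyReport(
    developer_website="https://example.com",     # hypothetical
    contact_mechanism="safety@example.com",      # hypothetical
    release_date="2026-08-02",
    languages_supported=["en", "es"],
    output_modalities=["text", "image"],
    intended_uses="General-purpose assistance",
    use_restrictions="No use in weapons development",
)
```

Per paragraph (3), publishing these items inside a larger document, such as a system card or model card, also satisfies the requirement.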

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

(a)The Office of Emergency Services shall establish a mechanism to be used by a frontier developer or a member of the public to report a critical safety incident that includes all of the following:
(1)The date of the critical safety incident.
(2)The reasons the incident qualifies as a critical safety incident.
(3)A short and plain statement describing the critical safety incident.
(4)Whether the incident was associated with internal use of a frontier model.
(b)(1) The Office of Emergency Services shall establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models.
(2)The Office of Emergency Services shall take all necessary precautions to limit access to any reports related to internal use of frontier models to only personnel with a specific need to know the information and to protect the reports from unauthorized access.

(c)(1) Subject to paragraph (2), a frontier developer shall report any critical safety incident pertaining to one or more of its frontier models to the Office of Emergency Services within 15 days of discovering the critical safety incident.

(2)If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer shall disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law.

(3)A frontier developer that discovers information about a critical safety incident after filing the initial report required by this subdivision may file an amended report.
(4)A frontier developer is encouraged, but not required, to report critical safety incidents pertaining to foundation models that are not frontier models.
(d)The Office of Emergency Services shall review critical safety incident reports submitted by frontier developers and may review reports submitted by members of the public.

(e)(1) The Attorney General or the Office of Emergency Services may transmit reports of critical safety incidents and reports from covered employees made pursuant to Chapter 5.1 (commencing with Section 1107) of Part 3 of Division 2 of the Labor Code to the Legislature, the Governor, the federal government, or appropriate state agencies.

(2)The Attorney General or the Office of Emergency Services shall strongly consider any risks related to trade secrets, public safety, cybersecurity of a frontier developer, or national security when transmitting reports.
(f)A report of a critical safety incident submitted to the Office of Emergency Services pursuant to this section, a report of assessments of catastrophic risk from internal use pursuant to Section 22757.12, and a covered employee report made pursuant to Chapter 5.1 (commencing with Section 1107) of Part 3 of Division 2 of the Labor Code are exempt from the California Public Records Act (Division 10 (commencing with Section 7920.000) of Title 1 of the Government Code).

(g)(1) Beginning January 1, 2027, and annually thereafter, the Office of Emergency Services shall produce a report with anonymized and aggregated information about critical safety incidents that have been reviewed by the Office of Emergency Services since the preceding report.
(2)The Office of Emergency Services shall not include information in a report pursuant to this subdivision that would compromise the trade secrets or cybersecurity of a frontier developer, public safety, or the national security of the United States or that would be prohibited by any federal or state law.

(3)The Office of Emergency Services shall transmit a report pursuant to this subdivision to the Legislature, pursuant to Section 9795, and to the Governor.
(h)The Office of Emergency Services may adopt regulations designating one or more federal laws, regulations, or guidance documents that meet all of the following conditions for the purposes of subdivision (i):
(1)(A) The law, regulation, or guidance document imposes or states standards or requirements for critical safety incident reporting that are substantially equivalent to, or stricter than, those required by this section.

(B)The law, regulation, or guidance document described in subparagraph (A) does not need to require critical safety incident reporting to the State of California.
(2)The law, regulation, or guidance document is intended to assess, detect, or mitigate the catastrophic risk.
(i)(1) A frontier developer that intends to comply with this section by complying with the requirements of, or meeting the standards stated by, a federal law, regulation, or guidance document designated pursuant to subdivision (h) shall declare its intent to do so to the Office of Emergency Services.
(2)After a frontier developer has declared its intent pursuant to paragraph (1), both of the following apply:
(A)The frontier developer shall be deemed in compliance with this section to the extent that the frontier developer meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document until the frontier developer declares the revocation of that intent to the Office of Emergency Services or the Office of Emergency Services revokes a relevant regulation pursuant to subdivision (j).
(B)The failure by a frontier developer to meet the standards of, or comply with the requirements stated by, the federal law, regulation, or guidance document designated pursuant to subdivision (h) shall constitute a violation of this chapter.
(j)The Office of Emergency Services shall revoke a regulation adopted under subdivision (h) if the requirements of subdivision (h) are no longer met.
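
The two reporting clocks in subdivision (c), 15 days for an ordinary critical safety incident and 24 hours when the incident poses an imminent risk of death or serious physical injury, can be sketched as follows. The discovery date below is a hypothetical example.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the reporting windows in subdivision (c):
# 15 days from discovery under paragraph (1), or 24 hours under
# paragraph (2) when the incident poses an imminent risk of death
# or serious physical injury.

def reporting_deadline(discovered: datetime, imminent_risk: bool) -> datetime:
    """Return the latest time a report may be filed under subd. (c)(1)-(2)."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered + window

found = datetime(2026, 3, 1, 9, 0)  # hypothetical discovery time
print(reporting_deadline(found, imminent_risk=False))  # 2026-03-16 09:00:00
print(reporting_deadline(found, imminent_risk=True))   # 2026-03-02 09:00:00
```

Note that the 24-hour disclosure under paragraph (2) runs to an appropriate authority, such as a law enforcement or public safety agency with jurisdiction, rather than to the Office of Emergency Services.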

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

(a)On or before January 1, 2027, and annually thereafter, the Department of Technology shall assess recent evidence and developments relevant to the purposes of this chapter and shall make recommendations about whether and how to update any of the following definitions for the purposes of this chapter to ensure that they accurately reflect technological developments, scientific literature, and widely accepted national and international standards:

(1)“Frontier model” so that it applies to foundation models at the frontier of artificial intelligence development.
(2)“Frontier developer” so that it applies to developers of frontier models who are themselves at the frontier of artificial intelligence development.
(3)“Large frontier developer” so that it applies to well-resourced frontier developers.
(b)In making recommendations pursuant to this section, the Department of Technology shall take into account all of the following:
(1)Similar thresholds used in international standards or federal law, guidance, or regulations for the management of catastrophic risk and shall align with a definition adopted in a federal law or regulation to the extent that it is consistent with the purposes of this chapter.

(2)Input from stakeholders, including academics, industry, the open-source community, and governmental entities.
(3)The extent to which a person will be able to determine, before beginning to train or deploy a foundation model, whether that person will be subject to the definition as a frontier developer or as a large frontier developer, with an aim toward allowing earlier determinations if possible.

(4)The complexity of determining whether a person or foundation model is covered, with an aim toward allowing simpler determinations if possible.
(5)The external verifiability of determining whether a person or foundation model is covered, with an aim toward definitions that are verifiable by parties other than the frontier developer.

(c)Upon developing recommendations pursuant to this section, the Department of Technology shall submit a report to the Legislature, pursuant to Section 9795 of the Government Code, with those recommendations.
(d)(1) Beginning January 1, 2027, and annually thereafter, the Attorney General shall produce a report with anonymized and aggregated information about reports from covered employees made pursuant to Chapter 5.1 (commencing with Section 1107) of Part 3 of Division 2 of the Labor Code that have been reviewed by the Attorney General since the preceding report.
(2)The Attorney General shall not include information in a report pursuant to this subdivision that would compromise the trade secrets or cybersecurity of a frontier developer, confidentiality of a covered employee, public safety, or the national security of the United States or that would be prohibited by any federal or state law.

(3)The Attorney General shall transmit a report pursuant to this subdivision to the Legislature, pursuant to Section 9795 of the Government Code, and to the Governor.

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

(a)A large frontier developer that fails to publish or transmit a compliant document required to be published or transmitted under this chapter, makes a statement in violation of subdivision (e) of Section 22757.12, fails to report an incident as required by Section 22757.13, or fails to comply with its own frontier AI framework shall be subject to a civil penalty in an amount dependent upon the severity of the violation that does not exceed one million dollars ($1,000,000) per violation.

(b)A civil penalty described in this section shall be recovered in a civil action brought only by the Attorney General.

Added by Stats. 2025, Ch. 138, Sec. 2. (SB 53) Effective January 1, 2026.

The loss of value of equity does not count as damage to or loss of property for the purposes of this chapter.