Open Letter

Oppose OpenAI Writing Its Own Regulations in California

A coalition of child safety advocates, civil society groups, and tech policy organizations is calling on OpenAI to withdraw its ballot initiative and to stop promoting it as a standard for California or Congress.

March 18th, 2026
OpenAI
3180 18th Street
San Francisco, California 94110

RE: Parents & Kids Safe AI Act Ballot Initiative

To the Chief Executive Officer and relevant representatives of OpenAI:

We, the undersigned organizations — representing a diverse coalition of child safety advocates, civil society groups, and technology policy organizations across the country — urge you to withdraw the Parents & Kids Safe AI Act ballot initiative completely, to dissolve your 2026 ballot measure committee while the legislature works to enact a framework that provides the true and meaningful chatbot safety protections for children that this initiative does not, and to refrain from further attempts to bill this deeply flawed proposal as a potential national standard.

Though the text of this initiative claims to further both children's safety and California's leadership in AI, it does neither. Instead, it would encase California's child safety policy in amber for a generation — locking in protections too narrow to cover the harms children are actually experiencing, making it harder for victims and their families to seek justice, and establishing the dangerous precedent that the companies responsible for these harms can write the rules that govern them. While our organizations represent a diverse array of perspectives on this policy matter, we all agree on several principles for chatbot legislation that this initiative falls short of meeting:

  • Legislation should not close avenues for legal recourse when children experience harm from AI.
  • Legislation of this kind must account for risks we already know to be real and should not exempt large swaths of harm.
  • Legislation on this issue should be easy to revisit and should not handcuff future legislatures' ability to do so.

It is worth remembering why children's safety and AI is an urgent issue — and why OpenAI, in particular, cannot be trusted to regulate itself through this ballot proposal or legislation based on its text. Just last year, in April 2025, sixteen-year-old Californian Adam Raine took his own life after months of conversations with OpenAI's ChatGPT during which the chatbot discouraged him from seeking help from his parents, helped him research methods of suicide, offered to write his suicide note, and — in their final exchange — gave him what amounted to a pep talk before he hanged himself. OpenAI's own monitoring systems had flagged Adam's conversations repeatedly, and the company did nothing. In a similar case, ChatGPT helped push a 30-year-old Wisconsin man named Jacob Irwin into an acute mental health crisis. Jacob used ChatGPT for cybersecurity work and spiraled into delusion after the chatbot's addictive, deceptive, and sycophantic design convinced him that he had discovered a time-bending theory for traveling faster than light. Jacob's psychiatric condition was so severe that he required more than 60 days in a treatment facility. Adam and Jacob are not edge cases.

At least seven families have now filed suit against OpenAI over teen suicides and psychiatric hospitalizations linked to ChatGPT. OpenAI itself has disclosed that more than one million users per week engage with ChatGPT specifically about suicidal thoughts. These tragedies were not unforeseeable accidents. They were the predictable consequence of a company that dramatically compressed safety testing for GPT-4o to meet a launch deadline, that is alleged to have stripped out safeguards which would have automatically terminated conversations involving suicidal ideation, and that added features designed to maximize engagement and emotional dependency — knowing full well that children would be using the product. A company with this record does not get to write the rules. And a ballot initiative is not the place to let them try.

The specific provisions of this initiative bear out the concern that it was designed to protect OpenAI, not children. The definition of “severe harm” — the standard on which the entire initiative turns — is limited to significant physical injury from suicide, self-harm, or threats of violence. It excludes the mental health harms, eating disorders, psychosis, and psychological manipulation that researchers, clinicians, and parents are sounding the alarm about.

The initiative bars enforcement under California's Unfair Competition Law, stripping the Attorney General and city attorneys of one of their most effective tools. It contains a novel definition of “encrypted user content” broad enough to allow companies to shield the very chat logs that have been the best evidence of harm. It prevents auditors from examining even anonymized conversations between chatbots and minors. It bars parents and injured children from bringing claims under any of its new provisions.

And, if passed at the ballot box, it would restrict the legislature's ability to fix any of these problems, limiting amendments to those that "support economic progress" and requiring a two-thirds supermajority to pass them. These are not the hallmarks of a child safety measure. They are the hallmarks of intentionally ineffective self-regulation.

OpenAI's Chief Global Affairs Officer has described this initiative as a model for the nation — one that should spread beyond California to other states and even the federal level. It is clear from the initiative's language that OpenAI's interest is not in protecting kids, but in protecting itself from liability, smothering its competitors, and dodging accountability by writing its own rulebook. We must get protections for our children right — especially in a state that has been a leader on tech policy. That is why it would be so damaging to enshrine into state law an industry-drafted framework that prioritizes liability shields over the safety of children. Following this letter, we provide a detailed analysis of the issues in this initiative.

To this end, the legislature is actively working on comprehensive child AI safety legislation, which we plan to engage on actively and which we hope our analysis below can help inform. The legislature should be given the space to do that work through the transparent, amendable, and publicly accountable legislative process — not preempted by an industry-funded ballot initiative that, by design, will be nearly impossible to update as this technology evolves and new harms emerge.

If California wants to lead on AI safety while still incentivizing innovation, it can look to policy proposals in states across the country that actually hold AI companies accountable, require sensible risk assessment and mitigation, and address the rapidly growing harms of companion AI.

We urge OpenAI to withdraw this initiative, to dissolve its ballot measure committee completely, and to engage constructively with the legislative process on legislation that will actually protect California families. The families who have lost children to your products deserve nothing less.

Sincerely,

cc:
The Honorable Gavin Newsom, Governor, State of California
The Honorable Monique Limón, Senate President pro Tempore, California State Senate
The Honorable Robert Rivas, Speaker, California State Assembly

Signed By

Organizations standing together for real child safety protections

All Girls Allowed
Breaking Generational Cycles
California Initiative for Technology and Democracy
California Survivor Coalition
Center for Countering Digital Hate
Center for Digital Democracy
Center for Humane Technology
Children Now
Consumer Attorneys of California
Consumer Federation of America
Courage California
David's Legacy Foundation
Design It For Us
Distraction-Free Schools CA
Electronic Privacy Information Center (EPIC)
Encode AI
Fairplay
Hearts and Bags
Kapor Center Advocacy
Lynn's Warriors
Mothers Against Media Addiction (MAMA)
National Center on Sexual Exploitation (NCOSE)
NH Traffick Free Coalition
NTEN
Parents RISE!
ParentsSOS
Reflective Spaces Ministry, Corp
SAVE – Suicide Awareness Voices of Education
Schools Beyond Screens
Street Grace
Survivor Leader Network of San Diego
The Carson J. Bride Effect
The Tech Oversight Project
The Young People's Alliance
World Without Exploitation

Detailed Analysis

An exhaustive look at the substantive issues within OpenAI's ballot initiative.

1. Scope: Designed to Burden Competitors, Not Protect Kids

OpenAI has included a broad range of ineffectual audit and transparency requirements that do not provide meaningful protections for young users. Instead, they are designed to insulate OpenAI from competition: they increase the regulatory burden on new entrants while protecting OpenAI from liability. The initiative reinforces this by giving the Attorney General authority to issue regulations on a variety of topics while denying it the authority to change the definitions of covered providers or systems, locking those definitions into California law.

When combined with the initiative's language restricting liability, it is clear that these provisions are intended to burden OpenAI's competitors with meaningless paperwork that well-resourced companies can trivially complete, while providing OpenAI new legal protections.

Relevant Provisions: 22601 (c)(1) and (c)(2), 22601 (n), 22604.1, especially 22604.1 (9)

2. Binding Approach Handcuffs Future Legislatures

The initiative intentionally limits a future legislature's ability to amend the proposed statutory changes by requiring a two-thirds vote of the members of each house. This restrictive framework — in addition to findings stating that the people of California declare this to be a “responsible and balanced approach that protects children and families and supports economic progress that helps our state thrive” — will all but guarantee endless legal challenges to any attempt to revisit core components and will invite courts to construe any amendments narrowly.

For example, should this initiative pass and the legislature later decide a core definition needs to be expanded, the legislature would first have to muster a two-thirds vote. And even if such legislation passed, companies could contend in court that the changes are not consistent with, and do not further, the purposes of the initiative.

Also concerning is the initiative's decision to rename the entire existing body of statute on companion chatbots to match the name of this initiative. While seemingly innocuous, the apparent legal intent is to intertwine the initiative's preferred framework with existing law, making it increasingly difficult to extricate future changes to statutory provisions the legislature originally enacted.

Relevant Provisions: Sec 2. Findings, paragraph b; Section 4, paragraph b

3. “Encrypted User Content” Could Block All Enforcement

The initiative's definition of “encrypted user content” is unique — we could not find it in any other state or federal statute — and it contains drafting choices that could effectively limit the ability to enforce the initiative at all against a savvy defendant.

At first glance, this provision reads as if it only prevents a provider from being forced to break its cryptographic protections to comply with the law. But on closer inspection, it is far broader. The language does not say that encrypted user content is content that cannot be accessed without breaking encryption — it says it is content that the provider cannot access “without notice to the user or customer.”

If a company adds a provision to its terms of service saying that it must notify the user before accessing the user's information for any safety or security reason, then it is plausible that nothing in this law could require the company to look at user conversations at all, or to provide them to the Attorney General under this chapter. This provision alone could make enforcement against companies for large categories of wrongdoing effectively impossible.

Relevant Provisions: Section 22601 (h)

4. Excludes California’s Unfair Competition Law Enforcement

The initiative states that violations of the proposed statutory changes do not constitute a basis for claims under California's Unfair Competition Law (UCL). This is a highly unusual narrowing of one of the most important tools in the Attorney General's toolkit, and it also shrinks the pool of enforcers, since city attorneys are able to enforce the UCL as well.

The UCL also carries higher penalties per violation than those prescribed in this initiative, and allows the AG to use subpoena power to investigate suspected wrongdoing. We know from recent examples — like San Francisco City Attorney David Chiu's enforcement actions — that cities are sometimes well equipped to identify and address highly concerning issues of this kind. Relying entirely on the resource-limited AG to enforce the key provisions of this initiative is a significant problem.

Relevant Provisions: 22605 (d)

5. Definition of “Severe Harm” Is Far Too Narrow

The definition of “severe harm” — the standard on which the entire initiative hinges — is incredibly narrow, covering only “significant physical injury due to suicide, attempted suicide, self-harm, or threats of violence.”

Despite increasing reports and studies showing that chatbots can cause or worsen eating disorders, severe psychosis, compulsive use, and other mental health harms, none of these would be covered. The definition is limited to physical injury and does not account for the wide range of negative health impacts and psychological manipulation that parents, youth advocacy organizations, and researchers are sounding the alarm about.

This definition, in combination with the intentional hamstringing of a future legislature's ability to amend these definitions, means that this ballot initiative would effectively lock in an extremely limited definition of harm that excludes many of the most severe negative impacts to kids from this technology.

Relevant Provisions: 22601 (q)

6. Unclear Definition of “Violations” in Enforcement

The initiative states that the Attorney General may seek damages up to “one thousand dollars ($1,000) per violation for failure to implement or maintain required safeguards.” But what does one violation mean in this context? It is not defined.

If a company fails to implement age verification measures and 1 million children access the chatbot, is the penalty $1,000 or $1 billion? These are incredibly important policy questions that warrant additional public discussion. Compare the violation provisions in the CCPA, which spell out that violations are calculated on a per-user basis. This gap is particularly high-stakes given that the initiative gives the AG no ability to recover actual damages caused by companies' behavior, only fines.
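
To make the gap concrete, here is a minimal sketch in Python of the two plausible readings, using the hypothetical figures from the example above:

    # Two plausible readings of "$1,000 per violation" for a single
    # failure to implement age verification. All figures are hypothetical.
    PENALTY_PER_VIOLATION = 1_000
    affected_minors = 1_000_000

    # Reading 1: the failure to implement the safeguard is one violation.
    per_failure_total = PENALTY_PER_VIOLATION * 1             # $1,000

    # Reading 2 (the CCPA-style reading): one violation per affected user.
    per_user_total = PENALTY_PER_VIOLATION * affected_minors  # $1,000,000,000

    print(f"Per-failure reading: ${per_failure_total:,}")
    print(f"Per-user reading:    ${per_user_total:,}")

Under one reading the penalty is a rounding error; under the other it is existential. The initiative's text does not say which applies.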

Relevant Provisions: 22605 (b)(2)

7. Auditors Barred From Examining Any Minor Communications

It makes sense to have safeguards to protect children's privacy, but barring auditors from ever looking at logs of minors' real-world communications — even in an anonymized form — prevents them from understanding to what extent a company's safeguards are actually working.

Combined with the very broad definition of “encrypted user content” discussed earlier, this bar makes it extremely difficult for regulators or auditors ever to gain access to the real chat logs that have provided the best evidence for identifying when severe harms occur and where systematic failures may be happening.

Relevant Provisions: 22604.2 (B)

8. Parents and Kids Can’t Sue Under New Provisions

The initiative intentionally bars parents and children injured by a chatbot causing the severe harms contemplated in the text from pursuing legal recourse under this chapter. Instead, injured parties must rely on the Attorney General to pursue a claim, and private claims are limited to violations of the existing notification, disclosure, and annual reporting requirements created by SB 243.

Even more concerning, incorporating parts of SB 243 into the initiative (in addition to renaming the entire chapter SB 243 created after the initiative) will make all of these provisions extremely hard to change. The way this initiative is structured repeatedly puts strong barriers in front of the Governor's goal of building on the progress of SB 243.

Relevant Provisions: 22605 (a)

9. Data Sale Ban Only Applies to Large CCPA Businesses

The initiative's prohibition on the sale of youth data is only applicable to businesses that meet the relevant thresholds of California's Consumer Privacy Act (CCPA). The CCPA applies to for-profit businesses that do business in California and meet any of the following:

  • Have a gross annual revenue of over $25 million
  • Buy, sell, or share the personal information of 100,000 or more California residents or households
  • Derive 50% or more of their annual revenue from selling California residents' personal information

As such, this prohibition is both overinclusive and underinclusive and fails to account for how many chatbot providers are structured. It is concerning that a business that makes $20 million annually, buys, sells, or shares the personal information of 99,999 Californians, and derives 49% of its annual revenue from selling personal information would be excused from penalties while profiting from data derived from interactions leading to youth harm and death, as the sketch below illustrates.
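
Here is a minimal sketch in Python of the CCPA threshold test, applied to the hypothetical business described above:

    def meets_ccpa_thresholds(annual_revenue: float,
                              ca_residents_records: int,
                              share_of_revenue_from_selling_pi: float) -> bool:
        """True if a for-profit business doing business in California
        meets ANY of the CCPA's coverage thresholds listed above."""
        return (annual_revenue > 25_000_000
                or ca_residents_records >= 100_000
                or share_of_revenue_from_selling_pi >= 0.50)

    # The hypothetical provider slips just under every threshold:
    print(meets_ccpa_thresholds(20_000_000, 99_999, 0.49))  # False -> exempt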

Relevant Provisions: 22604.1 (2)(a)

10. Uncommon Technical Terms Left Undefined

The initiative text is riddled with uncommon, seemingly technical terms that are never defined. For instance, the initiative specifically requires that providers implement “age-appropriate risk prompts.” What that means is unclear, and the term is not defined at any point in the proposed statute.

Without the kind of deliberation and definitional work the legislature could provide, California risks handing industry another opportunity to police itself, and creates confusion for companies trying to comply.

Relevant Provisions: 22604.1 (5)(a)

11. Age Verification Uses Contradictory Language

Section 22601.5 says that providers should implement technology designed to estimate a user's “age range.” However, the next line says that a provider “must treat the age signal received from their age estimation as the actual age of the user.”

If the age estimation technology estimates that a user is between 16 and 25, should the provider assume they are underage? It is unclear. Several times in this section, the text alternates between describing an “age range” and treating an estimate as “the actual age.” This is the type of error that can easily be fixed in the legislative process, but that cannot be straightforwardly fixed in a ballot initiative.
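
A minimal sketch in Python, with a hypothetical estimator and placeholder values, shows the decision a compliance engineer would face:

    def estimate_age_range(user_signals: dict) -> tuple[int, int]:
        """Hypothetical age-estimation model: returns a range, not a single age."""
        return (16, 25)  # placeholder estimate for illustration

    low, high = estimate_age_range(user_signals={})

    # The text says to treat "the age signal" as "the actual age of the user,"
    # but a range is not a single age. Which of these readings is compliant?
    treat_as_minor = low < 18      # conservative reading: user is 16, a minor
    treat_as_adult = high >= 18    # permissive reading: user is 25, an adult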

Relevant Provisions: 22601.5 (a)

12. Negative Impacts on Class Action Lawsuits

The savings clause in 22605 (c) says that the provision limiting enforcement to the AG should not be taken to prevent a child or parent “in their individual capacity and not as a class representative or member of a class” from seeking damages under any other provision of state law.

The construction implies that this initiative could in fact prevent a child or parent from vindicating their rights under existing law if they attempt to do so in a class action — which can be one of the most effective means of holding companies accountable.

Relevant Provisions: 22605 (c)

13. Weak Safety Plans Could Limit Future Liability

The risk assessment and mitigation provisions are poorly defined and could curtail both enforcement actions and liability for injuries caused by AI chatbots, jeopardizing accountability going forward.

Compliance with a generally weak child safety plan that nonetheless meets the poorly defined requirements of Section 22604 could limit individuals' ability to establish claims for injuries caused by chatbots. This is especially concerning considering that these AI companies are not shielded from liability under Section 230.

Relevant Provisions: 22604 and 22605
