
In brief
A coalition of advocacy groups asks OpenAI to withdraw a California AI safety ballot initiative.
Critics say the measure would limit legal accountability and weaken protections for children.
While OpenAI has paused the campaign, the coalition says the company retains control of the initiative ahead of key deadlines.
A coalition of advocacy groups is urging ChatGPT developer OpenAI to withdraw a California ballot initiative that critics say could weaken protections for children and limit legal accountability for AI companies.
In a letter sent to OpenAI on Wednesday, reviewed by Decrypt, the group argues that the measure would lock in narrow child-safety protections, limit families’ ability to sue, and restrict California’s ability to strengthen AI laws in the future.
The letter, signed by more than two dozen organizations, including the AI policy non-profit Encode AI, the Center for Humane Technology, and the Electronic Privacy Information Center, asks OpenAI to dissolve its ballot committee and step back from the proposal while lawmakers work on legislation.
“The main demand here is for OpenAI to withdraw from the ballot,” Adam Billen, co-executive director of Encode AI, told Decrypt.
The dispute centers on a proposed “Parents & Kids Safe AI Act,” a California ballot initiative backed by OpenAI and Common Sense Media that would establish rules for how AI chatbots interact with minors, including safety requirements and compliance standards.
In the letter, the groups argue that those rules fall short. They say the measure defines harm too narrowly, limits enforcement, and restricts families’ ability to bring claims when children are harmed.
But OpenAI controls the actual ballot initiative, Billen said.
“OpenAI has the power to withdraw it or put the money in for signatures. All of the legal authority rests in their hands,” he said. “They have not actually withdrawn the initiative from the ballot. This is a common tactic in California, where you put an initiative up and put money in the committee.”
The letter points to the initiative’s definition of “severe harm,” which focuses on physical injury tied to suicide or violence, excluding a range of mental health impacts that researchers and families have raised as concerns.
It also highlights provisions that would bar parents and children from bringing claims under the initiative and limit enforcement tools available to state and local officials.
Another concern centers on how the proposal treats user data. The groups argue that its definition of encrypted user content could make it harder to access chatbot conversations that have served as key evidence in recent lawsuits.
“We read that as an attempt to block families from being able to disclose their dead children’s chat logs in court,” Billen said.
The letter also warns that the measure could be difficult to revise if passed. It would require a two-thirds vote in the legislature to amend and tie future changes to standards such as supporting “economic progress,” which advocates say could limit lawmakers’ ability to respond to new risks.
Billen said the initiative remains a factor in ongoing negotiations in Sacramento, even as OpenAI has paused its efforts to qualify it for the ballot.
“They have $10 million in the committee, and then you say to the legislature, if you don’t do what we want, we’ll put the money in and get the signatures and put this on the ballot, and if it passes, it will override whatever the legislature does,” he said. “So essentially, what’s happening now is they’re trying to steer and control what state legislators do through the use of the initiative as a threat they’re leaving on the table.”
OpenAI is not the only company facing scrutiny over chatbot-related harms. Earlier this month, the family of Jonathan Gavalas sued Google, claiming that Gemini fed a delusion that escalated into violence and ultimately his suicide. Billen, however, said OpenAI’s approach reflects a broader pattern in the tech industry.
“The lobbying playbook that’s getting used on AI from these big guys in particular—the Googles, the Metas, Amazons—is the same strategy that was used previously on other tech issues,” he said.
For now, the coalition is focused on getting OpenAI to withdraw the measure and allow lawmakers to move forward through the legislative process.
“It’s really important, particularly for the companies that are putting that technology out there, to not be the ones who are writing the rules that regulate them, because that’s not meaningful protections,” Billen said.
OpenAI did not immediately respond to Decrypt’s request for comment.

