California Gov. Gavin Newsom signed three bills Sept. 17 to address altered videos and other digitally created content, known as deepfakes, that could affect candidates and election campaigns.
The laws are meant to remove deepfake content of officials or candidates from major online platforms, “increase accountability, and better inform voters,” according to a news release from the governor’s office.
“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation—especially in today’s fraught political climate,” Newsom said in a statement.
Assembly Bill 2655, introduced by Assemblyman Marc Berman (D-Menlo Park), requires online companies with more than 1 million California users to label or remove “materially deceptive” media portraying an official or a candidate for elective office during the period from 120 days before an election to 60 days after. It also requires companies to create a mechanism to report such content.
The law excludes content that is considered satire or parody.
The new law additionally gives candidates, officials, and law enforcement the authority to seek injunctive relief for violations.
Berman said the law was needed to protect campaigns from the use of artificial intelligence (AI) content that could confuse voters.
“AI-generated deepfakes pose a clear and present risk to our elections and our democracy,” he said in a statement. “Advances in AI over the last few years make it easy to generate hyper-realistic yet completely fake election-related deepfakes, but AB 2655 will ensure that online platforms minimize their impact.”
One group that sponsored the bill said the text of the new law was carefully constructed to navigate First Amendment concerns.
“AB 2655’s approach is narrowly tailored and does not extend the law to hot button controversies or inflammatory claims—it does not ask social media platforms to adjudicate controversial opinions post by post,” the nonprofit California Initiative for Technology and Democracy said in a legislative analysis. “It simply stops the use of obviously, demonstrably untrue and provably false content meant to impermissibly influence our elections at peak election times.”
Opponents argued that First Amendment protections extend to political speech and suggested the new law should have focused only on posts that are libelous or fraudulent.
“The law has long made clear that the First Amendment was intended to create a wide berth for political speech because it is the core of our democracy,” ACLU California Action said in a legislative analysis. “Unfortunately, the provisions of AB 2655 ... threaten to intrude on those rights and deter that vital speech.”
Political Advertising
Assembly Bill 2839, introduced by Assemblywoman Gail Pellerin (D-Santa Cruz), regulates political advertising by prohibiting the distribution of ads that contain deceptive or manipulated content “with malice” during the period from 120 days before an election to, in some cases, 60 days after.

“‘Malice’ means the person, committee, or other entity distributed the audio or visual media knowing the materially deceptive content was false or with a reckless disregard for the truth,” according to the law.
Pellerin said in a statement, “With fewer than 50 days until the general election, there is an urgent need to protect against misleading, digitally altered content that can interfere with the election.”
An analysis for the Senate Judiciary Committee said the law addresses the rapid development of AI technology that made the production of synthetic content—including audio, video, images, and text—cheaper, easier, and more difficult to detect.
Some supporters said the new law is needed to safeguard electoral integrity.
“In a few clicks, using current technology, bad actors now have the power to create a false image of a candidate accepting a bribe, a fake video of an elections official ‘caught on tape’ saying that voting machines are not secure, or a robocall of ‘Governor Newsom’ incorrectly telling millions of Californians their voting site has changed,” supportive organizations, including the labor group SEIU California, said in a legislative analysis.
Opponents argued the law is too ambiguous and places the burden on those who distribute the content but are unaware of the state’s prohibitions.
“We recognize the complex issues raised by potentially harmful artificially generated election content,” the nonprofit Electronic Frontier Foundation said in a legislative analysis. “However, this bill’s ‘exceptions’ for only some types of republishers, and by requiring them to publish a disclaimer, does not reflect the full First Amendment protection due the republication of speech pertaining to matters of public interest by those not connected with the creation of the offending material.”
AI Disclosures
Assembly Bill 2355, authored by Assemblywoman Wendy Carrillo (D-Los Angeles), mandates that election advertising content created or substantially altered with AI must disclose how the material was generated. The law also authorizes the state’s Fair Political Practices Commission to enforce violations.
“The rapid improvements in AI and Large Language Models have made it easier to create convincingly fake images, videos, and sounds,” Carrillo said in a statement. “Voters must be informed when generative AI is used in political advertising to substantially alter media or create misleading content.”
She acknowledged the need to respect the First Amendment but said new technologies are requiring lawmakers to rethink how best to protect society.
“Free speech and political expression are a cornerstone of our democracy, but we cannot lose sight of our humanity amid the advancement of artificial intelligence,” Carrillo said. “This is a balanced policy that makes California the first state to include artificial intelligence under its campaign transparency rules.”
One group supporting the bill said the law will improve transparency and help educate voters.
“‘Deepfake’ content can be extremely misleading and can negatively impact elections,” the Los Angeles Area Chamber of Commerce said in a legislative analysis. “The potential threat posed by manipulated media to future elections’ integrity is more significant now than it has ever been. Action must be taken in order to ensure election integrity.”
The governor also signed bills Sept. 17 to protect digital likenesses by outlawing AI cloning of actors without their consent.
He said in his press release that California is navigating a balance of regulating AI companies while maintaining the state’s competitive advantage in AI innovation.