California Gov. Gavin Newsom signed an executive order on March 30 requiring AI companies seeking to do business with the state to implement policies to prevent the misuse of their technology.
The order directs state agencies to recommend, within 120 days, new certifications for companies seeking to do business with California “to attest to and explain their policies and safeguards” to prevent their technology from being misused, including to distribute illegal content, such as child sexual abuse material.
It also requires companies to ensure their AI models are not used in ways that display “harmful bias” or violate civil rights and civil liberties, such as “free speech, voting, human autonomy, and protections against unlawful discrimination, detention, and surveillance,” the order states.
To counter misinformation, the order instructs the California Department of Technology to develop best-practice guidance on watermarking AI-generated or manipulated images and videos within 120 days.
“I just signed an executive order to actively make sure AI companies working with the state protect privacy and civil liberties,” the governor said in a post on X.
Supply-Chain Risks
The order also directs the state’s chief information security officer to review any companies designated by the federal government as supply-chain risks, and if the designations are deemed improper, issue guidance to ensure state agencies can continue to procure from those companies.
The order comes after the Pentagon issued a formal supply-chain risk designation against AI company Anthropic, barring government contractors from using the firm’s technology in their work for the U.S. military.
Anthropic said on March 5 that it had received a letter from the Department of War notifying it of the designation. The company said it had declined to change the user policy for its AI product, Claude, to remove safety guardrails that prevent its use for mass surveillance and fully autonomous weapons.
A federal judge on March 26 temporarily blocked the Department of War from continuing to designate Anthropic as a supply-chain risk. Anthropic has also filed a separate lawsuit over the designation in the U.S. Court of Appeals for the District of Columbia Circuit. In that case, the company is also seeking an order halting the designation.

Pages from the Anthropic website and the company's logos are displayed on a computer screen in New York on Feb. 26, 2026. (Patrick Sison/AP Photo)
President Donald Trump previously said Anthropic was attempting to “strong-arm” the federal government and officials elected by the American people by dictating its military policy.
Safeguards
Newsom said on March 30 that his order was intended to protect the public from any potential risks arising from the misuse of AI technology.
“California leads in AI, and we’re going to use every tool we have to ensure companies protect people’s rights, not exploit them or put them in harm’s way,” he said in a statement. “While others in Washington are designing policy and creating contracts in the shadow of misuse, we’re focused on doing this the right way.”
Trump signed an executive order in December 2025 that called for the creation of national AI standards and instructed the attorney general to establish an “AI Litigation Task Force” to challenge state AI laws.
Trump said in his order that U.S. dominance in AI would strengthen national and economic security, but excessive state-level regulations are stifling innovation in the country’s AI industry.
In November 2025, the president said on social media that AI investments could help boost the U.S. economy, but that “overregulation by the states” is threatening to undermine that growth.
“We MUST have one Federal Standard instead of a patchwork of 50 State Regulatory Regimes. If we don’t, then China will easily catch us in the AI race,” he said in a post on Truth Social.
The president said that some states tried to “embed DEI ideology into AI models,” producing what he described as “woke AI.” DEI refers to diversity, equity, and inclusion.
Trump signed another executive order in July 2025 targeting “woke AI,” directing federal agencies to procure only large language models that are “truth-seeking” and politically neutral—which the order defines as AI models that “do not manipulate responses in favor of ideological dogmas such as DEI.”
Matthew Vadum and Reuters contributed to this report.