Man Goes on Hunger Strike in San Francisco Calling for Stop to AI Development
Guido Reichstadter conducts a hunger strike in front of the San Francisco headquarters of AI startup Anthropic on Sept. 10, 2025. (Lear Zhou/The Epoch Times)
By Lear Zhou
9/15/2025 | Updated: 9/15/2025

SAN FRANCISCO—Guido Reichstadter is conducting a hunger strike in front of the headquarters of artificial intelligence (AI) company Anthropic on Howard Street in San Francisco, calling for an end to advanced AI development.

The 45-year-old former Florida jewelry business owner said he put his 20-year career on hold to move to San Francisco in 2022 to warn people about the danger of advanced AI.

On the ninth day of his protest, Reichstadter, who is living on zero-calorie electrolytes and vitamins, told The Epoch Times that he had delivered a letter to Anthropic CEO Dario Amodei on Sept. 2, asking him to stop developing the technology and to do everything in his power to stop the global AI race.

“If he was unwilling to do that, then to meet with me face-to-face as a human being and explain why he feels he has the right to put our society in danger,” Reichstadter said.

Anthropic, along with Google, Meta, OpenAI, and xAI, is one of the leading companies aiming to develop artificial general intelligence (AGI), a form of AI that would have human-like intelligence and could act autonomously and pursue goals.

“Right now, there’s hundreds of billions of dollars of capital, thousands of the world’s best engineers in companies like this one and around the world trying to develop these systems. This is a situation which has never existed before in my life,” Reichstadter said.

Modern AI, usually trained on huge amounts of data, has already outperformed humans in certain fields.

However, these capabilities also create opportunities for bad actors. In a report issued on Aug. 27, Anthropic said that cybercriminals and other malicious actors had exploited its product Claude to carry out large-scale hacking and extortion operations.

Anthropic didn’t reveal the names of the 17 victim organizations, but said they included health care companies, emergency services, and government and religious institutions.

In Reichstadter’s opinion, current advanced AI models are unpredictable and uncontrollable. He said, “If they release models which are capable of designing bio weapons, even with safeguards to prevent users from accessing that ability, we should expect that users, or some bad actors, will find a way to use that, exploit it.”

He said that “this company is not fulfilling its duty to make sure that their society understands the risk.”

Reichstadter called for a stop to the global race to develop artificial general intelligence.

“We have an obligation to stop this path of development before it crosses that point of no return,” he said.

‘Potential of Civilizational Destruction’


Tech entrepreneur Elon Musk also warned in 2023 that when advanced AI reaches a certain point, “it has the potential—however small one may regard that probability, but it is nontrivial—it has the potential of civilizational destruction.”

On the “All-In” podcast on Sept. 14, Musk predicted, “I think that we might have AI smarter than any single human at anything as soon as next year.”

According to a February 2023 report by OpenAI, as AGI progress continues, “the world could become extremely different from how it is today, and the risks could be extraordinary.”

“A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too,” the report reads.

Gary Marcus, a scientist and professor emeritus at New York University, said that while advanced AI currently doesn’t work very well, that doesn’t mean it isn’t dangerous.

“Now, there might be a future AI that might be dangerous ... because it’s so smart and wily that it outfoxes the humans,” Marcus said in an interview with IEEE Spectrum, a leading engineering magazine. “But that’s not the current state of affairs.”

Reichstadter said no one can predict when the first AGI will make its debut. “In fact, it may be a breakthrough that’s sitting in the minds of one of these engineers right now,” he said. “That’s why the urgency is incredible, because the consequences are so large.”

Reichstadter suggested a global accord or regulation among the world’s superpowers to prohibit the development of advanced AI.

“We don’t know how long it will take to reach that agreement,” said Reichstadter. “If we’re unable to do that before it’s built, then the negative consequences become inevitable.”

