Scores of lawmakers from across the political spectrum in the UK have backed a campaign for Westminster to pass binding legislation on the development of powerful artificial intelligence (AI) systems.
ControlAI, a nonprofit that works to reduce the risks of AI, said in a Dec. 11 blog post that the number of lawmakers backing its campaign had surpassed 100 this week, calling it “the world’s first such coalition to recognize the threat posed by AI.”
Supporters include members of Parliament, members of the House of Lords, and lawmakers from the devolved parliaments and assemblies in Scotland, Wales, and Northern Ireland. Among them are Viscount Camrose, former minister for artificial intelligence, and Lord Browne of Ladyton, former secretary of state for defense.
ControlAI founder and CEO Andrea Miotti told The Epoch Times that, in just a year, the group had reached this milestone and that “more are joining every week.”
“Despite the repeated warnings by AI experts, most lawmakers still have not heard about superintelligence and its dangers,” he said.
“AI companies are lobbying hard to slow down public awareness about the extinction risk posed by superintelligence, but the tide is turning.
“We are honoured to work with more than 100 cross-party lawmakers who clearly understand the severity of the problem and want to take action.”
Narrow, or weak, AI describes the technology that exists today, but tech giants are working to develop artificial general intelligence, or strong AI, which could match or exceed human cognitive abilities across all domains. Artificial superintelligence, if realized, would vastly exceed human capabilities.
Miotti told The Epoch Times in an emailed statement that frontier AI companies “are pouring billions of dollars to reach superintelligence as fast as possible, and some experts expect this might happen in the next 3-5 years.”
He added that “companies like Anthropic and OpenAI are focusing on AIs that can improve and develop other AIs”—sometimes known as recursive self-improvement—“with the clear goal of snowballing towards ever-more-powerful superintelligent AI that acts outside human control.”
Sam Altman, CEO of OpenAI, the tech company that develops ChatGPT, said in a September interview with Axel Springer Global Reporters Network, of which Politico Magazine is a part, that “by 2030, if we don’t have models that are extraordinarily capable and do things that we ourselves cannot do, I’d be very surprised.”
Warnings From Experts
Veterans in the field of AI have sounded the alarm over the development of advanced AI systems, including Geoffrey Hinton, the pioneering computer scientist considered one of the three godfathers of AI.
Hinton said in August that humanity risks being sidelined or replaced by advanced AI.
“The risk I’ve been warning about the most … is the risk that we’ll develop an AI that’s much smarter than us, and it will just take over,” said the Nobel Prize winner for physics and former Google executive. “It won’t need us anymore.”
MIT physics professor Max Tegmark and AI safety expert Roman Yampolskiy have also raised the alarm about the existential risk posed by the development of advanced AI.
However, some, including Marc Andreessen, the billionaire cofounder of venture capital firm Andreessen Horowitz, believe that AI will advance humanity; Andreessen has said that attempts to decelerate it “will cost lives.”
“We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone – we are literally making sand think,” he wrote in his 2023 essay “The Techno-Optimist Manifesto.”
Targeted Regulation
ControlAI’s campaign statement says that “specialised AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security.”
“The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems,” it adds.
Miotti said that to address the threats, “countries should prohibit the development of superintelligence, AI that can autonomously compromise national security, escape human oversight, and upend international stability. This is a key first step to protect each country’s national sovereignty.”
He told The Epoch Times that countries should also start working on an international agreement to ensure advancement of superintelligent AI “is halted everywhere, and to monitor and restrict precursor technologies to superintelligence.”
“Targeted regulation on a technology that undermines national security, like superintelligence, is just common sense,” Miotti said. “We can prohibit superintelligence and restrict its precursors, while leaving untouched many benign specialised AI applications.”
A UK Department for Science, Innovation, and Technology (DSIT) spokesperson told The Epoch Times that “AI is already regulated in the UK, with a range of existing rules already in place.”
“We have been clear on the need to ensure the UK and its laws are ready for the challenges and opportunities AI will bring and that position has not changed,” the spokesperson said in an emailed statement.
Correction: A previous version of this article incorrectly stated that Geoffrey Hinton and Yoshua Bengio back ControlAI’s global campaign to mitigate the risks of AI. Rather, they are mentioned in the linked campaign page as signatories of the Center for AI Safety statement. The Epoch Times regrets the error.