Artificial intelligence companies are pushing back against California state lawmakers’ demand that they install a “kill switch” designed to mitigate potential dangers posed by the new technology — with some threatening to leave Silicon Valley altogether.

Scott Wiener, a Democratic state senator, introduced legislation that would force tech companies to comply with regulations fleshed out by a new government-run agency designed to prevent AI companies from allowing their products to gain “a hazardous capability” such as starting a nuclear war.

Wiener and other lawmakers want to install guardrails around “extremely large” AI systems that have the potential to spit out instructions for creating disasters — such as building chemical weapons or assisting in cyberattacks — that could cause at least $500 million in damages.

The measure, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices, including for even more powerful models that don’t yet exist.


The state attorney general would also be able to pursue legal action in cases of violations.

But tech firms are threatening to relocate away from California if the legislation is signed into law.

The bill was passed last month by the state Senate.

A State Assembly vote is scheduled for August. If the bill passes, it goes to the desk of Gov. Gavin Newsom.

A spokesperson for the governor told The Post: “We typically don’t comment on pending legislation.”

A senior Silicon Valley venture capitalist told the Financial Times on Friday that he has fielded complaints from tech startup founders who have mused about leaving California altogether in response to the proposed legislation.

“My advice to everyone that asks is we stay and fight,” the venture capitalist told FT. “But this will put a chill on open source and the start-up ecosystem. I do think some founders will elect to leave.”

Tech firms’ biggest objection to the proposal is that it would stifle innovation by deterring software engineers from taking bold risks with their products, out of fear of hypothetical scenarios that may never come to pass.

“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” Andrew Ng, an AI expert who has led projects at Google and Chinese firm Baidu, told FT.


“It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”

Arun Rao, lead product manager for generative AI at Meta, wrote on X last week that the bill was “unworkable” and would “end open source in [California].”

“The net tax impact by destroying the AI industry and driving companies out could be in the billions, as both companies and highly paid workers leave,” he wrote.

Prominent Silicon Valley tech researchers have expressed alarm in recent years over the rapid advancement of artificial intelligence, saying that the consequences for humans could be dire.

“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” AI theorist Eliezer Yudkowsky, who is viewed as particularly extreme by his tech peers, said in an interview last summer.

Yudkowsky echoed concerns voiced by the likes of Elon Musk and other tech figures who advocated a six-month pause on AI research.

Musk said last year that there’s a “non-zero chance” that AI could “go Terminator” on humanity.


Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT.

Earlier this year, European Union lawmakers gave final approval to a law that seeks to regulate AI.

The law’s early drafts focused on AI systems carrying out narrowly limited tasks, like scanning resumes and job applications.

The astonishing rise of general purpose AI models, exemplified by OpenAI’s ChatGPT, sent EU policymakers scrambling to keep up.

They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more.

Developers of general purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems as well as follow EU copyright law.

Some AI uses are banned because they’re deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces.

With Post Wires