California AI Safety Bill sparks uproar in Silicon Valley
It would require AI companies to create a 'kill switch' to disable powerful AI models in case of emergencies
California is at the centre of a heated debate over AI safety, with a proposed law sparking outrage from tech giants and cautious optimism from safety advocates.
The bill, authored by state senator Scott Wiener, would require AI companies to establish rigorous safety frameworks, including a "kill switch" to disable powerful AI models in case of emergencies.
Dubbed the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, the bill specifically targets "extremely large" AI systems with the potential to generate disastrous instructions.
It proposes a new government agency to oversee AI development and prevent companies from creating AI with "hazardous capabilities" like starting wars or aiding cyberattacks.
The state attorney general would be empowered to take legal action against violators.
The bill has garnered support from prominent AI researchers who advocate for guardrails on powerful AI. However, tech companies are threatening to relocate if the legislation passes.
They argue the bill is overly restrictive, stifling innovation and burdening developers with unnecessary compliance costs.
They also fear the "kill switch" requirement could hinder the development of open source AI models, where code is freely available for public use and modification.
"If someone wanted to come up with regulations to stifle innovation, one could hardly do better," Andrew Ng, a computer scientist who led AI projects at Alphabet's Google and China's Baidu, told the Financial Times.
"It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate," he added.
Meta, a major tech player, has publicly criticised the bill.
Arun Rao, a product manager at Meta, slammed the proposal on social media platform X, calling it "unworkable" and a potential death knell for open-source AI in California.
Senator Wiener, however, maintains the legislation is a "light-touch" approach designed to ensure AI development prioritises safety. He argues for proactive measures to mitigate potential dangers before they escalate, emphasising the importance of responsible innovation.
Adding a layer of complexity to the debate is the involvement of the Center for AI Safety (CAIS), a non-profit organisation that co-sponsored the bill.
CAIS is run by computer scientist Dan Hendrycks, who also serves as safety adviser to Elon Musk's AI start-up, xAI.
CAIS's funding sources and lobbying activities have raised eyebrows among critics who question the speed with which the bill progressed through the Senate.
Despite the controversy, some experts believe the California bill, with potential amendments, represents a necessary step towards responsible AI development. Professor Rayid Ghani of Carnegie Mellon University suggests focusing on specific use cases, as with the EU regulations, as opposed to simply regulating model development.
The bill was passed by California's Senate last month, and the state Assembly is set to vote on it in August.
The California bill comes on the heels of the European Union's recent legislation aimed at regulating AI.
Initially focused on limited-purpose AI used in tasks like CV screening, the EU law pivoted to address the emergence of powerful general-purpose AI models like ChatGPT. The EU regulations require developers, including those from the US, to disclose details about the vast amounts of data used to train these models and to comply with EU copyright laws.
The US federal government and the UK have also introduced initiatives aimed at regulating AI development and use.
In October last year, President Biden signed an executive order requiring companies developing AI systems to report potential risks associated with their technology, particularly risks that could be exploited by hostile states or terrorists to create weapons of mass destruction.
Earlier this month, current and former employees of prominent AI companies, including OpenAI, DeepMind and Anthropic, signed an open letter calling on AI companies to commit to a set of principles around safety and transparency.