1,000 researchers, CEOs: 'pause development of AI models bigger than GPT-4'
1,100 people, including Elon Musk, Steve Wozniak and Stuart Russell, sign open letter
Open letter warns of risks to humanity caused by emergent behaviours as large AI models tend towards artificial general intelligence (AGI)
More than 1,000 scientists, academics, authors and researchers have signed an open letter urging AI labs to pause development of "human-competitive" AI systems.
Notable signatories include Elon Musk, Apple co-founder Steve Wozniak, professor Stuart Russell, co-founder of the Berkeley Center for Human-Compatible AI, Sapiens author professor Yuval Noah Harari, US politician Andrew Yang, engineers working at DeepMind, CEOs of AI startups, and representatives of governing bodies and think-tanks.
The open letter, titled Pause Giant AI Experiments, was drawn up by the Future of Life Institute, a non-profit focused on technological risk. It claims that competitive pressures are driving AI development at such a pace that it risks running out of control, with disastrous consequences.
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?" the letter says.
"Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?"
It argues that the breakneck developments of recent months have occurred without sufficient oversight or consideration of harms. It says that AI labs and independent experts should come together urgently to develop and implement a set of verifiable, shared safety protocols for advanced AI design and development. AI developers should also work with policymakers to "dramatically accelerate" the development of AI governance systems.
This would not imply a pause in all AI research, the Future of Life Institute emphasises, just the "dangerous race to ever-larger unpredictable black-box models with emergent capabilities."
Emergent capabilities are unpredictable properties that arise from interactions within large complex systems. In a research paper, OpenAI itself identifies such risky emergent behaviours in its large models, including "the ability to create and act on long-term plans, to accrue power and resources ('power-seeking'), and to exhibit behaviour that is increasingly 'agentic'," meaning the system starts to pursue goals it was not designed for and which were not present in its training regime.
In a recent blog post, OpenAI also said that as narrow, specialised AI systems become more general, "at some point, it may be important to get independent review before starting to train future systems."
The signatories concur: "We agree. That point is now."
The letter continues: "Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."
The open letter, which had gathered 1,125 signatures at the time of publication, concludes that, properly regulated, AI could be hugely beneficial.
"Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt."