President Biden issues executive order to mitigate AI risks
The order constitutes the most robust set of regulations introduced worldwide to ensure the safety and responsible development of AI, according to White House officials
In a significant move to address growing concerns surrounding AI technology, President Biden yesterday signed a sweeping executive order requiring companies to report on potential AI-related risks.
The order seeks to enhance transparency, labelling, and safety standards for AI systems, marking a pivotal step in governing this transformative technology.
Under the new executive order, companies developing AI systems will be required to report any risk that their technology could be exploited by countries or terrorists to create weapons of mass destruction.
Additionally, AI developers will be compelled under the Defense Production Act to share the results of safety tests with the US government before these AI systems can be made available to the public.
The measures are being described by administration officials as the most robust set of regulations introduced worldwide to ensure the safety and responsible development of AI.
"AI is all around us. Most of it is making our lives better," President Biden stated during a reception at the White House.
"One thing is clear: To realize the promise of AI and avoid the risks, we need to govern this technology…There's no way around it. It must be governed."
President Biden further indicated that he would be meeting with Senate Majority Leader Chuck Schumer and a bipartisan group on Tuesday to emphasise the need for congressional action to complement the executive order.
One key provision of the executive order instructs the Commerce Department to develop standards for watermarking AI-generated content. The department will issue guidance on labelling content as AI-generated, helping users identify deepfakes, whose potential to interfere with democratic processes is one of the most frequently voiced concerns about AI.
The executive order also addresses the training data used for large AI systems and directs an evaluation of how government agencies collect and use commercially available data, particularly where it includes personal identifiers.
The order also directs agencies to set standards for testing AI systems for a range of risks, including biological, chemical, nuclear, radiological, and cybersecurity threats. In particular, new standards for biological synthesis screening will be developed to help mitigate the risk of AI being used to develop bioweapons.
Another significant part of the order directs the use of AI to identify and fix vulnerabilities in critical infrastructure software. It also mandates that cloud-service providers and resellers promptly notify the government when foreign customers use their services to train large AI systems.
While the executive order has been praised by some for its commitment to AI safety and security, it has faced criticism from parts of the tech industry.
NetChoice, a tech industry trade group, has labelled the order an overreach, warning that the regulatory measures could deter developers from training new AI systems.
European officials have been making strides in developing their own AI regulations, some of which have proposed banning specific AI applications.
UK Prime Minister Rishi Sunak is hosting an AI safety summit later this week at Bletchley Park, which US Vice President Kamala Harris will address.
Sunak has stressed that only governments can address the risks posed by AI, highlighting the danger that the technology could be misused to create weapons or could escape human control.