OpenAI expands lobbying team to shape AI regulation

Global affairs team has grown more than tenfold since the start of 2023

OpenAI, creator of ChatGPT, is expanding its lobbying team to influence legislation aimed at regulating AI technology.

The San Francisco-based startup told the Financial Times that its global affairs team has grown from just three people at the beginning of 2023 to 35 today, with plans to reach 50 by the end of 2024.

The growth spurt comes as governments around the world grapple with how to govern AI development and deployment, decisions that could affect OpenAI's ability to innovate and commercialise its technology.

"We are not approaching this from a perspective of we just need to get in there and quash regulations...because we don't have a goal of maximising profit; we have a goal of making sure that AGI benefits all of humanity," Anna Makanju, OpenAI's vice-president of government affairs, told the FT.

OpenAI's global affairs team is the company's most international unit, strategically placed in regions with advanced AI policy discussions, including Belgium, Ireland, France, India, Singapore, Brazil, the UK and the US.

However, it lags behind tech giants like Meta and Google in lobbying muscle.

In the first quarter of 2024, Meta spent a record $7.6 million lobbying the US government, compared with OpenAI's $340,000.

David Robinson, head of policy planning at OpenAI, said the company had just three people handling public policy at a time when ChatGPT had 100 million users.

"It was literally to the point where there would be somebody high level who would want a conversation, and there was nobody who could pick up the phone," he added.

OpenAI's lobbying efforts focus on shaping AI legislation. For example, the company took part in discussions around the EU's AI Act, one of the most comprehensive pieces of AI regulation to date.

It argued that some of its models shouldn't be classified as "high risk" under the Act, potentially avoiding stricter regulations.

The company has also opposed granting regulators access to pre-training data, the vast datasets used to train large language models.

OpenAI argues that post-training data, which is used to fine-tune models for specific tasks, is a better indicator of potential risks.
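
To make that distinction concrete, here is a minimal, purely illustrative Python sketch of how the two kinds of data differ in shape. The record formats below are assumptions for illustration only, not OpenAI's actual schema or pipeline:

```python
# Illustrative only: contrasts the *shape* of pre-training data with
# post-training (fine-tuning) data. Formats are assumed for this sketch,
# not taken from OpenAI's actual internal schema.

# Pre-training data: huge volumes of raw, unlabelled text.
pretraining_corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "In 1969, Apollo 11 landed on the Moon.",
    # ...billions more documents: web pages, books, code, and so on.
]

# Post-training (fine-tuning) data: curated prompt/response pairs that
# steer an already-trained model towards a specific task or behaviour.
finetuning_examples = [
    {
        "prompt": "Summarise this contract clause for a layperson.",
        "response": "It means either party can cancel with 30 days' notice.",
    },
    {
        "prompt": "Translate 'good morning' into French.",
        "response": "Bonjour.",
    },
]

# The regulatory argument in brief: the fine-tuning set reveals the
# intended application (and hence its risks) far more directly than
# the undifferentiated pre-training corpus does.
print(f"{len(pretraining_corpus)} raw documents vs "
      f"{len(finetuning_examples)} task-specific examples")
```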

The EU ultimately included OpenAI's most advanced models under the AI Act's purview and granted regulators access to training data for high-risk systems in specific cases.

Following the EU legislation, OpenAI hired prominent lobbyists such as Chris Lehane, who previously worked for the Clinton administration and Airbnb.

In March, Reuters reported that OpenAI had hired former Republican senator Norm Coleman to lobby for the organisation. Coleman's hiring was disclosed by his law firm, Hogan Lovells, in a lobbying registration filing.

In November 2023, the company hired Chan Park, formerly a senior director at Microsoft, to lead its US and Canada policy and partnerships team.

OpenAI's lobbying efforts haven't been without criticism. Some industry figures argue the company is shifting from specialists in AI policy to generic tech lobbyists, mirroring the tactics of Big Tech companies.

OpenAI, however, maintains that its lobbying efforts aim to achieve "safe and broadly beneficial" AI development while fostering innovation.

Former NSA head joins OpenAI board

OpenAI has also announced the appointment of Paul M. Nakasone, a retired US Army general and former head of the National Security Agency (NSA), to its board of directors.

Nakasone, who led the NSA from 2018 until February 2024, will now play a crucial role in OpenAI's safety and security initiatives.

"General Nakasone's unparalleled experience in areas like cybersecurity will help guide OpenAI in achieving its mission of ensuring artificial general intelligence benefits all of humanity," said Bret Taylor, Chair of OpenAI's Board.

The addition of Nakasone to the board follows recent significant safety-related departures from OpenAI. Notably, co-founder and chief scientist Ilya Sutskever and researcher Jan Leike, both integral to the company's safety initiatives, have left the organisation.

Sutskever was involved in the controversial firing and subsequent reinstatement of chief executive Sam Altman in November 2023, while Leike expressed concerns on social media that safety practices had taken a back seat to product development.