House of Lords report on AI urges more positive vision for LLMs
Calls for action on copyright issues in AI development
The House of Lords Communications and Digital Committee has issued a stark message to the UK government, warning that it risks falling behind in the 'AI goldrush.'
A new report, released today, stresses the need for the government to refocus its AI strategy.
Rather than solely prioritising hypothetical existential threats, the report (based on evidence gathered from a range of stakeholders, including tech companies, academics and government) recommends a shift towards addressing more immediate concerns such as copyright infringement and misinformation, which pose tangible risks to society.
One of the key recommendations is for the government to shift towards a more positive vision for Large Language Models (LLMs), focusing on maximising social and economic benefits while mitigating near-term security risks.
The Committee underscored the importance of supporting AI start-ups, enhancing computing infrastructure, and investing in skills development to ensure the UK remains competitive on the global stage.
One of the issues highlighted in the report is the tension between 'closed' ecosystems (where access to AI technologies is restricted) and 'open' approaches (prioritising transparency and accessibility).
The report also raised concerns about potential market dominance by a small number of tech giants, urging policymakers to prioritise open competition and transparency so that innovation is not stifled.
"One lesson from the way technology markets have developed since the inception of the internet is the danger of market dominance by a small group of companies," Baroness Stowell, chair of the Committee, said.
"The Government must ensure exaggerated predictions of an AI-driven apocalypse, coming from some of the tech firms, do not lead it to policies that close down open-source AI development or exclude innovative smaller players from developing AI services. We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrabbling to keep up with rapidly developing technology."
Copyright concerns
The report also called for measures to address copyright concerns, which have been a hot topic over the last year.
It cited ongoing legal actions, including Getty Images' claim against Stability AI in the UK and The New York Times' lawsuit against OpenAI and Microsoft in the US, as examples of the challenges posed by the use of copyrighted content in AI training.
"Many LLM developers have used extensive amounts of human-generated content to train their models," the report says.
"We heard that much of this had taken place without permission from or compensation for rightsholders. Many felt that allowing such practices was morally unfair and economically short sighted."
"The Financial Times said there were 'legal routes to access our content which the developers … have chosen not to take'. DMG Media said its news content was being used to train models and fact check outputs, and believed the resulting AI tools 'could make it impossible to produce independent, commercially funded journalism'."
OpenAI said it "respected the rights of content creators and owners", adding that its tools help creative professionals to innovate.
Similarly, Meta, Stability AI and Microsoft argued that restricting access to data could result in underperforming or biased models, ultimately reducing the benefits for users.
The Committee said it was disappointed that the UK government could not articulate its current legal understanding of the copyright position.
"We heard the government was 'waiting for the courts' interpretation of these necessarily complex matters.'"
The Committee recommends the development of a voluntary code by the Intellectual Property Office to empower creators to exercise their rights regarding the use of their content in AI training.
This code would ensure transparency regarding the use of web crawlers to acquire data for generative AI training and help reduce the risk of market dominance by large tech firms.