CMA starts review of AI impact on competition and consumers
Report coming in September
The Competition and Markets Authority (CMA) has launched an initial review examining the competition and consumer protection implications of the development and use of AI foundation models.
Foundation models, which include large language models (LLMs) and generative AI, are trained on massive amounts of data to understand language and generate human-like responses.
The CMA says AI advancements have raised a number of concerns, in areas such as safety, privacy, security, intellectual property and human rights.
The preliminary inquiry will focus on understanding the evolution of foundation models and examining the opportunities and risks that could arise in terms of competition and consumer protection.
The CMA plans to use the results of the investigation to produce a series of guiding principles. These principles are intended to safeguard competition and consumer interests as foundation models continue to evolve.
The regulator will collect evidence prior to releasing its report and is currently welcoming submissions from stakeholders and other pertinent parties. Those who wish to submit evidence, whether organisations or individuals, can do so by 2nd June 2023. The CMA aims to issue a report outlining its findings in September.
"AI has burst into the public consciousness over the past few months but has been on our radar for some time," Sarah Cardell, CEO of the CMA, said in a statement.
"It's a technology developing at speed and has the potential to transform the way businesses compete as well as drive substantial economic growth.
"It's crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection."
AI concerns drive regulation
The CMA's study launch comes days after Geoffrey Hinton, a renowned figure in the field of AI, resigned from Google citing concerns about the potential dangers of the technology he helped pioneer.
Dr Hinton said he had new fears about the technology and wanted to express them publicly. He also raised concerns about the proliferation of fake information, videos and photos created using AI tools.
The attention surrounding ChatGPT has led to intense competition among tech companies to create similar AI tools this year, with some experts worrying that safety concerns are being pushed aside.
Earlier this year, in March, the government released a white paper laying out a strategy for regulating AI technology.
That same month, several prominent tech industry figures signed an open letter calling for a pause of at least six months in the development of the most powerful AI systems, citing risks to society and humanity.
Signatories included Elon Musk and Apple's co-founder, Steve Wozniak, who cautioned that the rush to create AI systems was spiralling out of control.
Last month, Google CEO Sundar Pichai also expressed his worries about the adverse implications of generative AI technology and advocated for fresh regulations to govern its usage.
This week, US Vice President Kamala Harris held meetings with Google and Microsoft, as well as AI startups OpenAI and Anthropic, to discuss the responsible development of AI technology.
The companies were invited based on President Joe Biden's expectation that tech firms must ensure "their products are safe before making them available to the public".
US Federal Trade Commission (FTC) Chair, Lina Khan, recently announced that the regulator is closely monitoring how AI technology might be exploited to breach antitrust and consumer protection laws.