AI-generated content amplified harmful narratives during global elections

Report highlights the growing risk that AI-driven content poses to public trust in information

Image: AI risks eroding trust in the information environment

The Alan Turing Institute has released a comprehensive report revealing how AI-generated content amplified conspiracy theories and harmful narratives amid a year marked by major global elections. Researchers are now urging policymakers, regulators, and social media platforms to take immediate action.

"As 2024 draws to a close, more than 2 billion people in at least 50 countries will have voted in the biggest election year in history," the report by the Alan Turning Institute's Centre for Emerging Technology and Security (CETaS) states.

"At the start of the year, there were significant concerns over the proliferation of new generative AI models, which allow users to create increasingly realistic synthetic content. There has been persistent speculation about how these tools could disrupt key elections this year, many of which will have major consequences for international security."

The researchers say they did not find any conclusive evidence that AI-generated content directly altered the outcomes of any major elections, including the recent US presidential election.

However, the findings highlight the growing risk that AI-driven content poses to public trust in information.

According to the report, AI technology risks "eroding trust in the information environment, allowing harmful narratives to thrive" – a concern heightened by the growing accessibility of generative AI tools that enable disinformation campaigns on a large scale.

The report outlines how AI-powered tools have been used in widespread disinformation efforts, with AI bot farms imitating genuine voter behaviour and disseminating false information, including conspiracy theories and fake endorsements from public figures.

CETaS researchers documented several high-profile incidents where AI-generated content went viral, stoking fears of targeted voter manipulation and undermining the integrity of election discourse.

To mitigate these risks, the researchers propose a multi-pronged approach. This includes measures to make it harder to create and spread disinformation, such as embedding digital watermarks in government-produced content so its authenticity can be verified.

They also propose limiting the reach of disinformation by developing tools to detect deepfakes and enforcing stricter regulations on social media platforms.

Other recommendations include educating the public about the dangers of disinformation, teaching them how to critically evaluate online content, and granting researchers and civil society organisations access to social media platform data to better understand and combat disinformation campaigns.

Additionally, the researchers call for mandatory programmes in schools to teach students about the risks of AI-generated misinformation and how to spot it.

Some prominent AI companies, such as OpenAI, Google DeepMind, and Meta AI, have implemented new safeguards to prevent misuse of their generative tools, but concerns remain. These companies have instituted policies to limit non-consensual mimicry of public figures in an effort to mitigate the risk of harmful content.

However, UKTN recently reported that London-based AI startup Haiper, backed by Octopus Ventures, lacks these stringent guardrails, potentially enabling the spread of damaging content.

"Given the signs that AI-enabled threats began to damage the health of democratic systems globally this year, complacency must not creep into government decision-making. Ahead of upcoming local, regional and national elections – from Australia and Canada in 2025 to Scotland in 2026 – there is now a valuable opportunity to reflect on the evidence base and identify measures to protect voters," the researchers state.