Google, Anthropic announce measures to combat election disinformation
Anthropic's AI will steer people towards reputable sources while Google will launch a 'prebunking' campaign
Anthropic has announced a new initiative aimed at combating election misinformation ahead of the 2024 US presidential election, while Google is preparing an initiative to help voters recognise political misinformation when they see it.
Anthropic, the AI company in which Amazon is investing $4 billion, says it is experimenting with technology designed to identify when users of its genAI chatbot Claude inquire about political subjects. Claude will redirect those users to "authoritative" sources of voting information.
"Over half of the world's population will vote this year with high profile elections taking place around the world, including in the United States, India, Europe and many other countries and regions. At Anthropic, we've been preparing since last July for how our AI systems might be used during elections," the company said in a blog post.
Claude is designed to be a "helpful, honest, and harmless" AI option. However, Anthropic acknowledges that Claude's training data lacks ongoing updates, leading to gaps in its knowledge across various topics, including politics.
For this reason, Anthropic wants to proactively steer users away from its systems when they ask about topics where "hallucinations" would be unacceptable, such as election-related queries.
Prompt Shield is the latest innovation from Anthropic, using AI rules to detect when US-based users ask Claude about politics, elections and related topics.
Instead of providing direct responses, Claude will show users a pop-up offering to redirect them to TurboVote, a trusted resource provided by the non-partisan organisation Democracy Works.
"We've had Prompt Shield in place since we launched Claude - it flags a number of different types of harms, based on our acceptable use policy," an Anthropic spokesperson told TechCrunch.
"We'll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations."
While Prompt Shield is currently in a limited testing phase, Anthropic says it is fine-tuning the technology for broader deployment.
The company says it has consulted with various stakeholders, including policymakers, civil society organisations and election-specific consultants, in the development of Prompt Shield.
Anthropic's initiative comes amidst a broader industry trend to combat election interference and misinformation.
Google's 'prebunking' initiative
Last week, Google told Reuters it plans to launch an anti-misinformation campaign across five countries in the EU ahead of the European parliamentary elections in June.
The company will put out ads on social media that use "prebunking" techniques, designed to inoculate people against false messaging so they will recognise real misinformation more readily.
"Prebunking is the only technique, at least that I've seen, that works equally effectively across the political spectrum," Beth Goldberg, head of research at Google's internal Jigsaw unit, told the news organisation.
The ads will be rolled out in Belgium, France, Germany, Italy and Poland.
France last week exposed a major Russian disinformation effort aimed at undermining Western support for Ukraine.
In September, Google announced that political ads using AI must prominently disclose any synthetic alterations to imagery or audio. The company specified that AI-generated election advertisements on YouTube and other Google platforms that depict manipulated people or events must carry a clear disclaimer placed where users are likely to notice it.
OpenAI recently announced measures to prevent ChatGPT users from engaging in activities such as impersonating political candidates, misrepresenting voting processes, or discouraging voter participation. It also rolled out three tools in 2023, aimed at empowering users to verify image authenticity, understand image usage and access metadata, as part of its efforts to combat misinformation online.
Meta has also prohibited political campaigns from using genAI tools, including its own, for advertising purposes across its platforms.
In August, the US Federal Election Commission initiated proceedings to potentially regulate AI-generated deepfakes in political advertisements leading up to the 2024 election. Additionally, various states in the US have deliberated on or enacted legislation pertaining to deepfake technology.