IT Essentials: Time to take the deepfake fight to Big Tech

Speak softly and carry a big stick

With most deepfakes coming from overseas, legislation can only have a limited impact until we crack down on the companies actually hosting AI content.

James Cleverly's remarks about the potential for AI-generated deepfakes to be used for electoral manipulation stuck out to me this week.

This is, of course, already happening. Politicians around the world are the subject of pretty convincing AI fakery, much of it coming from nation states. Increasingly, though, deepfakes are being democratised, with criminals offering as-a-service operations at affordable prices.

I don't envy the government having to handle this area. It isn't an issue that can be solved by more legislation (though what we do have, which only bans sexually explicit deepfakes, could do with an amendment); instead, it's going to require both carrot and stick.

The carrot

Let's start with the carrot. Diplomacy is the name of the game here. Not between nation states, even though the Home Secretary focused on state-generated misinformation. Rather, the government will need to put in some serious work to woo tech firms like Meta, X and Google, which host the majority of deepfake content.

There's a rising tide of sentiment and legislation against Big Tech, but attempts to pursue and punish these companies take inordinate amounts of time and money. At the same time, they've proven completely unwilling to engage with governments. Making serious efforts to change that, without becoming a supplicant, is the first step.

And the stick

The stick is finally cracking down on the difference between platform and publisher (you can be both, by the way). It is truly ridiculous that, in 2024, social media companies still refuse to take responsibility for content they host, insisting they are a passive distribution platform like an ISP.

These companies have clear editorial views on what is and isn't allowed, casting doubt on that claim. Content creators are regularly demonetised on YouTube. Meta cracked down on Covid conspiracy theorists, and X - when it was Twitter - de-platformed Donald Trump. All solid, necessary moves - and also editorial decisions.

You can't do those things and claim not to be a publisher, even if you overtly kill off your news product (like Meta did last week). That these companies, which have more control over public access to information than most governments, can still attempt to do so shows the extent of their legal power.

I talked to Deryck Mitchelson, global CISO at Check Point, while writing this editorial. He favours a "two-pronged approach" to tackle deepfakes, the first being an industry-standard mechanism of tagging AI-generated content (the EU's AI Act attempts this), and the second for governments to work with tech giants to make sure they are held accountable.

He also pointed out social media's inherent aversion to controlling anything that drives engagement:

"The problem as I see it is that our social platforms have been built to monetise user interaction from content, with their AI algorithms often amplifying similar messaging, whether fake or not. It is morally correct for the tech giants to reduce the sharing, impact and harm of deepfakes, but not necessarily commercially sound to do so. That is the dilemma."

Where potential harm butts up against cold commercial reality, we cannot rely on companies - any company - to self-regulate. That is when governments need to step in.

Recommended reads

Get your eyeballs over to our massive analysis of 2024 tech trends, courtesy of more than 170 UK IT leaders. We've dug into your opinions on AI, platform consolidation, ESG initiatives and more - essential reading if you want to see which way the industry is heading.

Also this week, John Leonard talked to SAP and its customers about the company's controversial decision to de-prioritise on-prem; I discussed AI and quantum computing in critical infrastructure with National Grid CIO Sarah Milton-Hunt; and, in preparation for next week's International Women's Day, Penny Horwood reported from an AWS event about women in cyber - and got her hands on CISO Barbie.