New Online Safety Act measures come into force
Time for risk assessments has passed; platforms must act on online harms
Ofcom’s risk assessment deadline has passed, and online platforms must now take action to protect users from illegal content and activity occurring on their platforms. Significant fines await those who fail.
Today marks a significant milestone in efforts to make the internet safer for users in the UK as key provisions of the Online Safety Act come into effect.
The deadline for carrying out illegal harms risk assessments has expired, and online platforms must now take action to protect users from illegal content and activity occurring on their platforms.
Ofcom has also launched its latest enforcement programme, which will assess industry compliance.
Speaking on a panel discussing the intersection of online safety and privacy at the recent IAPP Data Protection Intensive event, Jon Hingham, Online Safety Policy director at Ofcom, set out what this would mean in practice and how the regulator has tried to balance safety and privacy.
“In our efforts to implement the regulatory regime, we focus on striking a balance between, on one hand, achieving robust protections whilst on the other hand, ensuring that regulation doesn't have a chilling effect either on privacy rights or freedom of expression or on competition in the market,” Hingham said.
“Firstly, we set out a clear expectation that online service providers should operate clear, accessible and easy-to-use mechanisms for reporting illegal content. People are often deterred from reporting illegal content because they find it hard to navigate reporting systems.
“Secondly, we set out obligations which should have the effect of ensuring that all service providers resource their content moderation teams adequately. We want content moderators resourced and trained so that they can accurately recognise illegal content.
“Thirdly, illegal content should be promptly removed.”
According to a statement from Ofcom this morning, the regulator will be assessing platforms’ compliance with these new illegal harms obligations under the OSA, and will launch targeted enforcement action if necessary.
Tackling CSAM
Child sexual abuse material (CSAM) is an early priority for enforcement of the Act. File-sharing and file-storage providers have been “put on notice” that they will shortly receive formal information requests about the measures they have in place, or will soon have in place, to tackle CSAM, and that they will be required to submit their illegal harms risk assessments to Ofcom.
Ofcom has the power to levy significant fines if a platform fails to co-operate, including being able to issue fines of up to 10% of turnover or £18 million – whichever is greater – or to apply to a court to block a site in the UK in the most serious cases.
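For readers wondering how the “whichever is greater” cap works in practice, a minimal sketch follows; the turnover figures are invented purely for illustration and do not refer to any real company.

```python
# Illustrative only: how the "whichever is greater" penalty cap works.
# Turnover figures below are made-up examples.
def max_osa_fine(turnover_gbp: float) -> float:
    """Return the statutory cap: 10% of turnover or £18m, whichever is greater."""
    return max(0.10 * turnover_gbp, 18_000_000)

print(max_osa_fine(50_000_000))     # £18m applies (10% would only be £5m)
print(max_osa_fine(1_000_000_000))  # £100m applies (10% of turnover exceeds £18m)
```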
Ofcom is working towards an additional consultation on further codes and measures in Spring this year. The consultation will include proposals in the following areas:
- blocking the accounts of those found to have shared CSAM;
- use of AI to tackle illegal harms, including CSAM;
- use of hash-matching to prevent the sharing of non-consensual intimate imagery and terrorist content; and
- crisis response protocols for emergency events (such as last summer’s riots).
The IAPP panel last week discussed the second of these because of its potential impact on privacy. Jon Hingham said:
“There is a strong case that to moderate content effectively at scale, service providers need to use proactive tech. But set against that, there are risks posed by some forms of proactive tech to privacy rights. Where proactive tech is not sufficiently accurate, it could result in the take-down of legitimate content, which would have a chilling effect on freedom of expression. And historically, proactive tech has been better at detecting some forms of content than others.
“We have made a number of recommendations around areas where we think service providers should be using proactive tech, such as the use of hash-matching to detect known child sexual abuse material. We will be talking in the spring consultation about more targeted measures for proactive tech, but we’re careful only to make recommendations in relation to proactive tech when we’re confident that tech exists which is accurate, effective and free from bias.
“We also engage closely with the ICO before publishing any proposals in that area. Proactive tech is better at protecting users from some forms of content than others.”
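For readers unfamiliar with the technique Hingham refers to, hash-matching compares a fingerprint of an uploaded file against a list of fingerprints of known illegal material. The sketch below is purely illustrative: it uses an exact SHA-256 digest and made-up names, whereas deployed systems typically rely on perceptual hashes (such as PhotoDNA-style hashes) that also catch near-duplicates, with hash lists supplied by trusted bodies such as the Internet Watch Foundation.

```python
# Purely illustrative sketch of hash-matching; names and values are hypothetical.
import hashlib

# Hypothetical set of digests of known illegal material (placeholder value only).
KNOWN_ILLEGAL_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_illegal_content(path: str) -> bool:
    """Flag an uploaded file whose digest appears in the known-hash list."""
    return file_sha256(path) in KNOWN_ILLEGAL_HASHES
```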
Impact of proactive tech on privacy
Lorna Christie is a Principal Policy Adviser on Online Safety at the ICO. She expressed concern about the impact of using machine learning and AI to make assessments about content or users. These tools allow profiling on a large scale and will likely be making automated decisions about users and their access to services, which comes with risk.
“The first point is that we would be expecting services to be carrying out the data protection impact assessment, to fully assess the risks associated with the use of those tools and to understand what those are and put in place measures to mitigate those risks.”
“The second point about proactive tools is that they’ve got the potential to bring together a wide range of information about users and rapidly analyse that information. The data that’s used in those tools must be proportionate and necessary to the aim that the service has in mind.
“I think fairness is also a key issue. If you’ve got proactive tech tools which are consistently making incorrect judgments about users, then that’s not likely to be treating users’ personal data fairly. Tools must be sufficiently statistically accurate for their purposes.
“The final thing that I would mention in terms of data protection with proactive tech approaches is that there is also the potential that these tools are going to be making solely automated decisions about users which have a legal or similarly significant effect on them. That engages Article 22 of data protection law [GDPR].
“A lot of those decisions, and the impact that they’ll have on users, are going to be context dependent. But I think services need to be looking at the context in which they might be taking those sorts of decisions and understanding what the impact would be on users, because it is going to vary. For example, banning a user on one service might not have that legal or similarly significant effect, but banning a user on a different service, or banning a different user, would.”
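To put Christie’s point about statistical accuracy into rough numbers, a back-of-the-envelope sketch follows. The volumes and rates are illustrative assumptions, not figures from Ofcom, the ICO or the panel, but they show why even a small error rate can affect large numbers of users at scale.

```python
# Back-of-the-envelope sketch of why statistical accuracy matters at scale.
# All numbers are illustrative assumptions, not figures from the article.
daily_items_reviewed = 100_000_000   # items scanned by a proactive tool per day
false_positive_rate = 0.001          # 0.1% of legitimate items wrongly flagged
prevalence_of_violating = 0.0005     # share of items that actually violate rules

violating = daily_items_reviewed * prevalence_of_violating
legitimate = daily_items_reviewed - violating
wrongly_flagged = legitimate * false_positive_rate

print(f"Violating items per day: {violating:,.0f}")            # ~50,000
print(f"Legitimate items wrongly flagged per day: {wrongly_flagged:,.0f}")  # ~100,000
# Even a 0.1% false positive rate wrongly flags roughly 100,000 legitimate
# items a day here, which is why accuracy, fairness and human review matter.
```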