Record numbers of businesses using AI-based systems to hire and fire staff
David Sant, commercial solicitor specialising in data and technology at Harper James Solicitors, warns that employers joining the AI revolution need to be increasingly mindful of the laws surrounding workers' rights, as sign-up skyrockets during the pandemic and companies look for remote ways to recruit
In a potentially landmark ruling, judges in Amsterdam ruled that six drivers dismissed by Uber should be given their jobs back.
One of the key issues was whether the drivers - five of whom were British - had their contracts terminated automatically after Uber's AI systems accused them of engaging in fraudulent activity. Uber has also been ordered to pay €100,471 in damages, plus a financial penalty for each day it delays in reinstating the drivers affected by the automated dismissals.
Commenting on the ruling, and its potential repercussions, David Sant, commercial solicitor specialising in data and technology at Harper James Solicitors, warned: "This case is a high-profile example of a growing issue which now exists in the overlap between employment law, workers' rights and data protection.
"The Uber ruling poses an important question: To what extent can companies rely on automated processing of data to make decisions that affect employees and workers? Under the UK GDPR, it is unlikely that you could ever lawfully make an automated decision to end somebody's employment or their contract as a worker."
It is a question likely to be asked many more times in the weeks and months ahead, because the use of AI in employment is predicted to rise still further. Recruitment AI is expected to be a particular growth area. Businesses in the UK first started using facial expression technology alongside artificial intelligence to identify the best candidates in job interviews around December 2019.
Applicants are filmed by phone or laptop while being asked a set of job-related questions, and AI technology is then used to analyse their responses in terms of language, tone and facial expressions. Use has soared since then, particularly during the pandemic, which has reduced businesses' ability to interview applicants face-to-face.
US giant HireVue, one of the leading providers of AI recruitment software, conducted 12 million interviews entirely through automated processes in 2019. Last year that figure rose to 19 million, and experts predict the numbers will only keep climbing. Although the development of AI brings opportunities, it also presents risks, not least when the computer says no to a candidate's application.
"More and more businesses are now looking to introduce AI-based models into their recruitment and general employment processes," Sant commented. "This brings many potential advantages, but there are important steps anyone considering making the switch needs to keep in mind at all stages."
So what are they?
"Companies should understand that collecting and reviewing data about job applicants and employees involves data processing under the GDPR," Sant added.
"And that means that all of the standard GDPR data protection principles apply. For example, you will need to establish a lawful basis for processing data, collect only the information you require for the purpose, and keep the data confidential and secure and for no longer than is necessary for the purpose. And employers should be aware that the individual's rights under the GDPR (e.g. to request copies of their personal data) apply to job applicants and employees just as they do to customers."
Being open and transparent about how you will handle data is also key.
"Businesses will have to explain to the job applicants what data they are collecting, what they are going to do with it, the purposes of the processing, any third parties involved and other transparency information, for example in a privacy policy."
On the question of whether employers can rely on automated decision making for more efficient recruitment, Sant warned, "The guidance from the Information Commissioner's Office is that unless there is a human involved in the decision-making for every applicant, using AI to filter out applicants is unlikely to be legal. In other words, employers should be considering AI-assisted decision-making, rather than AI-only decision-making."
Many businesses are now also using AI to monitor and track employee performance. But those choosing to do so also have to be mindful of laws surrounding data protection.
"All of the same data protection issues apply to automated performance monitoring as they do for automated recruitment," said Sant.
"As an employer, you must establish a lawful basis for the processing, make sure you have given transparency information to employees and must carry out a Data Protection Impact Assessment."
In recent months there has been a lot of coverage detailing fears that bias can creep into AI decision-making.
For example, Amazon withdrew the use of AI in its recruitment process when it was found to favour men, and Twitter came under fire for an algorithm "preferring" white faces. The risk of bias is real, and often arises because the algorithm has been trained on a set of decisions made by humans which were themselves biased, so the algorithm ends up replicating that human bias.
"Biased decisions in recruitment or employment would be breaches of employment law, equalities law and also data protection law," Sant said.
"The GDPR contains various safeguards for individuals to understand how decisions have been made. Individuals can always request transparency information, including information about what data has been collected and processed and for what purposes. In the case of solely automated decision-making, individuals must be informed that you are using their data for solely automated decision-making and they also have a right to meaningful information about the logic and significance of the consequences of the decision. And they will also have a right to request a review of the decision by someone who has the authority to change it."