EU set to ban AI use for 'indiscriminate surveillance'
The draft proposal would also ban algorithms that judge people's trustworthiness based on their social behaviour
The European Commission is poised to outlaw the use of AI systems for 'indiscriminate surveillance', or for ranking people's social behaviour.
The new rules are contained in a draft regulation set to be proposed by the European Commission, the executive body of the bloc, in the coming weeks.
According to a copy of the draft [pdf], organisations operating in the EU will not be allowed to use AI systems that track individuals in physical environments or aggregate data from other sources.
Algorithms that judge people's trustworthiness based on their social behaviour or predicted personality traits will also be banned.
The draft also proposes to ban systems that are deployed to exploit information about groups of people.
Under the rules, special authorisation will be required for the use of biometric identification systems in public spaces.
High-risk AI applications will undergo a detailed review before organisations are allowed to use them.
Examples of high-risk AI include recruitment algorithms, crime-predicting algorithms, systems deployed for evaluating creditworthiness, and those used for establishing priority in the dispatching of emergency services. They also include systems deployed as safety components in essential public infrastructure networks, including roads and the supply of electricity, water, and gas.
In such cases, member states will appoint assessment bodies to ensure that such systems are trained on unbiased data sets and have adequate human oversight.
Companies that violate the rules will face a range of punishments, including fines of up to four per cent of their global revenue - the same as the maximum penalty for violating the GDPR.
There are some exemptions in place, however, including the use of AI exclusively for military purposes, as well as for safeguarding public security. This may continue to protect the use of AI in mass surveillance by law enforcement agencies.
"Some of the uses and applications of artificial intelligence may generate risks and cause harm to interests and rights that are protected by Union law," the legislation ' s authors wrote in the draft.
"Such harm might be material or immaterial, insofar as it relates to the safety and health of persons, their property or other individual fundamental rights and interests protected by Union law."
In addition to setting rules to govern AI use, the draft also proposes creating a European Artificial Intelligence Board to help the commission decide which AI systems count as 'high-risk'.
The Board would include a representative from each EU member state, a representative of the European Commission, and the European Data Protection Supervisor, and would recommend changes to the list of prohibitions.
Commenting on the draft regulation, European policy analyst Daniel Leufer told the BBC that the definitions were very open to interpretation, and that the proposals should be "expanded to include all public sector AI systems, regardless of their assigned risk level".
"This is because people typically do not have a choice about whether or not to interact with an AI system in the public sector," he added.