Public view of AI as 'scary' and 'worrying' shows need for more engagement by policymakers and business, says BSA
People see the potential, but they have 'a lot of worries' over who controls it and who benefits, says British Science Association head
According to a recent survey by 3M, 73% of people in the UK believe AI will bring significant change. However, the most common sentiments associated with the technology are "scary" and "worrying", followed by "unsure."
This highlights the need for transparent communication and public education to address these concerns, said Hannah Russell, CEO of the British Science Association (BSA), the charity with a remit to ensure that all of society is included in science.
"It shows we've got a job to do around public engagement with AI."
Speaking at a recent event organised by 3M and the BSA and hosted by accelerator Digital Catapult, Russell said there is an imperative to address public concerns and build trust, especially considering the potential impact of AI on people's lives and livelihoods.
"People can definitely see benefits in using AI to support public services, but there are lots of things that people are worried about. They're worried about losing their jobs, they're worried about diversity within the data and the security of the data. We can't ignore those things when we think about potentially exciting developments."
Russell continued: "Consistently, it doesn't matter what the emerging technology is, the areas that people want to know about are who governs the technology, who benefits from the technology, and is the technology safe and secure? And people have worries on all of those counts with AI."
An economic imperative
Engaging the public in discussions about AI is not just the "right thing to do" but also an economic imperative, said Russell. And it's vital to involve the 27% who have no interest in AI or believe it is irrelevant to their lives. Policymakers, developers and proponents of AI comprise an unrepresentative sliver of the public, a problem likely to exacerbate existing imbalances to the detriment of society as a whole unless regulation is carefully crafted in a spirit of transparency and inclusion.
"When you bring together the public with researchers, that's when the magic happens," Russell said.
"Communities can often bring insights into things that researchers just haven't thought about. That's going to lead to regulations that are more inclusive and durable, without stifling innovation."
A perception problem
The other gap that needs bridging if AI is to benefit the many and not just the few, and part of the economic imperative for inclusion, is in education, from which the technologists of the future will emerge. Here, too, there is a lot to be done.
"Teachers have an enormous number of barriers to face at a primary level. Ninety-five percent of primary teachers don't have a background in science, and that makes it quite challenging. There's a lot of content to get through. Confidence is an issue, time is an issue, resources are an issue."
The knock-on effect of these issues is a devaluation of STEM subjects, Russell said: "Only 8% of young people aged 14 to 18 can think of a scientist who looks like them. That's a massive problem for us."
To address issues of perception, access and inclusion, the BSA runs initiatives such as the annual British Science Week, and Sciencewise, a public engagement programme to inform policies involving science and technology.
But a news story published today shows how much of a mountain there is to climb on transparent use of AI. The Guardian found that only nine algorithmic systems have so far been submitted to a mandatory public register, "with none of a growing number of AI programs used in the welfare system, by the Home Office or by the police among them."