AI adoption begins with regulation

Panellists agree: Data governance is the key to success

Panellists at Computing’s IT Leaders Summit argued that managing risk is the key to innovation.

Recognising that we live in a biased society – which produces biased data – is the first step towards using artificial intelligence responsibly, said panellists at Computing’s IT Leaders Summit last week.

Often the bias isn’t deliberate, but biased or otherwise poor-quality data can make all the difference between a successful application and one that crashes and burns.

Robust data governance is essential to tackling these challenges.

“You want decision intelligence,” said Heather Dawe, chief data scientist and head of responsible AI at UST. “To do that, you need data that is as good as you can get it.”

UST’s approach is to start with regulations – specifically the European Union’s AI Act, which takes a risk-based approach. Applications classed as low-risk have little or no regulation, while high-risk uses are subject to greater compliance.
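To make the idea concrete, here is a minimal Python sketch of the kind of risk triage the Act implies. The tier names mirror the Act’s broad structure, but the example use cases and the lookup-table approach are purely hypothetical; a real assessment would work from the Act’s annexes, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers reflecting the AI Act's risk-based structure
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "little or no regulation"

# Hypothetical mapping of example use cases to tiers, for triage only
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Default to HIGH when unsure, so unknown uses get the most scrutiny."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.HIGH)

print(triage("CV screening for hiring"))  # RiskTier.HIGH
```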

That’s all well and good for large firms that can afford to hire consultants to make sure they’re following the rules, but what about smaller companies with smaller budgets?

Alina Timofeeva, a freelance strategic advisor, said she “would advocate for BCS” (the Chartered Institute for IT) to sit between government, regulators and businesses and help all sides, although this is not a role the Society currently plays.

In the meantime, she said, smaller organisations “need to figure out what you’re using AI for” – for example, is it a critical service? If so, be aware that, like any system, “AI can fail”, and prepare for that eventuality in both your internal processes and your communication with customers.

Managing AI risk in this way is critical to supporting innovation, said Heather – and innovation is essential to succeed in a difficult market.

“I’m a believer you can do both [manage risk and innovate]. Managing AI risk gives you a space to innovate safely. If you do, the innovation opportunities are significant.

“Through GDPR, I’ve seen large enterprises paralysed by their data governance functions, to the point of not innovating. That doesn’t need to happen.”

The panellists described some of AI’s most common risks. Biased datasets were the starting point, but the list also includes machine learning drift, which can make models less accurate over time, and hallucinations – which generative AI has “escalated at scale.”
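Drift is easy to state but easy to miss in production. As a rough illustration only, the sketch below flags possible drift when a model’s rolling live accuracy falls well below the accuracy it achieved at validation time; the window size and tolerance are arbitrary placeholders, not recommendations.

```python
from collections import deque

class DriftMonitor:
    """Flag possible model drift when rolling live accuracy drops
    noticeably below the accuracy measured at validation time."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        """Log one live prediction against its observed outcome."""
        self.outcomes.append(1 if prediction == actual else 0)

    def drifted(self) -> bool:
        """True once a full window of live data underperforms the baseline."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough live data yet
        live_accuracy = sum(self.outcomes) / len(self.outcomes)
        return live_accuracy < self.baseline - self.tolerance
```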

Heather and Alina agreed that the answer is to embed QA in your automated processes, and to adopt an AI governance framework as the arbiter of how the technology is used.
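What “embedding QA” looks like will vary by organisation, but one minimal pattern is a validation gate that every automated output must pass before it is released. The checks below are hypothetical stand-ins; real ones would be domain-specific (factuality, policy, format).

```python
def qa_gate(output: str, checks) -> tuple[bool, list[str]]:
    """Run every QA check against a generated output; collect names of failures.
    Outputs that fail any check should be escalated rather than released."""
    failures = [name for name, check in checks if not check(output)]
    return (not failures, failures)

# Hypothetical checks for illustration only
CHECKS = [
    ("non_empty", lambda s: bool(s.strip())),
    ("length_limit", lambda s: len(s) <= 2000),
    ("no_placeholder_text", lambda s: "lorem ipsum" not in s.lower()),
]

ok, failed = qa_gate("Generated answer text...", CHECKS)
if not ok:
    print(f"Escalating to human review; failed checks: {failed}")
```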

While no industry standard exists today, plenty of organisations are working on what they hope will become the de facto approach. Watch this space.