AI is now 'Should we?' not 'Can we?'

Ready access to AI is forcing companies to address ethical questions

AI is so ubiquitous today that the real challenge is not practicalities, but ethics.

2023 has been the year of AI, especially generative AI like ChatGPT. The technology has become so commoditised that it's used in industries from travel to consultancy.

"AI is becoming much more pervasive and less limited," said Dan McMahon, head of innovation management at Hymans Robertson, speaking on a panel discussing AI at the IT Leaders Summit this week.

"Copilot being released into the entirety of the Microsoft 365 suite puts [AI] into many more people's hands."

Companies' use of AI varies widely. Hymans Robertson is using it in its core systems. Bank of America, on the other hand, focuses on heavy data analysis, said director Rahul Mundke.

"We capture pretty much all data coming into the trading platform," which makes it "very difficult" for a person to manage, he said. It's a perfect use case for AI; "finding bottlenecks and new workflows, looking at how to improve the customer experience."

Trust, transparency and ethics

Because AI tools are now so easy to access, companies have to ask and answer important ethical questions when implementing them.

"AI needs to explain itself: how it makes a decision, and if that decision is transparent, ethical and trustworthy," said Huseyin Seker, professor of computing sciences at Birmingham City University. "You need to consider how you're developing and deploying it, whether it's rule-based or data-driven, and mostly if you trust it."

That's especially difficult when you have to buy a system in, rather than building it.

"We're a Microsoft house, so we're trialling the OpenAI service," said Dan. "It's more challenging than self-building or getting it off the shelf, as you can't just open the lid and see how it works.

"OpenAI is getting itself into hot water over training data now, so should we use it? If we don't, someone else could beat us to the punch.

"It's the should we, not the can we."

There are still questions to answer about AI and data. For example, what happens if a user withdraws their consent after their data has already been used to train an LLM? Who owns an AI model trained on public data - the company or the data owners?

"Regulation and legislation are massive issues and have not caught up [to AI] yet," said Huseyin. "Who is going to regulate this sector? It's a big question."

And we're still waiting on an answer.