SAS: If we can't bridge the AI trust gap, we're going nowhere
Josefin Rosén, principal trustworthy AI specialist, on the importance of clarity
The ideal of trustworthy AI is one that few would argue with. Who wants untrustworthy AI after all?
But getting there is another matter, and in the meantime there's an interesting dynamic at play. On the one hand, there's the opportunity, or perhaps the FOMO. Few organisations have clear-cut use cases for AI yet, but they don't want to be left behind once those emerge and are adopting the technology at many levels. On the other hand, it's fast-moving, immature, hype-driven tech, with risks almost as big as the potential rewards.
And let's face it, there's a lot that could go wrong. As we all know by now, in its generative form AI is prone to hallucinations. It produces plausible-looking output that's not necessarily based in fact. What's more, it is still extremely difficult to work out why deep learning models produce the output that they do, even when they are open source. At the same time, the race for market share is incentivising AI companies to be less transparent rather than more so. It's not yet something you'd want to bet the farm on.
Then there are the many AI regulations emerging around the world, including the EU AI Act, which comes into force in July. Last year, a study by Stanford University found that none of the major LLM foundation models complied with the then-draft EU AI Act.
Josefin Rosén, principal trustworthy AI specialist at analytics vendor SAS, says that while challenging for vendors, the emerging regulations are creating a market for trustworthy AI that will allow organisations to experiment with confidence.
There's much more activity in the sphere of ethical AI than headlines about job cuts would lead us to believe, she said.
"You hear about ethics teams being laid off, but you don't hear about those who are still there or who are being hired."
The AI trust gap
"Case in point, SAS has a global data ethics practice coordinating the company's trustworthy AI efforts globally as well as advising on internal research and development, and it recently appointed a global head of AI governance, who will be leading a new offering of AI governance services directly to customers. The company also has an inter-disciplinary executive oversight committee that meets regularly with senior management "to discuss ethical dilemmas".
"We believe this is core to being able to breach this trust gap, with being able to realise the value potential of AI that's being inhibited by a lack of trust from the citizens," said Rosén. "If we can't bridge that gap and then nothing is going to happen, we're going nowhere."
SAS's strategies are also influenced by the emerging regulations.
"With the EU AI Act now regulating LLMs, we're not going into that space," she said. "There is no full transparency yet in large language models."
Instead, SAS will leave that to partner Microsoft, whose LLMs are among many that integrate into its Viya AI platform.
With various AI regulations being introduced globally, compliance could be a challenge for multinational organisations due to the divergences in these rules and a lack of ready expertise. Software providers will therefore inevitably play a significant role in ensuring compliance for customers, in much the same way that cloud providers ensure their services adhere to rules across different jurisdictions.
As one such company, SAS recently announced the release of model cards, which act as "nutrition labels" for AI models, keeping organisations informed about the health of their models in terms of accuracy, model drift and fairness.
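To make the idea concrete, here is a minimal sketch of what such a "nutrition label" might record; the field names, thresholds and the ModelCard class are illustrative assumptions, not SAS Viya's actual schema.

```python
# Hypothetical model card: tracks accuracy, drift and fairness for one model.
# All names and thresholds are illustrative, not a real SAS API.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelCard:
    model_name: str
    owner: str                      # who is accountable for the model
    last_evaluated: date
    accuracy: float                 # held-out test accuracy at last evaluation
    drift_score: float              # e.g. population stability index vs. training data
    fairness_gap: float             # e.g. difference in outcome rates across groups
    sensitive_variables: list[str] = field(default_factory=list)

    def health_summary(self) -> str:
        """Flag the model for review if drift or fairness thresholds are breached."""
        issues = []
        if self.drift_score > 0.2:
            issues.append("input drift")
        if self.fairness_gap > 0.05:
            issues.append("fairness gap")
        status = "needs review: " + ", ".join(issues) if issues else "healthy"
        return f"{self.model_name} ({self.last_evaluated}): {status}"


card = ModelCard(
    model_name="credit-risk-v3",
    owner="risk analytics team",
    last_evaluated=date(2024, 6, 1),
    accuracy=0.87,
    drift_score=0.24,
    fairness_gap=0.03,
    sensitive_variables=["age", "postcode"],
)
print(card.health_summary())  # credit-risk-v3 (2024-06-01): needs review: input drift
```

The point of the label is less the exact metrics than that someone is named as accountable and the numbers are refreshed regularly, so drift and fairness problems surface before they become incidents.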
As for developers of the models, the decades-old friction between AI devs, who want to be left in peace to solve technical problems, and ethicists, who want to ensure their creations do no harm, is starting to lessen, Rosén said. Developers are becoming more aware of the importance of trustworthiness, in part because of regulation but also because of public scepticism and even fear of their products.
Just as DevSecOps introduces security into code at the start rather than leaving it to the end, trust needs to be embedded into AI models at all stages of the product lifecycle. It's an idea that's gaining traction.
"We need a very clear explanation about who's responsible for the model, what is the data and what are the privacy risks that we have in the data right now?" Rosén said. "What are the sensitive variables, and what are we doing about it? We need to be able to map that throughout the whole process? It's really important."
There is obviously some way to go in this transparency drive, as the recent open letter from AI company employees demanding the right to raise concerns shows, but ultimately AI companies, and those who deploy their products, will need to build and maintain trust if they are to have a future.