Microsoft has released the source code for new tools and datasets to audit AI-powered content moderation systems.
Large language models (LLMs) are a popular way to train AI systems, but they are not without risks. Because they are trained on enormous volumes of data from the internet, they have a propensity to 'learn' ...