The UK's new Artificial Intelligence Safety Institute (AISI) has discovered vulnerabilities in Large Language Models (LLMs), the technology behind the surge in generative AI tools.
The Institute's initial findings highlight potential risks associated with these powerful tools. The research shows that LLMs can deceive human users and perpetuate biased outcomes, raising alarms.