Scraping web data is an easy way to train AI models, but content owners object to the practice. With no legislative end in sight, Reddit's deal with OpenAI marks a way forward, says Travers Smith's James Longster.
Generative AI systems need huge volumes of data to train the large language models underpinning their offerings. This isn't news to anyone with an understanding of the technology, but a critical qu...