BBC to explore generative AI in journalism
But it is blocking data scraping by OpenAI
The BBC, the UK's largest news organisation, has announced its intention to explore the application of generative AI in journalism, but it will not allow OpenAI to scrape its content.
In a blog post published on Thursday, Rhodri Talfan Davies, director of nations at the BBC, unveiled the BBC's guiding principles for exploring the potential of generative AI across various domains, including journalism, archiving and personalised experiences.
According to Davies, the technology offers opportunities to enhance the value the BBC delivers to both its audiences and society as a whole.
"Innovation has always been at the heart of the BBC. From the very first radio broadcasts in 1922 to colour television in the 1960s and the rapid development of our online and mobile services over the last 25 years - innovation has driven the evolution of the BBC at every step," Davies said.
"We believe Gen AI could provide a significant opportunity for the BBC to deepen and amplify our mission, enabling us to deliver more value to our audiences and to society. It also has the potential to help our teams to work more effectively and efficiently across a broad range of areas, including production workflows and our back-office."
Over the next few months, the BBC intends to experiment with generative AI in diverse areas, including "journalism research and production, content discovery and archive, and personalised experiences."
The BBC has also committed to collaborating with technology firms, other media organisations and regulators to ensure generative AI is developed responsibly and securely, with a particular emphasis on maintaining trust in the media.
The blog post also sets out three principles that Davies says will guide the BBC's approach to working with generative AI:
- The BBC will consistently act in the best interests of the public.
- The organisation will prioritise talent and creativity while respecting the rights of artists.
- The BBC will maintain a commitment to openness and transparency regarding AI-generated content.
Beeb's AI crawler ban
While the BBC explores applications of generative AI, it has moved to block web crawlers operated by organisations such as OpenAI and Common Crawl from accessing its websites.
The decision aligns the BBC with other prominent news organisations, such as CNN, The New York Times and Reuters, which have also blocked web crawlers from accessing their copyrighted content.
The BBC says that the unauthorised scraping of its data for training generative AI models does not serve the public interest.
Instead, it says it is seeking to establish a more structured and sustainable approach through collaborative discussions with technology companies.
"That's why we have taken steps to prevent web crawlers like those from Open AI and Common Crawl from accessing BBC websites," Davies stated.
The BBC is also examining the potential impact of generative AI on the broader media industry.
"For example, how the inclusion of Gen AI in search engines could impact how traffic flows to websites, or how the use of Gen AI by others could lead to greater disinformation," Davies explained.
"Throughout this work, we will harness the world class expertise and experience we have across the organisation, particularly in BBC R&D and our Product teams who are already exploring the opportunities for public media."