Microsoft partners with StopNCII to scrub revenge porn from Bing
But deepfakes remain a challenge
Microsoft has announced a partnership with StopNCII to proactively remove harmful intimate images and videos from its Bing search engine.
The aim is to combat the spread of revenge porn and protect user privacy, according to the company.
The surge in deepfakes – AI-generated content that creates highly realistic but entirely synthetic images and videos – has presented a new frontier of abuse. Advances in AI have expanded the opportunities for revenge porn, allowing perpetrators to create fake explicit images that resemble real people.
Victims, most of whom are women, find themselves at the mercy of Big Tech platforms, often with little recourse for removing these images from the web.
"Since 2015, Microsoft has recognized the very real reputational, emotional, and other devastating impacts that arise when intimate imagery of a person is shared online without their consent. However, this challenge has only become more serious and more complex over time, as technology has enabled the creation of increasingly realistic synthetic or 'deepfake' imagery, including video," Courtney Gregoire, Chief Digital Safety Officer at Microsoft, stated in a blog post.
To protect victims of revenge porn and deepfake exploitation, Microsoft has now partnered with StopNCII (Stop Non-Consensual Intimate Images), an organisation that offers a digital solution for those targeted by revenge porn.
The initiative allows victims to create a unique digital fingerprint, known as a "hash," for each explicit image.
StopNCII's database then stores these hashes, which are used by StopNCII's partners, including Facebook, Instagram, TikTok, Snapchat, Reddit, Threads, Pornhub, and OnlyFans, to identify and remove the content from their platforms.
The new collaboration between Microsoft and StopNCII builds upon Microsoft's existing PhotoDNA technology, which StopNCII integrated back in March.
PhotoDNA further strengthens the process by generating additional digital fingerprints from identified harmful images, enabling easier detection and removal on partner platforms.
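The hash-and-match workflow described above can be sketched in a few lines. This is a toy illustration only: StopNCII actually computes perceptual hashes (such as Microsoft's PhotoDNA) on the victim's own device, which also match resized or re-encoded copies of an image, and the image itself never leaves the device. Here a cryptographic hash stands in for the real perceptual algorithm, and all class and function names are illustrative, not StopNCII's API.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Illustrative stand-in: a cryptographic hash of the raw bytes.
    # The real system uses perceptual hashing (e.g. PhotoDNA), which
    # tolerates resizing and re-encoding; SHA-256 only matches exact bytes.
    return hashlib.sha256(image_bytes).hexdigest()

class HashDatabase:
    """Toy model of the shared hash list that partner platforms check against."""

    def __init__(self):
        self._hashes = set()

    def register(self, image_bytes: bytes) -> str:
        # The victim hashes the image locally; only the hash is uploaded,
        # never the image itself.
        h = fingerprint(image_bytes)
        self._hashes.add(h)
        return h

    def is_flagged(self, image_bytes: bytes) -> bool:
        # A partner platform hashes uploaded content and checks the list.
        return fingerprint(image_bytes) in self._hashes

db = HashDatabase()
db.register(b"victim's intimate image bytes")
print(db.is_flagged(b"victim's intimate image bytes"))  # True: hash matches
print(db.is_flagged(b"unrelated image bytes"))          # False: no match
```

The key privacy property this models is that the central database holds only fingerprints, so neither StopNCII nor the partner platforms ever need to receive or store the intimate image itself.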
Microsoft says it has already taken action on 268,000 explicit images that were flagged through a pilot programme with StopNCII, which ran through the end of August – a figure that illustrates the scale proactive image scrubbing can reach compared with user reports alone.
Microsoft previously offered a user reporting tool for victims to flag explicit images, but the company acknowledged that this method alone was not enough to deal with the sheer volume of content.
"We will continue to remove content reported directly to us on a global basis, as well as where violative content is flagged to us by NGOs and other partners. In search, we also continue to take a range of measures to demote low quality content and to elevate authoritative sources, while considering how we can further evolve our approach in response to expert and external feedback," Gregoire noted.
For deepfakes and other non-hashed content, Microsoft encourages victims to directly report the images on the company's Report a Concern page.
Users can also report such content to Google and other online platforms that offer similar removal mechanisms. While Google hasn't formally joined the StopNCII initiative, it does provide separate guidelines and methods for users to request the removal of unwanted intimate images from its search results.