Using bug bounties to spot bias in AI algorithms

Bug bounties, which pay ethical hackers to find software flaws, are common in the infosec space. Can the same approach be applied to AI?

As AI systems grow more sophisticated and widespread, concerns over algorithmic bias, which can result in discrimination against specific groups, have mounted in recent years.

Last year, leading AI researchers proposed an idea for making AI more reliable: offering monetary rewards to users in exchange for spotting bias.

The concept was inspired by bug bounty programmes in the information security field, which encourage software developers and ethical hackers to find security vulnerabilities in software.

Deborah Raji, a research fellow in algorithmic harms at the Mozilla Foundation, is now carrying out a study with the advocacy group Algorithmic Justice League (AJL) to determine whether the same bug bounty models can be applied to algorithmic harm detection and, if so, what challenges researchers would face in creating such programmes.

"When you release software, and there is some kind of vulnerability that makes the software liable to hacking, the information security community has developed a bunch of different tools that they can use to hunt for these bugs," Raji told ZDNet.

"Those are concerns that we can see parallels to with respect to bias issues in algorithms."

In 2019, Noel Sharkey, an expert in the field of artificial intelligence (AI), urged the US government to ban the use of all decision algorithms that impact people's lives. Sharkey said that AI decision-making machines should be tested in the same way new pharmaceutical drugs are before they are allowed onto the market.

All leading tech firms, such as Microsoft and Google, are aware of and working on the bias problem, said Sharkey, but none has so far come up with a solution.

There are many challenges in uncovering algorithmic harms; for example, defining the rules and standards that researchers would need to adhere to while hunting for bias.

Another challenge could be a conflict of interest between AI researchers and tech firms. Spotting bias in an algorithm could force a company to redesign the software development process behind a product, or even to withdraw it from the market entirely, and companies are sure to push back against such drastic measures.

Raji recalled the response from Amazon representatives when their Rekognition facial recognition software was audited in a study that concluded the system exhibited racial and gender bias.

"It was a huge battle, they were incredibly hostile and defensive in their response," she said.

In other cases, the group or population affected by algorithmic bias is not the paying customer, meaning that tech firms have little financial incentive to fix their software.

Raji believes that government regulations or extreme public pressure are the only ways to force software firms to launch bias bounty programmes.