Nvidia was warned more than three years ago of AI threats to minorities
The chip maker's tech is crucial to AI development
Two former employees say CEO Jensen Huang dismissed their concerns about the safety of AI being developed for facial recognition
By almost every measure, Nvidia has had a good year. The value of shares in the chip manufacturer has soared along with demand for its graphics-processing units (GPUs), which are vital for training AI models. There are only two clouds on the horizon. The first is the potential impact on its revenues of the ban on exporting advanced chips to China. The second is how CEO Jensen Huang has responded to warnings about the danger his company's technology poses to racial minorities.
Two former Nvidia employees, Masheika Allgood and Alexander Tsado, told Bloomberg that they had met with Huang in 2020 to raise their concerns about the safety of the facial-recognition technology the company was developing for self-driving vehicles. They were so disheartened by Huang's response that they left the company.
Allgood and Tsado both served as presidents of Nvidia's Black employees' group and had spent a year collating colleagues' concerns about AI bias.
Tsado, a former product-marketing manager at Nvidia, said he wanted to convey the importance of the issue. Whilst the unintended consequences of AI would ultimately be felt by everyone, they would be felt by already marginalised communities first. Indeed, they already had been. Tsado told Bloomberg:
"I am a member of the underserved communities, and so there's nothing more important to me than this. We're building these tools and I'm looking at them and I'm thinking, this is not going to work for me because I'm Black.''
Allgood laid all of this out in a LinkedIn post in June 2020, prompted by what she saw as Nvidia's hypocrisy in promoting itself as an inclusive company during PRIDE 2020.
In the post, and more recently to Bloomberg, Allgood said that during the meeting Huang made light of her and Tsado's concerns, replying that self-driving vehicles could be tested on highways, where they would pose less of a threat to pedestrians. She was so distressed by the meeting that she left the company shortly afterwards.
In-built bias
Numerous studies have concluded that many AI algorithms produce racist outcomes. It is widely understood that the biases inherent in the huge datasets AI models are trained on are the source of the problem, although there is far less agreement on exactly what to do about it. Most chip companies, and tech companies more broadly, have publicly stated that technical teams should be diverse if they are to build technology that works for everyone, but in most cases the reality remains distinctly homogeneous.
Nvidia's workforce is less diverse than those of some of its competitors. In 2020, at the time of the meeting, fewer than 1% of the company's employees were Black. That figure grew to 2.5% in 2021, the most recent year for which data is available, compared with 13% of the US working population. Asians are the largest ethnic group at the company, followed by white employees.
An Nvidia spokesperson told Bloomberg that the chipmaker had done substantial work to make its AI products safe and inclusive since Allgood and Tsado left.
The company has set up an AI & Legal Ethics program and created an open-source platform that helps chatbots filter out unwanted content and stay on message. Nvidia has also become more transparent, releasing "model cards" alongside its AI models and working with internal affinity groups to diversify its datasets and test models for bias before release.