Much of what's being sold as 'AI' today is snake oil, says Princeton professor

AI companies have raised millions of dollars in funding from investors - but their technology isn't really artificial intelligence

Most of the products or applications being sold today as artificial intelligence (AI) are little more than "snake oil", Arvind Narayanan, an associate professor at Princeton University, has warned.

Today, the term 'snake oil' is shorthand for deceptive marketing. It was originally used to describe useless medical remedies hawked by quacks as a cure-all for a wide range of ailments.

In the 19th century, 'snake oil' was commonly sold across the US as a panacea, although most of those products had no medicinal value and didn't even contain any actual snake oil.

The same deceptive tactics have crept into AI marketing in the 21st century, experts believe.

"Much of what's being sold as "AI" today is snake oil. It does not and cannot work," says Narayanan.

In a lecture at MIT earlier this week, Narayanan explained why this deceptive marketing of AI is happening and how to recognise flawed AI claims.

According to Narayanan, many companies have been using what they claim are AI-based assessment systems to screen job applicants. The majority of these tools claim to work not by analysing what a candidate said or wrote in their CV, but by assessing their speech patterns and body language.

"Common sense tells you this isn't possible, and AI experts would agree," said Narayanan.

Many companies claiming to offer artificial intelligence have raised millions of dollars in funding from investors and are aggressively pursuing clients, he continued.

But a recent study by London-based venture capital firm MMC found that a large number of European start-ups advertising themselves as 'AI companies' don't actually use any AI in their applications at all. And when the technology is deployed, it is often done in highly predictable ways.

Many people also conflate chatbots, fraud detection and business process automation systems with AI, adding to the confusion.

According to Narayanan, content identification, facial recognition, medical diagnosis from scans, speech-to-text and deepfakes are some of the areas where AI has made genuine and rapid technological progress.

In addition, spam detection, detection of copyrighted material, automated essay grading, hate-speech detection and content recommendation are areas where AI is far from perfect, but improving.

The areas where AI use will remain fundamentally dubious include predicting criminal recidivism, forecasting job performance, predictive policing and predicting terrorist risk, said Narayanan.