AI doesn't care what you think
Want to understand hallucinations? Look at your family
There seem to be two key moments when I really understand something: the first is when I build 'that' something, and the second is when I teach the subject. So I sometimes face a stinging question from a student that makes me think through my understanding and generally arrive at a greater clarity.
Most recently, I was asked how an AI could hallucinate, invent facts, and deliver erroneous results; after all, these are descriptors of human traits that require a priori experience. There is no stock answer to this question, so I started with a stark statement: "The single biggest error of the AI community is the focus on human abilities as their reference datum".
In my view it is an act of pure arrogance, and it completely misses the point: there are many forms of biological intelligence, and some are remarkably crude!
"Who is the smartest; a naked human swimmer mid-Atlantic with a shark? Intelligence is ‘speciated' and defined by environmental conditions"
Mother Nature did not start life on Earth by first creating incredibly complex organisms, such as humans and higher-order animals, and then working backwards towards molecular life; quite the reverse! One hypothesis holds that relatively crude molecular strings of RNA produced short-lived (minutes to days) protein chains and elemental cellular life, such as bacteria and fungi, that exhibited a basic intelligence. Later, the evolutionary process saw DNA emerge to deliver more sophistication and longevity.
All life on Earth is built of the same DNA and RNA chemical constructs and exhibits some level of intelligence. In physical terms, we might say this follows a chain of emergent properties (where A => B reads 'A emerges from B') stemming from our chaotic universe of stardust:
Complexity => Clustering in chaos
Life => Complexity
Intelligence => Life
Sentience => Intelligence
We also know that Mother Nature and evolution never optimise anything; only humans do that! Nature goes for 'near enough is good enough' on the basis of survival of the most adaptable.
So, biological brains always have degrees of redundancy and never rest. During sleep and rest periods, biology appears to effectively 'defrag', re-ordering memories and information whilst progressively reducing the detail. In humans, this sees a migration from colour to greyscale and a gradual degrading of accuracy (Hebbian decay). The evidence comes from recall and perception experiments along with MRI scans.
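If we treat Hebbian decay as a toy model, the idea is easy to see in a few lines of Python. What follows is a minimal sketch, not a biological simulation: the exponential form, the decay rate and the rehearsal boost are all illustrative assumptions.

```python
import math

# Toy model of Hebbian decay: an unrehearsed memory trace weakens
# exponentially, while an occasional recall partially restores it.
# DECAY_RATE and REHEARSAL_BOOST are illustrative assumptions, not
# measured biological constants.

DECAY_RATE = 0.15      # decay per day (assumed)
REHEARSAL_BOOST = 0.4  # fraction of lost strength restored by a recall (assumed)

def decay(strength: float, days: float, rate: float = DECAY_RATE) -> float:
    """Strength of the trace after `days` days with no rehearsal."""
    return strength * math.exp(-rate * days)

def rehearse(strength: float, boost: float = REHEARSAL_BOOST) -> float:
    """Recalling the memory restores part of the lost detail."""
    return strength + boost * (1.0 - strength)

if __name__ == "__main__":
    s = 1.0  # a fresh, full-detail memory
    for day in range(1, 8):
        s = decay(s, 1.0)    # one day of forgetting
        if day == 4:
            s = rehearse(s)  # a single recall on day 4
        print(f"day {day}: trace strength {s:.2f}")
```

The point of the sketch is the shape, not the numbers: detail drains away steadily, and recall restores some, but never all, of it.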
The recent realisation of massive AI systems (ChatGPT et al.) sees billions or trillions of parameters that are far from any computational optimum. This apparent extravagance has been made possible by advances in IC transistor density. Until then, engineers had tried to optimise AI hardware and software for computational efficiency, which effectively kills any prospect of creativity!
But the new generation of AI is creative by virtue of its redundancy and lack of optimisation. Furthermore, it too never sleeps; in effect, it has developed a "mind" of its own, ticking away in the background on the residue of past data and operations.
So here come dreams, hallucinations and invented data, including erroneous results produced with no grounding in basic knowledge. Humans, horses, dogs and cats are all members of a biological class that do this, and it appears that our latest AI configurations have reached a point of similar independent action and creativity.
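One way to make the mechanism concrete: a generative model does not look facts up, it samples each next token from a probability distribution, so a fluent wrong answer is always on offer. The sketch below is a toy illustration in Python; the four-word vocabulary, the scores and the temperatures are invented for the example, whereas a real LLM scores tens of thousands of tokens with learned weights.

```python
import math
import random

random.seed(42)

def sample_next(logits: dict, temperature: float) -> str:
    """Sample one token from softmax(logits / temperature)."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0.0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0.0:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Hypothetical model scores for completing "The capital of Australia is ...".
# The values are invented purely for illustration.
logits = {"Canberra": 2.0, "Sydney": 1.6, "Melbourne": 1.2, "Auckland": 0.3}

for temp in (0.2, 1.0, 2.0):
    picks = [sample_next(logits, temp) for _ in range(1000)]
    correct = picks.count("Canberra") / len(picks)
    print(f"temperature {temp}: correct answer {correct:.0%} of the time")
```

At a low temperature the model almost always picks its best-scoring answer; raise the temperature and the plausible-but-wrong completions start to surface, which is the statistical shadow of a hallucination.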
So there are now only two big questions: what can these machines do better than us, and are the AI dreams and hallucinations useful or useless? As for the critics who say that we don't understand how AI works and cannot test it, they need to take a look at their colleagues, friends and family!
Clearly, AI doesn't care what we think.
Peter Cochrane OBE, DSc, University of Hertfordshire