DeepMind's neural networks can analyse eye scans for disease in seconds
Trained on curated data from Moorfields Eye Hospital, the neural network also shows clinicians how it reached its judgement
Google DeepMind has built an AI system that is as good as the best human experts at detecting eye problems, while bypassing the ‘black box’ of an unclear decision-making process.
The system was put together in a partnership with Moorfields Eye Hospital.
Eyecare professionals today use optical coherence tomography (OCT) scans to help diagnose patients. These produce 3D images that are difficult to interpret without training. Because the scans take so long to review, there can be significant delays between the scan and treatment.
The DeepMind system reads these scans much more quickly. The firm says that it can detect ‘the features of eye diseases’ in seconds, as well as prioritising patients who need urgent care.
The challenge of using such a system in healthcare is that existing artificial intelligence techniques give no insight into their decision-making: you put the data in at one end and the answer comes out of the other. This is referred to as the AI ‘black box’, and it is a significant barrier to clinical use of such systems.
DeepMind has combined two different neural networks to get around the issue. The segmentation network analyses the OCT scan to create a map of the eye and any damage, which professionals can use to see what the system is ‘thinking’. Meanwhile, the classification network analyses the map to present diagnoses and referral recommendations.
Importantly, the system delivers each recommendation with a percentage score, which clinicians can use to judge the AI's confidence in its suggestions.
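To make the two-stage design concrete, here is a minimal, entirely hypothetical sketch of the data flow described above. Neither stage is DeepMind's actual network; each is stubbed with trivial logic so that the pipeline (scan → inspectable tissue map → diagnosis with a confidence percentage) is visible. All names, thresholds, and labels are illustrative assumptions.

```python
def segment(scan):
    """Stage 1 stand-in for the segmentation network: label each
    voxel of the scan so clinicians can inspect the intermediate map."""
    return [["fluid" if v > 0.5 else "healthy" for v in row] for row in scan]

def classify(tissue_map):
    """Stage 2 stand-in for the classification network: turn the map
    into a confidence percentage and a referral recommendation."""
    cells = [label for row in tissue_map for label in row]
    fluid_fraction = cells.count("fluid") / len(cells)
    confidence = round(fluid_fraction * 100)  # reported as a percentage
    referral = "urgent" if confidence >= 50 else "routine"
    return {"suspected_pathology_pct": confidence, "referral": referral}

scan = [[0.9, 0.2], [0.7, 0.1]]   # toy 2x2 stand-in for OCT intensities
tissue_map = segment(scan)        # the inspectable intermediate output
print(classify(tissue_map))       # prints {'suspected_pathology_pct': 50, 'referral': 'urgent'}
```

The point of splitting the work this way is that the intermediate tissue map is human-readable, so the overall system is no longer a single opaque box.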
The system must still pass strict clinical trials and gain regulatory approval before it can be used in practice. If it does, though, it could be used in eyecare worldwide on many different types of eye scanner, not only the one it was trained on at Moorfields.