Every week, Moorfields Eye Hospital in London performs 3,000 optical coherence tomography scans to diagnose vision problems. The scans, which use scattered light to build high-resolution 3-D images of the retina, produce large quantities of data, and analyzing it is slow. Interpreting the images requires trained, experienced human eyes to spot the problems specific to each case, leaving little or no time to look for broader, population-wide trends that could make early detection easier.
That’s just the kind of task artificial intelligence is well suited to tackle. So it’s perhaps not surprising that Google’s AI wing, DeepMind, has decided to partner with the hospital to apply machine learning to the problem as part of its health program. Under the arrangement, DeepMind’s software will study more than a million eye scans, both optical coherence tomography images and more conventional photographs of the retina, in order to establish what happens in the eye during the early stages of disease.