New AI Approach in Retinal Diagnostics Targets Inclusivity for Patients with Sight-Stealing Diseases

Diabetic retinopathy is the leading cause of preventable blindness in working-age adults. More than 2 in 5 Americans with diabetes have some stage of diabetic retinopathy, yet there are usually no symptoms in its early stages, when careful monitoring, and sometimes treatments, may be critical. While ophthalmologists can readily diagnose the condition in the clinic by examining the retina, the sheer number of people with diabetes — more than 34 million in the U.S. and rising — makes periodic screening of everyone with diabetes by ophthalmologists prohibitive.

Enter artificial intelligence (AI). At its simplest, AI employs algorithms to allow computers to mimic human intelligence — for example, pattern recognition as used by ophthalmologists to recognize whether a patient has diabetic retinopathy. As AI technology has advanced, ever more elaborate programs have allowed corresponding advances in medical diagnostics. In recent years, AI has gained traction as a means of screening for everything from suspicious moles to fractures. In 2017, Neil Bressler, the James P. Gills Professor of Ophthalmology at Wilmer Eye Institute, Johns Hopkins Medicine, published a study in collaboration with AI scientists from the Johns Hopkins University Applied Physics Laboratory demonstrating the ability of a form of AI known as deep learning to accurately detect another sight-stealing disease: age-related macular degeneration.

As Bressler explains, deep learning is based on artificial neural networks inspired by the way the human brain works. “You give the computer some data on one end, and an answer on the other, and you program the computer to go through millions of iterations until it creates a way of looking at those data and coming up with the right answer.” In the case of retinal diagnostics, those data take the form of tens of thousands of images that have been evaluated by an expert and labeled, in this case, as various stages of “macular degeneration” or “not macular degeneration.”
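The training loop Bressler describes, feed in data, compare the computer's answer to the expert's label, and adjust, can be sketched in miniature. The code below is not the study's model (which used deep neural networks on tens of thousands of retinal images); it trains a single logistic unit by gradient descent on invented two-number "image features," purely to illustrate the iterate-and-correct idea:

```python
import math

def train_classifier(examples, labels, epochs=2000, lr=0.5):
    """Fit a single logistic unit by gradient descent.

    `examples` are feature vectors (toy stand-ins for image features);
    `labels` are 1 ("disease") or 0 ("not disease").
    """
    n = len(examples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # the computer's current answer
            err = p - y                      # how wrong it was vs. the label
            # Nudge every weight to make the next answer a little less wrong.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy labeled data: two synthetic "image features" per eye.
train_x = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
train_y = [1, 1, 0, 0]   # 1 = "macular degeneration", 0 = "not"

w, b = train_classifier(train_x, train_y)
```

A real deep network stacks many such units in layers, but the core mechanic, millions of small corrections driven by labeled examples, is the same.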

With this method, an optometrist, internist — or even, theoretically, the local pharmacist — could take an image of a patient’s eye with a specialized camera, and if the computer identifies the image as “macular degeneration” or “diabetic retinopathy,” the patient could then be referred to an ophthalmologist for evaluation and treatment if indicated. The problem, says Bressler, is that for many less-common retinal diseases, tens of thousands of images don’t exist. And even for retinal diseases for which there are numerous images, there may be an insufficient number of representative images of certain demographic groups.

For their 2017 study, for example, Bressler and his team relied on a large cache of images from the National Eye Institute’s Age-Related Eye Disease Study (AREDS), which began in the 1990s as a means of assessing age-related macular degeneration and cataract in people ages 55–80. But what about people who are older than that? What if their eyes have different characteristics on imaging — characteristics that could cause an otherwise trained computer to miss a diagnosis?

Retinal images can vary with a patient’s age, sex or race/ethnicity, and underrepresentation of any such group in the training data can bias a model against making accurate diagnoses for that group. While efforts are underway to ensure adequate representation of demographic groups in research studies, it can be challenging to obtain sufficient representative data on which to train a computer. For example, age-related macular degeneration is less common in people of African descent, who were underrepresented in the AREDS study. But people of African descent still develop the disease and can benefit from screening.
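A toy example shows how skewed training data produces this kind of bias. All numbers below are invented: a classifier learns a decision threshold from one demographic group ("A"), then misses disease in a second group ("B") whose images yield a weaker signal on the same feature:

```python
def midpoint_threshold(disease, healthy):
    """Learn a 1-D decision threshold as the midpoint between class means."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(disease) + mean(healthy)) / 2

# Training images come almost entirely from group "A", where disease
# produces a strong signal in this synthetic feature.
a_disease = [0.65, 0.70, 0.60, 0.68]
a_healthy = [0.35, 0.30, 0.40, 0.32]
threshold = midpoint_threshold(a_disease, a_healthy)   # lands near 0.5

def screen(feature, threshold):
    return "refer" if feature >= threshold else "no disease detected"

# Group "B" eyes image differently, so the same disease yields a weaker
# signal -- and the majority-trained threshold misses it entirely.
b_disease_feature = 0.45
result = screen(b_disease_feature, threshold)   # a false negative
```

A model trained this way can post high overall accuracy while systematically failing the underrepresented group, which is exactly the hazard described above.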

To address the problem, in 2019, Bressler joined forces with colleagues from the Johns Hopkins University Applied Physics Lab and the Malone Center for Engineering in Healthcare to investigate a modified deep-learning approach that uses far fewer training images. This “low-shot” learning, introduced by Google DeepMind in 2016, uses algorithms designed to generalize from patterns, allowing a computer to make predictions from relatively few labeled images. To determine whether low-shot learning could overcome the problem of AI bias in retinal diagnostics and rare ophthalmic diseases, Bressler’s team decided to pose a simple question: Is it diabetic retinopathy, or is it not diabetic retinopathy?
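The study's actual low-shot architecture is not detailed here, but the simplest illustration of the family is a nearest-prototype classifier: average the handful of labeled examples per class into a "prototype," then assign a new image to whichever prototype is closest. The feature vectors below are invented; in practice they would come from a pretrained network:

```python
import math

def prototype(examples):
    """Mean feature vector ("prototype") of a handful of labeled examples."""
    n = len(examples)
    return [sum(x[i] for x in examples) / n for i in range(len(examples[0]))]

def classify(x, prototypes):
    """Assign x to the label whose prototype is nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(prototypes, key=lambda label: dist(x, prototypes[label]))

# Only five labeled examples per class -- the "low-shot" regime.
shots = {
    "diabetic retinopathy":
        [[0.80, 0.70], [0.90, 0.60], [0.70, 0.80], [0.85, 0.75], [0.75, 0.65]],
    "not diabetic retinopathy":
        [[0.20, 0.30], [0.10, 0.20], [0.30, 0.10], [0.15, 0.25], [0.25, 0.35]],
}
prototypes = {label: prototype(xs) for label, xs in shots.items()}

answer = classify([0.82, 0.71], prototypes)   # nearest prototype wins
```

Because only class averages are learned, a few images per category suffice, which is the property that makes low-shot methods attractive for rare diseases and underrepresented groups.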

Bressler and his team spent the next year evaluating algorithms and comparing methods before ultimately determining that low-shot deep learning does, in fact, have the potential to overcome limitations imposed by a low number of training images in retinal diagnostics. Results of their study were published in 2020 in JAMA Ophthalmology.

Bressler, who next plans to test the technique in a clinical setting with Wilmer colleagues, including T. Y. Alvin Liu, says it may also be valuable in evaluating whether small numbers of patients are likely to benefit from a treatment. “Maybe a clinical trial tested hundreds of people to show that a certain medication works, but the outcome was especially good in this group of people with good vision. Right now, I know based on your vision how you’re going to do, on average, and that you’re more likely to end up with good vision if your starting vision is really good. But there are a few people whose vision starts good who don’t do well. Can I identify those individuals by training a computer based on low-shot? And then as I bring in new people, I could say, ‘Your vision’s good, you should do well,’ but by training the computer on low-shot, maybe I can get even more information for them, or come up with ways to increase the number of people who will end up with good vision.”

Low-shot could also inform treatment for those whose vision is not so good in such a scenario. Based on the current science, they’re not likely to end up with as good vision, on average, with current treatments, yet Bressler says that some of them do. “It’s a minority who end up with excellent vision, but what if I could take that minority who had bad vision when I entered them into the trial, and now that I know what their outcome was, take those known images and train the computer with low-shot? I could then bring in people who have not-so-great vision and say ‘Well, you’re not likely to get better, but I just did this analysis and it seems to suggest you have a better chance than others, so I think we want to be really aggressive in getting you this treatment.’ I think that’s where we’re headed,” says Bressler. “It may even help us identify how to target individuals more efficiently for future clinical trials.”
