Severe aortic stenosis (AS), an abnormal narrowing of the aortic valve, is a common form of valvular heart disease, affecting five percent of people above the age of sixty-five. Early diagnosis is essential to successful intervention. Usually, AS is detected through Doppler echocardiography, or ultrasound imaging of the heart. However, Doppler echocardiography requires access to specialized equipment as well as professionals trained to operate it and interpret the results. This mismatch between the large population of individuals at risk for AS and the limited resources available for its diagnosis makes early detection of AS difficult, negatively impacting patient outcomes.
Researchers at the Cardiovascular Data Science (CarDS) Lab at Yale recently published a creative new approach in the European Heart Journal to make AS diagnostic tools more accessible: combining deep learning with simple ultrasound scans. Handheld devices that use ultrasound imaging to visualize the heart are much more widely available than the equipment necessary for Doppler echocardiography, but the images and videos these scans produce are difficult to use for diagnosing AS on their own. “Patients are often not seen by a cardiologist until they are very late in their disease stage,” Evangelos Oikonomou, a postdoctoral fellow in the CarDS Lab, said. “There’s a big opportunity to diagnose the disease earlier in this patient population.”
The researchers at the CarDS Lab developed a novel deep learning model capable of using 2D echocardiograms, which are produced by simple ultrasound imaging, to identify AS without specialized Doppler equipment. Deep learning is a kind of machine learning that employs computer networks built to resemble the brain’s networks of neurons; in short, it teaches computers how to learn like humans.
“You train the algorithm by showing it multiple different images and giving feedback to the algorithm as to whether its prediction [about what the image is] is correct or wrong,” Oikonomou said. “What the algorithm does is every time it gets [its prediction] wrong, it tries to adjust its approach and learn something from its errors.” These deep learning algorithms are often more attuned to subtle patterns than humans are, allowing them to reach conclusions that might not be apparent to a doctor interpreting ultrasound images. “That’s where the performance of an AI algorithm may actually exceed that of a human operator,” Oikonomou said.
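The feedback loop Oikonomou describes can be sketched in miniature. The toy model below is a perceptron, one of the simplest learning algorithms and far simpler than the lab’s network; the data points are invented for illustration. Its one rule mirrors his description: when a prediction is wrong, nudge the internal weights; when it is right, change nothing.

```python
# Minimal sketch of "learn from your errors": a perceptron on made-up 2D points.
# This is an illustrative stand-in, not the CarDS Lab's actual model.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights only when a prediction is wrong -- the core feedback loop."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = y - pred          # zero when the guess was correct
            if error != 0:            # a mistake: learn something from it
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
    return w, b

def predict(w, b, point):
    return 1 if (w[0] * point[0] + w[1] * point[1] + b) > 0 else 0

# Toy, linearly separable data: label 1 whenever x1 + x2 > 1.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.8, 0.9), (0.3, 0.3), (1.0, 0.7)]
labels = [0, 1, 0, 1, 0, 1]
w, b = train_perceptron(data, labels)
```

After a few passes over the data, the repeated corrections converge on weights that classify every example correctly; deep networks apply the same adjust-on-error idea across millions of parameters.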
To develop their algorithm, the researchers needed to train it to recognize severe AS. To do this, they sourced a massive set of 2D cardiac ultrasound videos from patients in the Yale New Haven Health system with no AS, non-severe AS, and severe AS. Using this dataset, the algorithm learned to identify features in the videos associated with each class of AS diagnosis. Once the algorithm had learned what to look for, the researchers had to validate that it could truly differentiate no-AS, non-severe AS, and severe AS ultrasound videos. To prove the algorithm’s success, they had it classify new datasets from different patients in New England and California. The deep learning algorithm proved highly accurate across all patient datasets.
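The validation step amounts to freezing the trained model and scoring it on data it has never seen, ideally from entirely different sites. A schematic of that bookkeeping is below; the three class labels come from the article, but the one-number “model” and the example data are purely illustrative.

```python
# Schematic of external validation: a frozen model is scored, unchanged,
# on held-out data from a different site. The toy model is illustrative.

CLASSES = ["no AS", "non-severe AS", "severe AS"]

def toy_model(feature):
    """Stand-in for the trained network: maps one number to a class index."""
    if feature < 0.33:
        return 0
    elif feature < 0.66:
        return 1
    return 2

def accuracy(model, dataset):
    """Fraction of held-out examples the model labels correctly."""
    correct = sum(1 for feature, label in dataset if model(feature) == label)
    return correct / len(dataset)

# Invented "external site" examples the model never saw during training.
external_site = [(0.1, 0), (0.5, 1), (0.9, 2), (0.2, 0), (0.7, 2)]
acc = accuracy(toy_model, external_site)
```

High accuracy on such external cohorts, rather than on the training data itself, is what gives confidence that a model has learned the disease and not the quirks of one hospital’s scanners.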
The researchers’ vision is that any medical provider with a simple ultrasound scanner could use their algorithm to catch AS early. This would remove existing barriers to AS diagnosis, such as the need for specialized Doppler echocardiography equipment and for providers trained to interpret its results, making AS diagnosis more accessible to patients and simpler for providers. If the algorithm is widely adopted, it could be a major step forward for successful AS intervention. “Hopefully, we can make this as cost-efficient as possible,” Oikonomou said. “It’s very easy to do—it takes two or three minutes, and people can probably be screened once in their lifetime.”
Beyond its immediate impact in improving outcomes for AS patients, this deep learning algorithm reveals the broader potential of applying cutting-edge computer science to healthcare. “I think this could be applied to other things such as hypertrophic cardiomyopathy, which is a genetic heart condition that is very common but most people don’t ever get diagnosed,” Oikonomou said.
With increasingly high patient burdens and medical staff stretched thin, it’s inevitable that some patients will slip through the cracks of the healthcare system. Machine and deep learning models could be used across a variety of applications to identify diagnoses that are sometimes missed by medical staff. The CarDS Lab’s algorithm is proof of the great positive impact that computer science and artificial intelligence stand to have on patient care and outcomes.