Ongoing Projects (as of May 2025)
We are developing an AI model to detect thyroid eye disease (TED) from facial images using federated learning, a cutting-edge approach that addresses key data-privacy challenges when working with sensitive patient data.
In collaboration with Vanderbilt and Stanford, we are implementing federated learning to train our model across multiple institutions without sharing patient data. This method enhances model robustness and generalizability by incorporating diverse, real-world data while preserving patient privacy.
By enabling earlier and more accurate TED detection, our work aims to facilitate timely referrals to oculoplastics, preventing complications and irreversible ocular damage.
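Conceptually, the server-side step of federated learning is a weighted average of model updates trained locally at each institution (the FedAvg scheme), so raw patient images never leave a site. The sketch below illustrates that aggregation step only; the function name and the toy per-site "models" are illustrative, not our actual training code.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average each layer's parameters across
    institutions, weighted by how many samples each site trained on.

    client_weights: one list of np.ndarray layers per institution
    client_sizes:   number of local training samples per institution
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_avg = sum(
            (size / total) * weights[layer]
            for weights, size in zip(client_weights, client_sizes)
        )
        averaged.append(layer_avg)
    return averaged

# Hypothetical round: three sites each contribute a tiny "model"
site_a = [np.array([1.0, 2.0])]
site_b = [np.array([3.0, 4.0])]
site_c = [np.array([5.0, 6.0])]
global_model = federated_average([site_a, site_b, site_c], [100, 100, 200])
# site_c trained on twice the data, so it contributes half the average
```

In practice each round also includes broadcasting the averaged model back to the sites and repeating; only these parameter vectors, never images, cross institutional boundaries.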
Portable optical coherence tomography (pOCT) significantly reduces cost and increases accessibility compared to commercial OCT, but at the expense of image resolution. To compensate, we are developing generative AI models for super-resolution. Specifically, we use IMFusion software to register and denoise the scans (preprocessing), and we implemented OCTDiff, a bridged diffusion model that performs image-to-image translation from low-resolution to high-resolution images.
Downstream disease classification results using OCTDiff-generated images further demonstrate the clinical utility of our AI-powered tool for portable OCT. Once the model is fully developed, we will integrate this AI package into physical devices to improve care in clinics.
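The idea behind a bridged diffusion model is that the forward process interpolates between the high-resolution target and the low-resolution input (rather than pure noise), so sampling can start directly from the pOCT image. OCTDiff's exact formulation is not reproduced here; the sketch below is a generic Brownian-bridge-style forward process under assumed notation, with endpoint-vanishing noise.

```python
import numpy as np

def bridge_forward(x_hr, x_lr, t, T, sigma=0.1, seed=None):
    """Illustrative bridged-diffusion forward step: blend from the
    high-res target (t=0) toward the low-res input (t=T), adding
    noise whose variance vanishes at both endpoints.
    """
    rng = np.random.default_rng(seed)
    m = t / T                              # mixing coefficient in [0, 1]
    var = sigma**2 * 2.0 * m * (1.0 - m)   # bridge variance, 0 at t=0 and t=T
    noise = np.sqrt(var) * rng.standard_normal(np.shape(x_hr))
    return (1.0 - m) * x_hr + m * x_lr + noise
```

Because the variance is zero at both endpoints, the chain is anchored exactly at the high-res image at t=0 and the low-res image at t=T; a network trained to reverse this process then maps low-res scans back toward high-res ones.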
Glaucoma is a leading cause of irreversible blindness worldwide, with early diagnosis being critical to prevent significant vision loss. Even experienced ophthalmologists may face challenges with diagnosis due to the variability in clinical presentation and the need for comprehensive testing.
Our model, GlaucomaNet50, was designed to detect glaucoma from retinal images and has demonstrated exceptional performance on retrospective datasets, achieving test accuracies averaging around 99%. This pilot study aims to prospectively test our AI model in the Columbia Ophthalmology clinic without directly impacting patient care, generating performance metrics that better reflect real-world variability.
In addition to assessing the performance of the AI model, this study will survey physicians on their perceptions of the AI model, its usability, and its potential impact on clinical workflows. This feedback will be instrumental in refining the model and ensuring its deployment in clinical practice is both effective and aligned with the needs of healthcare providers.
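Prospective evaluation of this kind reduces to comparing the model's predictions against clinician-confirmed diagnoses and reporting standard screening metrics. The helper below is a minimal illustration of that bookkeeping, not the study's actual analysis pipeline; names and labels are assumptions.

```python
def prospective_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from paired labels
    (1 = glaucoma, 0 = no glaucoma). y_true holds the clinician-
    confirmed diagnoses, y_pred the model's predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy example with four prospective cases
metrics = prospective_metrics([1, 1, 0, 0], [1, 0, 0, 1])
```

Reporting sensitivity and specificity separately matters here: a retrospective accuracy near 99% can mask very different error profiles once the model faces real-world clinic variability.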
This project explores the use of machine learning and eye-tracking technology for early detection of amblyopia (lazy eye) in children aged 4-7. Amblyopia is a common visual disorder that can lead to long-term vision impairment if not treated early, yet traditional diagnosis often requires specialist visits that may not be accessible to all patients. By analyzing subtle eye movement patterns, such as saccade latency, amplitude, and corrective saccades, this study aims to develop predictive models to distinguish between children with and without amblyopia.
Patient eye-tracking data was collected and processed to extract key movement features, which were then used to train and evaluate multiple machine learning algorithms. The results demonstrate that certain eye movement biomarkers can effectively predict amblyopia risk with high accuracy, highlighting the potential of a non-invasive, automated screening tool. This approach could improve early diagnosis and intervention, making amblyopia detection more accessible and scalable in clinical and research settings.
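As a rough illustration of the feature-extraction step, saccades can be detected from gaze traces with a simple velocity threshold, after which latency (time from stimulus onset to saccade onset) and amplitude fall out directly. This is a minimal sketch under assumed signal conventions, not the study's actual processing code.

```python
import numpy as np

def saccade_features(t, x, target_onset, vel_thresh=30.0):
    """Extract simple saccade biomarkers from a 1-D gaze trace.

    t: timestamps (s); x: horizontal gaze position (deg);
    target_onset: time the stimulus appeared (s);
    vel_thresh: velocity threshold (deg/s) marking a saccade.
    Returns latency (s) and amplitude (deg) of the first saccade
    after stimulus onset, or None if no saccade is detected.
    """
    v = np.gradient(x, t)                    # instantaneous velocity
    moving = np.abs(v) > vel_thresh
    candidates = np.flatnonzero((t >= target_onset) & moving)
    if candidates.size == 0:
        return None
    onset = candidates[0]
    # the saccade ends when velocity drops back below threshold
    below = np.flatnonzero(~moving[onset:])
    end = onset + (below[0] if below.size else len(t) - 1 - onset)
    return {
        "latency": t[onset] - target_onset,
        "amplitude": abs(x[end] - x[onset]),
    }

# Synthetic trace: stimulus at 0.1 s, a 10-degree saccade starting at 0.3 s
t = np.arange(0.0, 1.0, 0.01)
x = np.clip((t - 0.3) / 0.05, 0.0, 1.0) * 10.0
feats = saccade_features(t, x, target_onset=0.1)
```

Features like these (latency, amplitude, and counts of corrective saccades) form the input vectors on which the classification models are trained.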
At birth, the vitreous is firmly attached to the retina. However, with aging, the vitreous becomes more liquefied, leading to structural changes within the eye. Over time, this can result in posterior vitreous detachment (PVD), where the vitreous separates from the retina. In some cases, incomplete separation leads to vitreomacular traction (VMT), where the remaining attached vitreous pulls and distorts the macula. If VMT progresses, it can cause a macular hole, a condition that is diagnosed using optical coherence tomography (OCT).
To better understand and predict these conditions, deep learning can be applied to analyze reconstructed 3D macular OCT volumes. One key focus is measuring the area of vitreous traction on the macula, quantifying the extent of attachment between the vitreous and the macula. Additionally, analyzing the angles between the retina and the vitreous in 3D may provide better prognostic insights, as current 2D OCT slice measurements have yielded inconclusive findings. Another critical factor is the volume of subretinal fluid, which forms due to vitreous traction and can impact disease progression.
Deep learning models can enhance the accuracy and efficiency of these measurements by automatically segmenting key structures and detecting subtle changes over time. This approach has the potential to improve early detection, prognosis, and clinical decision-making for conditions related to vitreomacular interface disorders.
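Once a model has segmented the relevant structures, the quantitative measurements described above reduce to counting voxels or pixels and scaling by the scanner's spacing. The sketch below shows that final measurement step under assumed mask and spacing conventions; it is illustrative, not our segmentation pipeline.

```python
import numpy as np

def fluid_volume_mm3(mask, spacing):
    """Subretinal-fluid volume from a binary 3-D segmentation mask.

    mask:    boolean array (slices, rows, cols), True where fluid
             was segmented by the model
    spacing: voxel size (dz, dy, dx) in mm
    """
    voxel_volume = float(np.prod(spacing))
    return mask.sum() * voxel_volume

def traction_area_mm2(attachment_mask, spacing_yx):
    """En-face area of vitreomacular attachment from a 2-D mask
    projected onto the macular surface (pixel spacing in mm)."""
    pixel_area = spacing_yx[0] * spacing_yx[1]
    return attachment_mask.sum() * pixel_area

# Toy masks: 1,000 fluid voxels and a 20x20-pixel attachment patch
volume = fluid_volume_mm3(np.ones((10, 10, 10), bool), (0.1, 0.05, 0.05))
area = traction_area_mm2(np.ones((20, 20), bool), (0.05, 0.05))
```

Tracking these scalar measurements across visits is what lets the model flag subtle progression of traction or fluid over time.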
When your eye is scanned at the ophthalmologist's office, optical coherence tomography (OCT) captures multiple layers of the retina and reconstructs them into a 3D OCT volume. This volume is then processed and formatted into a 2D OCT report, which ophthalmologists rely on for glaucoma diagnosis. The 3D OCT volume itself is typically not used, as it is time-consuming and challenging to interpret without the comparisons to normal populations that the 2D report provides.
However, these volumes contain rich structural information that can be leveraged for glaucoma diagnosis. Deep learning offers a powerful approach to processing these volumes and extracting meaningful insights to assist ophthalmologists. We propose incorporating attention mechanisms to capture long-range dependencies within these volumes, embedding them within a 3D convolutional neural network (CNN). Additionally, we developed a 3D attention visualization mechanism to analyze what the model attends to when making a diagnosis, complementing Grad-CAM applied to the convolutional layers.
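The core idea of embedding attention in a 3D CNN can be sketched as scaled dot-product self-attention over the flattened voxel locations of an intermediate feature volume: every location attends to every other, capturing long-range dependencies a convolution alone would miss, and the attention map itself is what a 3D visualization can display. This minimal single-head numpy sketch is illustrative; our actual architecture and weights are not shown here.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def volume_self_attention(feat, Wq, Wk, Wv):
    """Single-head self-attention over a 3-D feature volume.

    feat: (D, H, W, C) features from an intermediate 3D CNN stage.
    Flattens the volume into D*H*W tokens so every location can
    attend to every other, then reshapes the result back to 3-D.
    The (N, N) attention map is returned for visualization.
    """
    d, h, w, c = feat.shape
    tokens = feat.reshape(-1, c)                     # (N, C), N = D*H*W
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv  # projections
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))   # rows sum to 1
    out = (attn @ v).reshape(d, h, w, -1)
    return out, attn

# Toy volume: 2x2x2 locations with 4 channels, random projections
rng = np.random.default_rng(0)
feat = rng.standard_normal((2, 2, 2, 4))
Wq, Wk, Wv = (rng.standard_normal((4, 4)) for _ in range(3))
out, attn = volume_self_attention(feat, Wq, Wk, Wv)
```

Reshaping a row of `attn` back to (D, H, W) gives a per-voxel map of where the model looked for a given location, which is the quantity a 3D attention visualization renders alongside Grad-CAM.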