Research

Project #1:

Interpretable CNNs for Pneumonia Detection using Captum Library 

Explainable AI (XAI) is a subset of artificial intelligence (AI) focused on creating models and systems that can provide human-understandable explanations for their decisions and actions. Explainable AI models are gaining importance due to the growing emphasis on human-centered design principles, which take into account the needs, expectations, and cognitive abilities of users. Unlike many deep learning networks, which are often "black boxes," explainable AI models are designed to be transparent: they provide visibility into how they arrive at their predictions or decisions, allowing people to understand how and why the AI system makes certain predictions or decisions.

XAI models can provide valuable assistance to radiologists and other healthcare professionals in interpreting X-ray images in several ways. First, XAI models can explain their predictions, allowing radiologists to understand the reasoning behind the model's decision. For example, an XAI model could highlight the areas of the X-ray image that most strongly influenced the prediction, or provide textual or visual explanations describing the features or patterns that contributed to it. This helps radiologists verify model results, gain insight into the model's decision-making process, and build confidence in its reliability. Second, such models can help identify potential errors or biases in the X-ray interpretation process. For example, if the model's explanation shows that a prediction was based on a small area of the image, the radiologist can double-check that region for artifacts or misinterpretations. This serves as a valuable error-checking mechanism and improves the overall accuracy of X-ray image interpretation. Finally, XAI models can enhance trust and transparency in AI-assisted interpretation of X-ray images: interpretable explanations help radiologists explain and communicate the basis for their diagnoses, increasing transparency and confidence in the decision-making process. In this research, we apply explainable AI to X-ray images of the lungs in order to diagnose lung infections such as pneumonia and abnormal buildup of fluid in the lungs.

Overview

Project 1 focuses on implementing explainable AI (XAI) techniques with the Captum library to create interpretable convolutional neural networks (CNNs) for pneumonia detection from chest X-ray images. By integrating XAI methods into the CNN model architecture, the project aims to enhance transparency, trust, and understanding of the model's decisions, ultimately improving diagnostic accuracy and clinical utility.
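To illustrate the kind of attribution Captum computes, the sketch below implements Integrated Gradients by hand for a toy stand-in classifier. The model, input size, and class labels here are placeholders rather than the project's actual architecture; Captum's `captum.attr.IntegratedGradients` provides the same computation as a ready-made API.

```python
import torch
import torch.nn as nn

# Toy stand-in for the pneumonia classifier (assumption: the real model
# is a deeper CNN trained on chest X-rays).
model = nn.Sequential(
    nn.Conv2d(1, 4, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(4, 2),  # classes: normal / pneumonia
)
model.eval()

def integrated_gradients(model, x, target, steps=50):
    """Approximate Integrated Gradients with a zero baseline:
    average the gradients along a straight path from baseline to input,
    then scale by (input - baseline)."""
    baseline = torch.zeros_like(x)
    total_grads = torch.zeros_like(x)
    for alpha in torch.linspace(0, 1, steps):
        interp = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(interp)[0, target]
        grad, = torch.autograd.grad(score, interp)
        total_grads += grad
    return (x - baseline) * total_grads / steps

x = torch.rand(1, 1, 32, 32)                 # placeholder X-ray
attr = integrated_gradients(model, x, target=1)
print(attr.shape)                            # attribution has the input's shape
```

The per-pixel attributions can then be rendered as a heatmap over the X-ray so that a radiologist can see which regions drove the prediction.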

Objectives

Methodology

Expected Outcomes

Conclusion

Project 1 aims to bridge the gap between AI-driven disease detection and clinical practice by developing interpretable CNN models for pneumonia detection. By integrating the Captum library and XAI techniques into the model architecture, the project seeks to provide clinicians with transparent and trustworthy insights into the model's predictions, ultimately improving patient outcomes and advancing the field of medical imaging analysis.



Project #2:

Augmented Reality Molecular Viewer with Speech Recognition 

Augmented reality (AR) has many potential applications in chemistry, from teaching and learning to research and development. One of the applications of AR is virtual lab simulations. AR can provide students with a realistic laboratory experience without the need for expensive equipment and hazardous chemicals. Students can manipulate virtual instruments and chemicals, perform experiments, and make observations in a safe and controlled environment. AR can also be used for collaboration and communication by providing a shared virtual workspace. Researchers can visualize and manipulate molecular structures together, share data and ideas, and communicate in real-time. AR can be used to visualize complex molecular structures and interactions in 3D, allowing researchers to understand the behavior of molecules better and design new drugs and materials.

Here we develop an AR mobile application for viewing molecules. AR mobile apps can offer a unique and accessible experience in molecular research that existing AR tools have not achieved. Such an AR mobile application does not require a high-end computer or an AR headset to run, making it accessible to more people. A speech recognition function can also be implemented for this application. Speech recognition can make an application more accessible to people with disabilities, including those with mobility problems, visual impairments, or learning disabilities. It can also help users complete their tasks more quickly and efficiently, since they can dictate text instead of typing it. By allowing users to interact with an application by voice, speech recognition makes the user experience more intuitive and natural, so there is no need to tediously learn the app's navigation features.
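Once a speech-to-text engine (for example, the platform's built-in recognizer) returns a transcript, the app still has to map it to a viewer action. A minimal sketch of that mapping step is shown below; the command names and parameters are illustrative, not the application's actual vocabulary.

```python
# Hypothetical phrase-to-action table for the molecular viewer.
COMMANDS = {
    "rotate left":  ("rotate", -15),   # degrees
    "rotate right": ("rotate", +15),
    "zoom in":      ("zoom", 1.25),    # scale factor
    "zoom out":     ("zoom", 0.8),
    "reset view":   ("reset", None),
}

def parse_command(transcript: str):
    """Return the (action, argument) pair for a recognized phrase."""
    phrase = transcript.lower().strip()
    for key, action in COMMANDS.items():
        if key in phrase:
            return action
    return ("unknown", None)

print(parse_command("Please zoom in a little"))  # ('zoom', 1.25)
```

Keeping the vocabulary small and phrase-based makes recognition robust even on noisy mobile microphones.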

Overview

The AR Molecular Viewer project aims to develop an immersive augmented reality (AR) mobile application for visualizing molecular structures. The application will incorporate speech recognition for navigation and a machine learning (ML) model for recognizing handwritten molecular formulas and generating files for 3D molecules.

Objectives

Methodology

Expected Outcomes

Conclusion

The AR Molecular Viewer project aims to combine AR technology, speech recognition, and machine learning to create an innovative tool for visualizing and interacting with molecular structures. By providing an immersive and intuitive interface, the application seeks to enhance learning and research in chemistry and molecular biology, offering new possibilities for education, exploration, and discovery.


Other Projects We Are Working On:

Project 3:

Application of Layer-wise Relevance Propagation (LRP) to a Custom ML Model for Cancer and Pneumonia Detection from X-Ray Images

Overview

This project focuses on the development and application of Layer-wise Relevance Propagation (LRP) methods to a custom machine learning (ML) model designed for the detection of cancer and pneumonia from X-ray images. The goal is to enhance the interpretability and diagnostic accuracy of the ML model by leveraging LRP techniques to understand and visualize the decision-making process of the model.


Objectives

Develop a Custom ML Model: Create a robust ML model capable of accurately detecting cancer and pneumonia from X-ray images.

Apply LRP Techniques: Implement Layer-wise Relevance Propagation to interpret the model’s predictions and provide visual explanations for its decisions.

Evaluate Performance: Assess the performance and interpretability of the ML model using standard metrics and LRP visualizations.

Enhance Diagnostic Confidence: Improve the diagnostic confidence of medical practitioners by providing transparent and interpretable model predictions.

Methodology

Data Collection and Pre-processing:

Data Source: Gather a large dataset of labeled X-ray images for both cancer and pneumonia, along with healthy controls.

Pre-processing: Normalize the images, apply augmentation techniques to increase dataset variability, and split the data into training, validation, and test sets.
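The splitting step above can be sketched as follows; the 80/10/10 ratios and the use of a fixed seed are illustrative assumptions, and a real pipeline would also apply normalization and augmentation (e.g., via torchvision transforms) on top of this.

```python
import random

def split_dataset(items, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle a list of samples and split it into train/val/test sets.
    A fixed seed keeps the split reproducible across runs."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_test = int(n * test_frac)
    n_val = int(n * val_frac)
    test = items[:n_test]
    val = items[n_test:n_test + n_val]
    train = items[n_test + n_val:]
    return train, val, test

train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```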

Model Development:

Architecture Design: Design a convolutional neural network (CNN) tailored for image classification tasks, focusing on detecting cancer and pneumonia.

Training: Train the CNN model using the training dataset, employing techniques such as early stopping, learning rate adjustments, and regularization to optimize performance.

Evaluation: Validate the model using the validation dataset, and fine-tune hyperparameters to achieve the best possible performance.
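The early-stopping technique mentioned in the training step can be captured in a small helper like the one below; the patience value and loss sequence are illustrative, not tuned settings from the project.

```python
class EarlyStopping:
    """Stop training when validation loss fails to improve
    for `patience` consecutive epochs."""
    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=2)
losses = [0.9, 0.7, 0.71, 0.72, 0.73]   # toy validation-loss curve
for epoch, loss in enumerate(losses):
    if stopper.step(loss):
        print(f"stopping at epoch {epoch}")  # stopping at epoch 3
        break
```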

Layer-wise Relevance Propagation (LRP):

LRP Implementation: Implement LRP techniques to trace back the contributions of individual pixels in the X-ray images to the model’s final prediction.

Visualization: Generate heatmaps highlighting regions of the X-ray images that significantly contribute to the prediction of cancer or pneumonia.
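At its core, LRP redistributes the output relevance backwards layer by layer. The sketch below shows the epsilon rule for a single dense layer in NumPy (the activations and weights are toy values, and a full implementation would apply this recursively through every convolutional layer); note that total relevance is approximately conserved, which is LRP's defining property.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out to the inputs of a dense
    layer z = a @ W using the LRP epsilon rule:
    R_j = sum_k (a_j * W_jk) / (z_k + eps*sign(z_k)) * R_k."""
    z = a @ W                              # pre-activations, shape (k,)
    s = R_out / (z + eps * np.sign(z))     # stabilized relevance ratios
    return a * (W @ s)                     # relevance per input, shape (j,)

a = np.array([1.0, 2.0, 0.5])              # toy layer activations
W = np.array([[0.3, -0.2],
              [0.1,  0.4],
              [0.5,  0.0]])
R_out = np.array([1.0, 0.5])               # relevance at the layer output
R_in = lrp_epsilon(a, W, R_out)
print(np.allclose(R_in.sum(), R_out.sum(), atol=1e-3))  # True: relevance conserved
```

Applied recursively down to the input layer, the resulting per-pixel relevances form the heatmaps described above.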

Model Interpretation and Analysis:

Interpretability: Analyze the LRP-generated heatmaps to understand the model’s decision-making process and ensure that the model focuses on medically relevant features.

Performance Metrics:

Classification Metrics: Evaluate the model's classification performance using accuracy, precision, recall, and F1 score.

AUC-ROC: Assess the model’s ability to distinguish between classes using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC).

Interpretability Metrics: Measure the interpretability of the model using qualitative assessments of the LRP heatmaps by medical experts.
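The classification metrics listed above are all available in scikit-learn; the sketch below computes them on a small set of made-up predictions (1 = disease present) purely to show the API, not real evaluation results.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Illustrative labels and predictions (1 = disease present)
y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred  = [0, 1, 1, 1, 0, 0, 1, 0]
y_score = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3]  # predicted probabilities

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.75
print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall   :", recall_score(y_true, y_pred))     # 0.75
print("f1       :", f1_score(y_true, y_pred))         # 0.75
print("auc-roc  :", roc_auc_score(y_true, y_score))   # 0.9375
```

AUC-ROC uses the continuous scores rather than the hard labels, which is why it is reported separately above.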

Expected Outcomes

High-Accuracy Detection: A high-performing ML model capable of accurately detecting cancer and pneumonia from X-ray images.

Interpretable Predictions: Enhanced interpretability of the model’s predictions through LRP-generated visual explanations.

Clinical Relevance: Increased trust and confidence among medical practitioners in the model’s predictions, facilitating its potential integration into clinical workflows.

Conclusion

By integrating Layer-wise Relevance Propagation with a custom ML model, this project aims to advance the field of medical image analysis by providing a reliable, accurate, and interpretable tool for the detection of cancer and pneumonia from X-ray images. The combination of high diagnostic accuracy and enhanced transparency is expected to significantly aid medical professionals in making informed decisions, ultimately improving patient outcomes.



Project 4:

Mobile Application for Early Detection of Pneumonia and Tuberculosis Using Advanced Image Analysis

Overview

This project aims to develop a mobile application designed to assist in the early detection of pneumonia and tuberculosis using advanced image analysis techniques. The application will leverage a dataset of chest X-ray images to identify four types of lung disease: bacterial pneumonia, viral pneumonia, tuberculosis, and COVID-19. The software will be developed with Xcode for iOS and Android Studio for Android.

Objectives

Methodology

Expected Outcomes

Conclusion

By combining advanced image analysis techniques with mobile technology, this project aims to provide an innovative tool for the early detection of pneumonia and tuberculosis. The application will offer a reliable and accessible solution for medical professionals and patients, contributing to improved healthcare outcomes through timely diagnosis and intervention.


Project 5: 

Convolutional Neural Network Model to Determine the Location of Microvasculature Structures (Blood Vessels) within Human Kidney Histology Slides

Overview

This project aims to develop a convolutional neural network (CNN) model to accurately identify and locate microvasculature structures, including blood vessels such as capillaries, arterioles, and venules, within human kidney histology slides. Leveraging a dataset of annotated histology slides, the project will utilize Anaconda and Python for software development to create a robust and precise image analysis tool.
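Localization models of this kind are commonly scored with overlap metrics against the annotated slides. A minimal sketch of intersection-over-union (IoU) on binary masks is shown below; the tiny masks are toy data, not real histology annotations.

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection-over-Union between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union if union else 1.0

# Toy 4x4 masks: a predicted vessel region vs. the annotated one
pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1   # 4 predicted pixels
true = np.zeros((4, 4), int); true[1:3, 2:4] = 1   # 4 annotated pixels
print(round(iou(pred, true), 4))  # 0.3333  (2 shared pixels / 6 total)
```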

Objectives

Methodology

Expected Outcomes

Conclusion

This project aims to advance the field of histopathological image analysis by developing a state-of-the-art CNN model for detecting microvasculature structures in human kidney histology slides. The combination of high accuracy, robust implementation, and clinical relevance is expected to provide a valuable tool for medical research and diagnostics, ultimately contributing to improved understanding and treatment of kidney diseases.


Project 6: 

Super-resolution and Image Quality Enhancement Using a Generative Adversarial Network 

Overview

This project aims to design a Generative Adversarial Network (GAN) architecture capable of not only enhancing image resolution but also improving overall image quality by addressing issues such as noise reduction and artifact removal. The enhanced images generated by the GAN will be utilized to improve the accuracy of a ResNet18 model in analyzing chest X-ray images. The software development will be carried out using Anaconda and Python, complementing the image analysis capabilities of Project 1.
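Super-resolution GANs are typically evaluated with image-quality metrics such as peak signal-to-noise ratio (PSNR). The sketch below computes PSNR in NumPy on a synthetic noisy image; the noise level and image size are arbitrary assumptions chosen only to demonstrate the metric.

```python
import numpy as np

def psnr(reference, enhanced, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference image
    and an enhanced/reconstructed one. Higher is better."""
    mse = np.mean((reference.astype(float) - enhanced.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)        # toy "clean" image
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 1))  # roughly 28 dB for sigma-10 noise
```

In this project, an increase in PSNR of the GAN output over the raw input would indicate that denoising and artifact removal are working before the images reach the ResNet18 classifier.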

Objectives

Methodology

Expected Outcomes

Conclusion

By combining the capabilities of GAN-based image enhancement with deep learning-based image analysis, this project aims to improve the accuracy and reliability of chest X-ray diagnostics. The generation of high-quality images free from noise and artifacts is expected to enhance the performance of the ResNet18 model, ultimately leading to better patient outcomes and more effective healthcare interventions.