Skin diseases are the fourth most common cause of human illness worldwide, affecting nearly one-third of the world's population. Although they are often underestimated despite their visible symptoms and burdens, they can be fatal at severe stages. It is therefore crucial to diagnose skin diseases early and direct patients to treatment as soon as possible.
To reduce delays in diagnosis, many researchers have developed AI assistants capable of diagnosing skin diseases. These assistants can support dermatologists in making faster and more accurate decisions. However, dermatologists, like most other medical practitioners, hesitate to trust such assistants because of their black-box nature, regardless of how accurate they are. This highlights the need for applications that accompany each prediction with an explanation of how the system reached its decision. Moreover, most existing models support only a handful of diseases, even though critical new diseases for which such applications would be essential could emerge in the future. Rebuilding the application for every new disease is inefficient and multiplies costs. The solution is a flexible skin disease classifier that can make room for and accept new diseases.
This study applies deep learning techniques to classify melanoma skin cancer images with explainability. The aim is to improve classification accuracy on skin images and to provide a trustworthy, usable automated support tool for skin cancer diagnosis.
We propose an interpretable, continual, deep learning-based skin disease classifier embedded in a responsive web interface. It uses a decision-making approach analogous to human reasoning and produces predictions with human-interpretable reasoning. It also uses a replay-based class-incremental learning approach to learn to classify new diseases continually; a sketch of this rehearsal scheme follows.
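To make the continual-learning step concrete, below is a minimal sketch of replay-based class-incremental training in PyTorch. All names here (ReplayBuffer, train_increment, the exemplar count) are illustrative assumptions rather than the project's actual implementation; the idea is simply to keep a few exemplars per already-learned class and mix them into the batches when a new disease class is added.

```python
# Sketch only: assumes datasets yield (image tensor, int label) pairs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, ConcatDataset, TensorDataset

class ReplayBuffer:
    """Keeps a small exemplar set per learned class for rehearsal."""
    def __init__(self, exemplars_per_class=20):
        self.exemplars_per_class = exemplars_per_class
        self.store = {}  # class id -> list of (image, label) pairs

    def add(self, dataset, class_ids):
        for x, y in dataset:
            if y in class_ids:
                bucket = self.store.setdefault(y, [])
                if len(bucket) < self.exemplars_per_class:
                    bucket.append((x, y))

    def as_dataset(self):
        pairs = [p for bucket in self.store.values() for p in bucket]
        xs = torch.stack([x for x, _ in pairs])
        ys = torch.tensor([y for _, y in pairs])
        return TensorDataset(xs, ys)

def train_increment(model, new_data, buffer, epochs=5, lr=1e-4):
    """Train on the new classes mixed with replayed exemplars of old ones."""
    mixed = ConcatDataset([new_data, buffer.as_dataset()]) if buffer.store else new_data
    loader = DataLoader(mixed, batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
```

In practice the buffer is refilled from each increment's data after training on it, so earlier diseases keep appearing in later training batches instead of being forgotten.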
We use images extracted from the public HAM10000 and ISIC datasets. We first investigate the effectiveness of a mask-guided (segmentation-based) deep learning method for melanoma diagnosis. We then apply two main model families, convolutional neural networks (CNNs) and Vision Transformers (ViTs), for melanoma identification. Finally, explainability methods such as Grad-CAM and Grad-CAM++ are used to produce interpretable results. The outcome of this research is an efficient, interpretable classification model that can be integrated into a web application for clinical use as a second-opinion tool.
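As a sketch of the mask-guided step: the soft lesion mask produced by the segmentation network is thresholded and used to suppress background skin before the image reaches the classifier. The function name and threshold below are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def apply_lesion_mask(image: np.ndarray, mask: np.ndarray,
                      threshold: float = 0.5) -> np.ndarray:
    """Zero out non-lesion pixels.

    image: HxWx3 float array; mask: HxW soft mask in [0, 1] from the segmenter.
    """
    binary = (mask > threshold).astype(image.dtype)  # binarise the soft mask
    return image * binary[..., None]                 # broadcast over channels
```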
Our current implementation achieved 98.3% classification accuracy with the Xception model and 92.79% with the ViT-based model under the mask-guided approach. The U2Net model generated highly accurate segmentation masks.
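For the explainability step mentioned above, a from-scratch Grad-CAM sketch is shown below (Grad-CAM++ differs only in how the channel weights are computed). The target-layer choice is an assumption; for an Xception-style CNN the last convolutional block is the usual pick.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """image: 1xCxHxW tensor; returns an HxW heatmap in [0, 1]."""
    feats, grads = {}, {}
    fh = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    bh = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    logits = model(image)
    cls = int(logits.argmax()) if class_idx is None else class_idx
    model.zero_grad()
    logits[0, cls].backward()          # gradients of the chosen class score
    fh.remove(); bh.remove()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP over spatial dims
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

Overlaying the normalised heatmap on the input image then highlights the lesion regions that drove the prediction.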
This study can be extended in several research directions. The proposed approach can be tested on much larger and more diverse datasets to explore its generalizability. A union of attribute masks could also be used in place of the segmentation masks in each classification approach.
Datasets:
Derm7pt: 1,011 images of melanoma and non-melanoma skin lesions
Evaluation:
User study for results validation
System usability study for the web application
Resource person (Project 1): Ms. Fathima Afra
Resource person (Project 2): Ms. Shehana Iqbal
Supervisor: