GitHub Repo : github.com/Oshada-Kasun/skin-cancer-detection
Demo URL: click here
Model development: VGG19 pre-trained network, data augmentation on malignant images, fine-tuning, and hyper-parameter tuning, reaching an overall test F1-score of 0.81.
Included explainable AI using Grad-CAM to generate heatmaps that improve the confidence and reliability of the results.
Developed a web app using FastAPI and deployed it with Docker on an AWS EC2 instance. The UI allows users to upload skin images and returns a class label (benign or malignant/melanoma), a class probability, and a heatmap highlighting the regions most important to the classification.
The skin images are randomly selected without regard to age, sex, or type of diagnosis, to avoid representation bias in the data.
We start with exploratory data analysis to examine the details of the malignant and benign samples.
This is followed by splitting all malignant images and a randomly chosen subset of benign images into train/validation/test datasets.
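A minimal sketch of such a split, assuming per-class image folders and a 70/15/15 ratio (both are assumptions; the repository may use different paths and proportions):

```python
# Sketch of the train/validation/test split; directory names and ratios are assumptions.
from pathlib import Path
import random
import shutil

random.seed(42)

def split_files(files, train=0.70, val=0.15):
    """Shuffle a list of file paths and return (train, val, test) sublists."""
    files = list(files)
    random.shuffle(files)
    n_train = int(len(files) * train)
    n_val = int(len(files) * val)
    return files[:n_train], files[n_train:n_train + n_val], files[n_train + n_val:]

def copy_split(class_name, src_dir, dst_root="data"):
    """Copy one class's images into data/{train,val,test}/{class_name}/."""
    splits = dict(zip(("train", "val", "test"),
                      split_files(Path(src_dir).glob("*.jpg"))))
    for split, paths in splits.items():
        out = Path(dst_root) / split / class_name
        out.mkdir(parents=True, exist_ok=True)
        for p in paths:
            shutil.copy(p, out / p.name)

copy_split("malignant", "raw/malignant")    # all malignant images
copy_split("benign", "raw/benign_sampled")  # randomly chosen benign subset
```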
We augment the malignant images so that the training dataset has a 50:50 distribution of benign and malignant images.
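One way to achieve this balance is offline augmentation of the malignant training images; the transform parameters below are assumptions, not the exact settings used in the repository:

```python
# Sketch of offline augmentation of the malignant training class using Keras'
# ImageDataGenerator; transform ranges and directory names are assumptions.
from pathlib import Path
from tensorflow.keras.preprocessing.image import (
    ImageDataGenerator, load_img, img_to_array, array_to_img,
)

augmenter = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.2,
    horizontal_flip=True,
    vertical_flip=True,
)

src = Path("data/train/malignant")
images = sorted(src.glob("*.jpg"))
n_benign = len(list(Path("data/train/benign").glob("*.jpg")))
n_needed = n_benign - len(images)          # extra malignant samples for a 50:50 split

for i in range(max(n_needed, 0)):
    path = images[i % len(images)]
    x = img_to_array(load_img(path))
    x_aug = augmenter.random_transform(x)  # one random augmented variant
    array_to_img(x_aug).save(src / f"aug_{i}_{path.name}")
```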
We compare the results of the pre-trained EfficientNet B3, B4, B5, and B6, InceptionV3, and VGG19 models, and VGG19 turns out to be far superior. So we fine-tune the pre-trained VGG19 model before fitting it to the training data.
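A fine-tuning sketch along these lines, assuming a 224x224 input, all layers frozen except the last convolutional block, and a small sigmoid classification head (the input size, frozen-layer boundary, head layers, and learning rate are all assumptions):

```python
# Sketch of fine-tuning a pre-trained VGG19 backbone for binary classification.
import tensorflow as tf
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers[:-4]:             # freeze everything but the last conv block
    layer.trainable = False

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # benign vs. malignant
model = tf.keras.Model(base.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # small LR for fine-tuning
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC()],
)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```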
The heatmaps are generated using Grad-CAM (Gradient-weighted Class Activation Mapping) to incorporate explainable AI. The trained model is then saved in TFLite and H5 file formats for deployment.
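The Grad-CAM computation and the export step could look roughly like this; the layer name block5_conv4 follows the standard VGG19 architecture, while the file names and the `model` variable (the fine-tuned VGG19 from the previous sketch) are assumptions:

```python
# Sketch of Grad-CAM heatmap generation and model export.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="block5_conv4"):
    """Return a heatmap of the regions driving the prediction for one image."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, 0]                       # sigmoid output (malignant score)
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Export for deployment: H5 for the FastAPI service, TFLite for lightweight serving.
# "model" is the fine-tuned VGG19 from the previous sketch.
model.save("skin_cancer_vgg19.h5")
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
open("skin_cancer_vgg19.tflite", "wb").write(tflite_model)
```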
The model is deployed as a web app, built with FastAPI, that produces a prediction (malignant or benign) along with the associated confidence value of that prediction and the heatmap.
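A minimal sketch of such an endpoint, assuming the saved H5 model, simple 0-1 rescaling as preprocessing, and the grad_cam helper from the previous sketch (the route, file names, and preprocessing are assumptions):

```python
# Sketch of the FastAPI prediction endpoint.
import io
import numpy as np
import tensorflow as tf
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()
model = tf.keras.models.load_model("skin_cancer_vgg19.h5")  # assumed file name

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Read and resize the uploaded image to match the model's input size.
    img = Image.open(io.BytesIO(await file.read())).convert("RGB").resize((224, 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    prob = float(model.predict(x[np.newaxis, ...])[0][0])
    label = "malignant" if prob >= 0.5 else "benign"
    heatmap = grad_cam(model, x)           # helper from the Grad-CAM sketch above
    return {
        "label": label,
        "probability": prob if label == "malignant" else 1.0 - prob,
        "heatmap": heatmap.tolist(),       # the UI overlays this on the input image
    }
```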
Finally, the web app is containerized using Docker and deployed on an AWS EC2 instance, with all necessary components configured.
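A bare-bones Dockerfile for this setup might look as follows; the base image, port, and module name are assumptions:

```dockerfile
# Sketch of the container setup for the FastAPI service.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8000

# Serve the FastAPI app (assumed to live in app.py as "app") with uvicorn.
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```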