Deep learning has achieved remarkable performance across many fields, but it remains vulnerable to adversarial threats and suffers from limited interpretability.
My research focuses on developing trustworthy and secure AI systems. I investigate adversarial attacks such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), design defense strategies against them, and apply deep learning to real-world problems including data analytics and image classification.
By addressing both vulnerability and applicability, I aim to improve the robustness and reliability of deep neural networks in practical environments.
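To make the threat model concrete, here is a minimal PyTorch-style sketch of FGSM, the single-step attack mentioned above. It is an illustration under common assumptions (inputs scaled to [0, 1], an L-infinity budget `epsilon`), not the exact code used in my projects, and the function name is illustrative.

```python
import torch

def fgsm_attack(model, loss_fn, images, labels, epsilon):
    """FGSM: take one signed-gradient step of size epsilon on the inputs,
    producing an L-infinity bounded adversarial perturbation."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    # Keep the perturbed images in the valid pixel range.
    return adv_images.clamp(0.0, 1.0).detach()
```

A typical call would be `fgsm_attack(model, torch.nn.CrossEntropyLoss(), x, y, epsilon=8/255)`, where 8/255 is a common (illustrative) budget for image classifiers.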
Despite the strong performance of Vision Transformers (ViTs) in image classification tasks, they remain vulnerable to adversarial examples—inputs crafted to fool the model with imperceptible perturbations. This vulnerability poses serious concerns for real-world deployment.
In this study, we apply adversarial training to a ViT-B32 model on the CIFAR-10 dataset to improve robustness against three major attack types: FGSM, PGD, and Carlini-Wagner (CW). Experimental results demonstrate that incorporating adversarial examples into the training process significantly enhances the model's resistance to attacks, especially when using a combination of FGSM and PGD. Notably, robustness against CW attacks required explicitly including CW examples during training.
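As an illustration of the training procedure, the following PyTorch-style sketch generates PGD adversarial examples on the fly and trains on them; FGSM corresponds to a single PGD step with `alpha = epsilon`. This is a simplified sketch, not the exact implementation used in this study, and the hyperparameters (`epsilon=8/255`, `alpha=2/255`, `steps=10`) are common CIFAR-10 defaults rather than values reported here.

```python
import torch

def pgd_attack(model, loss_fn, images, labels, epsilon, alpha, steps):
    """PGD: iterate signed-gradient steps and project back into the
    epsilon ball around the clean images. steps=1 with alpha=epsilon
    reduces to FGSM."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # Project onto the L-infinity ball and the valid pixel range.
        adv = torch.min(torch.max(adv, images - epsilon), images + epsilon)
        adv = adv.clamp(0.0, 1.0)
    return adv.detach()

def adversarial_training_epoch(model, loader, optimizer, loss_fn,
                               epsilon=8/255, alpha=2/255, steps=10):
    """One epoch of adversarial training: each batch is replaced by
    adversarial examples crafted against the current model."""
    model.train()
    for images, labels in loader:
        adv_images = pgd_attack(model, loss_fn, images, labels,
                                epsilon, alpha, steps)
        optimizer.zero_grad()
        loss = loss_fn(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```

In practice, clean and adversarial batches can also be mixed to preserve clean accuracy; the sketch above trains purely on adversarial examples for brevity.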
Our findings show that adversarial training can maintain high classification accuracy on clean images while significantly improving robustness under adversarial conditions. This highlights the practical potential of ViT models in secure and trustworthy AI systems.
We trained a federated learning system on the NIH ChestX-ray14 dataset, splitting it across 50 clients to simulate multiple hospitals. Each client trained locally on heterogeneous models (ResNet, DenseNet, VGG, etc.), and the server aggregated updates using FedAvg to build a global model. Our approach achieved accuracy within 1–2% of centralized training while preserving patient privacy. Communication-efficient strategies reduced training time by about 30%, and the model converged reliably despite client heterogeneity. This work demonstrates a practical way to build privacy-preserving medical AI systems without sacrificing performance.
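For reference, here is a minimal sketch of the FedAvg aggregation step in PyTorch. It assumes clients sharing a common architecture within an aggregation group, since plain FedAvg averages identically shaped parameters, and the `local_train_fn` helper is hypothetical; the code is an illustration of the aggregation rule, not our exact system.

```python
import copy
import torch

def fedavg_aggregate(client_states, client_sizes):
    """FedAvg: average client state_dicts, weighting each client by the
    size of its local dataset."""
    total = sum(client_sizes)
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state.keys():
        avg_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg_state

def federated_round(global_model, clients, local_train_fn):
    """One communication round: each client fine-tunes a copy of the
    global model locally, then the server averages the returned weights."""
    states, sizes = [], []
    for client in clients:
        local_model = copy.deepcopy(global_model)
        # local_train_fn is a hypothetical helper that trains the local
        # model on the client's data and returns its number of examples.
        num_examples = local_train_fn(local_model, client)
        states.append(local_model.state_dict())
        sizes.append(num_examples)
    global_model.load_state_dict(fedavg_aggregate(states, sizes))
    return global_model
```

Weighting by local dataset size mirrors the original FedAvg formulation, so hospitals with more chest X-rays contribute proportionally more to the global model.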