Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang and Sanjay Purushotham, "Cloud Optical Thickness Retrievals Using Angle Invariant Attention Based Deep Learning Models," 2025 IEEE International Conference on Image Processing (ICIP), September 2025.
Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang and Sanjay Purushotham, "Joint Retrieval of Cloud Properties Using Attention-Based Deep Learning Models," 2025 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), August 2025.
Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang and Sanjay Purushotham, "CloudUNet: Adapting UNet for Retrieving Cloud Properties," 2024 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2024, pp. 8170–8174.
M. M. Islam and Z. H. Tushar, "Interpreting and Comparing Convolutional Neural Networks: A Quantitative Approach," 2021 5th International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), 2021, pp. 1-6, doi: 10.1109/ICEEICT53905.2021.9667854.
Musharrat Shabnam, Imtiaz Hasan Chowdhury, Zahid Hassan Tushar, Sunjida Sultana and Md Hossam-E-Haider, "Performance Evaluation of GNSS Receiver in Multi-Constellation System," International Conference on Electrical, Computer and Communication Engineering, 16–18 February 2017, Cox's Bazar, Bangladesh.
Zahid Hassan Tushar, Adeleke Ademakinwa, Zhibo Zhang and Sanjay Purushotham, "Enhancing Aerosol and Cloud Retrievals Based on Hyperspectral Observations with Deep Learning: A Case Study with PACE-OCI," presented at UMBC COEIT Research Day 2025 and UMBC Annual Research Symposium 2025.
Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang and Sanjay Purushotham, "Joint Retrieval of Cloud Properties Using Attention-Based Deep Learning Models," presented at UMBC COEIT Research Day 2025 and UMBC Annual Research Symposium 2025.
Zahid Hassan Tushar, Adeleke Ademakinwa, Jianwu Wang, Zhibo Zhang and Sanjay Purushotham, "Joint Cloud Optical Thickness and Cloud Effective Radius Property Retrievals Using Attention-Based Deep Learning Models," AGU Annual Meeting, December 9–13, 2024, Washington, DC, USA.
Zahid Hassan Tushar and Sanjay Purushotham, "A Study of Federated Adversarial Learning for Automatic Speech Recognition," presented at the Annual Review Meeting of the Army Research Laboratory, University of Maryland, College Park, May 2022.
Zahid Hassan Tushar and Sanjay Purushotham, "A Study of Federated Adversarial Learning for Automatic Speech Recognition," presented at the IS Symposium 2022, Department of Information Systems, UMBC, April 2022.
Title: Self-supervised Deep Learning for Image Classification
Supervisor: Dr. Marcos Escudero Vinolo, Assistant Professor, Universidad Autonoma de Madrid (UAM), Spain.
and Dr. Pablo Carballeira López, Assistant Professor, Universidad Autonoma de Madrid (UAM), Spain.
Abstract: Convolutional neural networks (CNNs) have revolutionized artificial vision analysis, yielding close-to-human accuracy on challenging vision tasks by leveraging large annotated datasets. However, generating and annotating these datasets is time-consuming and, in some domains such as medical imaging, requires expensive expertise. Self-Supervised Learning (SSL) has proven to be a successful strategy to tackle this problem. SSL does not use annotations; instead, it generates pseudo-labels by means of a pretext task (e.g., recognizing different augmented views of the same image) to train the CNN to learn high-level semantics that are useful for downstream vision tasks, after which the CNN is fine-tuned with small annotated datasets. Combining multiple pretext tasks has been shown to improve performance over a single pretext task. However, the joint optimization of multiple tasks carries the risk of inter-task interference. Curriculum learning suggests that a CNN learns better features when training samples are presented in order of increasing complexity. This raises the question: is there a complexity ordering of pretext tasks that leads a CNN to learn better features in multi-task SSL? To investigate this question, a general framework was necessary that would allow self-supervised training with multiple pretext tasks and evaluation on established benchmarks. The formulation of such a framework was built on the OpenSelfSup framework, as it gathers implementations of popular SSL pretext tasks in a single place. The OpenSelfSup framework was modified substantially to facilitate experiments combining multiple pretext tasks. A set of experiments was executed on the VOC07 and ISIC2017 datasets to shed light on the posed question. In most cases, there was no improvement in the performance of these models after multi-task training.
There were exceptions, however: the performance of the SimCLR model pretrained on ImageNet improved upon re-training with the BYOL pretext task on VOC07, and the same happened for the MoCo model pretrained on ImageNet after re-training with the BYOL pretext task on ISIC2017. These results suggest that the aforementioned pretext tasks may complement each other toward learning better feature representations.
Title: Performance Evaluation of GNSS Receiver in Multi-Constellation System Based on DOP, Reliability, and Interoperability
Supervisor: Dr. Hossam-E-Haider, Professor, Dept. of EECE, MIST, Dhaka, Bangladesh.