My deep learning journey is rooted in my background in digital signal processing and radar systems. With over three years of experience designing, training, and deploying neural networks in TensorFlow and PyTorch, I have consistently pushed the boundaries of real-time signal processing and human activity recognition. My research focuses on making models more interpretable, computationally efficient, and applicable to real-world scenarios such as radar sensing, wearable devices, and autonomous systems.
My deep learning work primarily serves the following domains:
RF Sensing & Radar-Based Classification
Designed and developed multiple convolutional and complex-valued neural networks to classify raw radar IQ data, micro-Doppler spectrograms, and RF modulation waveforms. These networks have been tailored for (a minimal classification sketch follows this list):
Human activity recognition
Sign language classification
Traffic signaling motion detection
Automatic modulation recognition (AMR)
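To make this concrete, here is a minimal PyTorch sketch of a small 2D CNN classifier for micro-Doppler spectrograms. The input size (128x128, single channel), the channel counts, and the 10-class output are illustrative placeholders, not the architectures used in my published work.

```python
# Minimal sketch: a small 2D CNN for micro-Doppler spectrogram classification.
# Input resolution, layer widths, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class MicroDopplerCNN(nn.Module):
    """Maps a single-channel micro-Doppler spectrogram to class logits."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, doppler_bins, time_frames)
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Example: one batch of 4 spectrograms.
logits = MicroDopplerCNN()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 10])
```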
Time-Frequency Domain Learning
Developed HRSpecNet and HRFreqNet, deep architectures that generate high-resolution spectrograms and frequency estimates directly from 1D time-domain radar signals, overcoming the traditional resolution limitations of the STFT and FFT.
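The core idea can be sketched as an encoder-decoder that maps a raw 1D signal directly to a 2D time-frequency image, trained against a higher-resolution reference. The sketch below is only illustrative: the 1024-sample input, the 128x128 output grid, and the layer widths are assumptions, not the published HRSpecNet/HRFreqNet designs.

```python
# Illustrative sketch (not the published HRSpecNet/HRFreqNet): a 1D encoder
# followed by a 2D decoder that outputs a time-frequency image.
import torch
import torch.nn as nn

class Signal2Spectrogram(nn.Module):
    def __init__(self, sig_len: int = 1024, out_size: int = 128):
        super().__init__()
        # 1D encoder: compress the raw time-domain signal into a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (sig_len // 4), 64 * (out_size // 8) ** 2),
        )
        self.out_size = out_size
        # 2D decoder: upsample the latent map to the target time-frequency grid.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, sig_len) real-valued time-domain samples
        z = self.encoder(x)
        z = z.view(-1, 64, self.out_size // 8, self.out_size // 8)
        return self.decoder(z)  # (batch, 1, out_size, out_size)

spec = Signal2Spectrogram()(torch.randn(2, 1, 1024))
print(spec.shape)  # torch.Size([2, 1, 128, 128])
```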
Explainable AI (XAI) and Learnable Filters
My most impactful contribution lies in designing Parameterized Learnable Filters (PLFs) such as Sinc, Gaussian, Gammatone, and Ricker filters. These are integrated as the first layers of CNNs to offer (a Sinc-filter sketch follows this list):
Interpretability: Learnable filters highlight meaningful frequency features.
Efficiency: Reduced model size and computational complexity.
Real-time Inference: Suitable for deployment on embedded platforms and edge devices.
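The sketch below shows the learnable band-pass (Sinc) filter idea as a CNN front end. The cutoff initialization, the Hamming window, and the layer sizes are illustrative choices in the spirit of SincNet-style layers, not the published PLFNet or CV-SincNet implementations.

```python
# Minimal sketch of a parameterized learnable Sinc (band-pass) filter layer.
# Initialization and windowing choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SincConv1d(nn.Module):
    """1D convolution whose kernels are band-pass filters defined by two
    learnable quantities per filter: a low cutoff and a bandwidth
    (normalized frequencies in [0, 0.5])."""
    def __init__(self, out_channels: int = 32, kernel_size: int = 101):
        super().__init__()
        assert kernel_size % 2 == 1, "use an odd kernel size for a symmetric filter"
        self.kernel_size = kernel_size
        # Learnable cutoffs: two parameters per filter instead of kernel_size weights.
        self.low_hz = nn.Parameter(torch.linspace(0.01, 0.4, out_channels))
        self.band_hz = nn.Parameter(torch.full((out_channels,), 0.05))
        # Fixed pieces of the filter: symmetric time axis and Hamming window.
        n = torch.arange(kernel_size) - (kernel_size - 1) / 2
        self.register_buffer("n", n)
        self.register_buffer("window", torch.hamming_window(kernel_size, periodic=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Constrain the band to lie inside [0, 0.5] (normalized frequency).
        low = torch.clamp(torch.abs(self.low_hz), 0.0, 0.5)
        high = torch.clamp(low + torch.abs(self.band_hz), 0.0, 0.5)
        # Band-pass impulse response = difference of two ideal low-pass responses.
        t = self.n.unsqueeze(0)                               # (1, kernel_size)
        hp = 2 * high.unsqueeze(1) * torch.sinc(2 * high.unsqueeze(1) * t)
        lp = 2 * low.unsqueeze(1) * torch.sinc(2 * low.unsqueeze(1) * t)
        kernels = ((hp - lp) * self.window).unsqueeze(1)      # (out_ch, 1, kernel_size)
        return F.conv1d(x, kernels, padding=self.kernel_size // 2)

# Example: 32 learnable band-pass filters applied to a raw 1-channel signal.
y = SincConv1d()(torch.randn(4, 1, 1024))
print(y.shape)  # torch.Size([4, 32, 1024])
```

Because each filter is defined by only two parameters, the learned kernels remain directly interpretable as frequency bands, and the first layer stays small enough for embedded deployment.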
Sensor Fusion and Multimodal Networks
Built multimodal fusion networks that combine LiDAR point cloud videos and radar micro-Doppler images for advanced perception tasks, particularly in autonomous vehicle scenarios.
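One simple way to illustrate the fusion strategy is a late-fusion model with one encoder per modality. In the sketch below, the LiDAR stream is assumed to be pre-projected into a stack of bird's-eye-view frames and the radar stream rendered as a micro-Doppler image; the layer sizes and the 4-class head are placeholders, not the published fusion model.

```python
# Late-fusion sketch: one CNN encoder per modality, embeddings concatenated.
# Input formats and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

def conv_branch(in_ch: int, emb_dim: int = 64) -> nn.Sequential:
    """Small 2D CNN encoder producing a fixed-size embedding."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim), nn.ReLU(),
    )

class RadarLidarFusion(nn.Module):
    def __init__(self, num_classes: int = 4, lidar_frames: int = 8):
        super().__init__()
        self.radar_enc = conv_branch(in_ch=1)             # micro-Doppler image
        self.lidar_enc = conv_branch(in_ch=lidar_frames)  # stacked BEV frames
        self.head = nn.Linear(64 + 64, num_classes)       # fuse by concatenation

    def forward(self, radar: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.radar_enc(radar), self.lidar_enc(lidar)], dim=1)
        return self.head(fused)

model = RadarLidarFusion()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 8, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```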
Wi-Fi-Based Deep Learning
Conducted comparative deep learning studies using mmWave radar and Wi-Fi CSI data, demonstrating the versatility of deep neural networks in classifying human activities across RF modalities.
Selected Publications
PLFNet: Interpretable, parameterized filter-based CNN for raw RF classification
IEEE Transactions on Radar Systems, 2024
CV-SincNet: Complex-valued Sinc filters for radar-based ASL classification
IEEE Transactions on Radar Systems, 2023
HRSpecNet: Spectrogram super-resolution using autoencoders and U-Net
IEEE Transactions on Radar Systems, 2024
Radar-LiDAR Sensor Fusion: Deep learning for classifying traffic signals
IEEE Radar Conference, 2023
Learnable Gaussian Filters for AMR: Proposed Gaussian-based filter CNNs for raw IQ RF waveform classification
Accepted, IEEE Radar Conference, 2025
I’m currently exploring the use of time-gated complex-valued learnable filters in CNNs to directly classify a 56-class real-world RF waveform dataset, aiming to push the limits of interpretability and robustness under unknown channel conditions. I am also working on a lightweight CNN using learnable 2D Gaussian filters for real-time ultrasound image classification, as well as for bearing fault diagnosis in rotary machinery.
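As a rough illustration of the 2D Gaussian filter idea, the sketch below parameterizes each convolution kernel by two learnable widths (an anisotropic Gaussian). The kernel size, filter count, and normalization are assumptions made for the example, not the design currently under development.

```python
# Sketch of a learnable 2D Gaussian filter layer: each kernel is a Gaussian
# with two learnable widths, so a filter costs 2 parameters instead of K*K.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Gaussian2d(nn.Module):
    def __init__(self, out_channels: int = 16, kernel_size: int = 11):
        super().__init__()
        self.kernel_size = kernel_size
        # One learnable width per axis and per filter (log-space keeps sigma > 0).
        self.log_sigma = nn.Parameter(torch.zeros(out_channels, 2))
        ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
        yy, xx = torch.meshgrid(ax, ax, indexing="ij")
        self.register_buffer("yy", yy)
        self.register_buffer("xx", xx)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = torch.exp(self.log_sigma)                    # (C, 2)
        sy = sigma[:, 0].view(-1, 1, 1)
        sx = sigma[:, 1].view(-1, 1, 1)
        kernels = torch.exp(-(self.yy**2 / (2 * sy**2) + self.xx**2 / (2 * sx**2)))
        kernels = kernels / kernels.sum(dim=(-2, -1), keepdim=True)  # unit gain
        kernels = kernels.unsqueeze(1)                       # (C, 1, K, K)
        return F.conv2d(x, kernels, padding=self.kernel_size // 2)

# Example: 16 learnable Gaussian filters as the first layer on a grayscale image.
y = Gaussian2d()(torch.randn(1, 1, 224, 224))
print(y.shape)  # torch.Size([1, 16, 224, 224])
```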