My research interests span several aspects of machine learning, including adversarial deep learning, machine learning-based target detection and recognition, security and privacy, and uncertainty quantification. See below for a sample of selected projects that I have worked on.
The recent exponential growth of wireless traffic has crowded the radio spectrum, which, among other factors, has degraded mobile network efficiency. Several works have proposed deep learning and AI techniques to alleviate this inefficiency in shared spectrum environments by dynamically extracting meaningful information from massive streams of wireless data. Yet, deep learning classifiers have been shown to be susceptible to adversarial jamming attacks: low-powered interference signals injected into transmissions to induce erroneous behavior from deep learning models at the receiver. In this work, we investigated methods to mitigate such adversarial interference by (i) detecting the presence of adversarial signals and (ii) developing signal processing algorithms that increase the classification rate on adversarial signals.
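The attack model can be illustrated with a minimal sketch. Nothing below is taken from the papers: the linear softmax "classifier" with random weights is a hypothetical stand-in for a trained deep modulation classifier, and the one-step gradient-sign perturbation (FGSM) under an L-infinity budget is a standard way to craft the kind of low-powered adversarial interference described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained modulation classifier: a linear softmax
# model with random weights. The actual work uses deep networks; this only
# illustrates the attack mechanics.
n_features, n_classes = 64, 4
W = rng.normal(size=(n_features, n_classes))
b = np.zeros(n_classes)

def logits(x):
    return x @ W + b

def predict(x):
    return int(np.argmax(logits(x)))

def fgsm_perturbation(x, label, eps):
    """One-step gradient-sign attack under an L-infinity power budget eps."""
    z = logits(x)
    p = np.exp(z - z.max())
    p /= p.sum()
    onehot = np.zeros(n_classes)
    onehot[label] = 1.0
    # Gradient of the cross-entropy loss w.r.t. the input of a linear model.
    grad_x = W @ (p - onehot)
    return eps * np.sign(grad_x)

# A clean "received signal" and the low-powered interference added to it.
x = rng.normal(size=n_features)
delta = fgsm_perturbation(x, predict(x), eps=0.1)
x_adv = x + delta
```

The `eps` budget caps the per-sample magnitude of the interference, mirroring the power constraint an over-the-air attacker faces.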
Resulting Works:
R. Sahay, C. Brinton, and D. Love, "A Deep Ensemble-Based Wireless Receiver Architecture for Mitigating Adversarial Attacks in Automatic Modulation Classification," in IEEE Transactions on Cognitive Communications and Networking, 2021.
R. Sahay, D. Love, and C. Brinton, "Robust Automatic Modulation Classification in the Presence of Adversarial Attacks," in Proc. of 2021 IEEE Conference on Information Sciences and Systems (CISS).
R. Sahay, M. Zhang, D. Love, C. Brinton, "Defending Adversarial Attacks on Deep Learning-Based Power Allocation in Massive MIMO Using Denoising Autoencoders," in IEEE Transactions on Cognitive Communications and Networking, 2023.
S. Wang, R. Sahay, C. G. Brinton, "How Potent are Evasion Attacks for Poisoning Federated Learning-Based Signal Classifiers?" in Proc. of IEEE International Conference on Communications (ICC), 2023.
The trustworthiness of deep learning image classifiers has received increasing attention due to their susceptibility to adversarial attacks: subtle, humanly imperceptible perturbations on images crafted to induce high-confidence misclassifications by trained deep learning classifiers. In this work, we developed multiple denoising autoencoder-based dimensionality reduction schemes to filter additive adversarial noise out of images at test time. In addition, we developed deep learning-based detectors to identify samples corrupted with adversarial noise. Our mitigation methods increased classification accuracy by up to 80% on adversarial images, and our detection methods achieved true positive rates above 90%.
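The detection idea — flag inputs that reconstruct poorly through a low-dimensional bottleneck — can be sketched as follows. This is not the architecture from the papers: a PCA projection stands in for the trained denoising autoencoder, the synthetic subspace data stands in for images, and plain additive noise stands in for a crafted adversarial perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: clean samples lie near a k-dimensional subspace of R^dim.
n, dim, k = 500, 32, 4
basis = rng.normal(size=(dim, k))
clean = rng.normal(size=(n, k)) @ basis.T

# Stand-in "denoising autoencoder": projection onto the top-k principal axes.
mean = clean.mean(axis=0)
_, _, Vt = np.linalg.svd(clean - mean, full_matrices=False)
components = Vt[:k]                      # shape (k, dim)

def denoise(x):
    """Reconstruct x through the low-dimensional bottleneck."""
    return mean + (x - mean) @ components.T @ components

def recon_error(x):
    return np.linalg.norm(x - denoise(x), axis=-1)

# Detector: flag inputs whose reconstruction error exceeds a clean-data quantile.
threshold = np.quantile(recon_error(clean), 0.95)

# Stand-in for adversarially corrupted inputs: additive off-manifold noise.
adversarial = clean + 0.5 * rng.normal(size=clean.shape)
flagged = recon_error(adversarial) > threshold
```

Corrupted inputs sit farther from the learned manifold, so their reconstruction error separates cleanly from the clean-data distribution.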
Resulting Works:
K. Sivamani, R. Sahay, and A. E. Gamal, "Non-intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks," in IEEE Letters of the Computer Society, 2020.
R. Sahay, R. Mahfuz, and A. E. Gamal, "Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach," in Proc. of 2019 IEEE Conference on Information Sciences and Systems (CISS).
R. Sahay, R. Mahfuz, and A. E. Gamal, "A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks," arXiv: 1906.05599, 2019.
Although deep learning models achieve strong target detection performance, they produce point estimates at test time with no associated measure of uncertainty. Uncertainty quantification (UQ) is, however, pivotal for security-sensitive applications, which require deep learning models to quantify the confidence associated with a particular prediction. In this work, we developed an ensemble framework consisting of several models, which produces a distributive estimate (as opposed to a point prediction) at test time. From the distributive estimate, we calculated corresponding prediction intervals and identified when the models were unsure about their predictions and thus required additional algorithmic analysis or human intervention.
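The ensemble-to-interval step can be sketched in a few lines. The "ensemble" here is a hypothetical set of perturbed linear regressors, not the deep detectors from the papers, and the interval width limit is an illustrative value rather than a tuned operating point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble: M independently trained members, stood in here by
# M noisy copies of a linear model's weights.
M, dim = 10, 8
true_w = rng.normal(size=dim)
models = [true_w + 0.1 * rng.normal(size=dim) for _ in range(M)]

def ensemble_predict(x):
    """Distributive estimate: one prediction per ensemble member."""
    return np.array([x @ w for w in models])

def prediction_interval(x, z=1.96):
    """Mean +/- z * std across members, a simple ~95% interval."""
    preds = ensemble_predict(x)
    mu, sigma = preds.mean(), preds.std(ddof=1)
    return mu - z * sigma, mu + z * sigma

def needs_review(x, width_limit=2.0):
    """Flag inputs whose interval is too wide for automated decisions."""
    lo, hi = prediction_interval(x)
    return (hi - lo) > width_limit

x = rng.normal(size=dim)
lo, hi = prediction_interval(x)
```

Wide intervals signal member disagreement, which is exactly the trigger for routing a sample to further analysis or a human operator.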
Resulting Works:
R. Sahay, J. Stubbs, C. Brinton, G. Birch, "An Uncertainty Quantification Framework for Counter Unmanned Aircraft Systems Using Deep Ensembles," in IEEE Sensors Journal, 2022.
R. Sahay, G. Birch, J. Stubbs, and C. Brinton, "Uncertainty Quantification-Based Unmanned Aircraft System Detection Using Deep Ensembles," in Proc. of Spring-2022 IEEE Vehicular Technology Conference, 2022.
R. Sahay, D. Reis, J. Zollweg, and C. Brinton, "Hyperspectral Image Target Detection Using Deep Ensembles for Robust Uncertainty Quantification," in Proc. of IEEE Asilomar Conference on Signals, Systems, and Computers, 2021.
In large courses with low instructor-to-student ratios, students can often benefit from learning from one another through structured interactions. Identifying which students should answer particular questions, however, is a challenging task, further exacerbated in massive open online courses (MOOCs), which often contain hundreds or thousands of students. To alleviate this challenge, we developed a deep learning-based link prediction methodology to predict which learner pairs are most likely to form links (i.e., benefit from interacting) in a Social Learning Network (SLN): a graph in which each node denotes a learner and each edge denotes an interaction between learners (e.g., on discussion forums). We evaluated our link prediction methodology on multiple MOOCs as well as an undergraduate ECE course at Purdue University. Our method outperformed several state-of-the-art prediction methodologies, such as graph neural networks, indicating its potential for improving the quality of the learning experience in large online courses.
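The link-prediction task can be sketched with a classical baseline. The deep model in the paper learns its own features; below, a simple Jaccard neighborhood-overlap score on a toy six-learner adjacency matrix stands in for it, purely to show how unlinked learner pairs get ranked.

```python
import numpy as np

# Toy SLN: 6 learners, undirected forum interactions as an adjacency matrix.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])

def jaccard_score(a, i, j):
    """Overlap of i's and j's neighborhoods, a standard link-prediction feature."""
    ni, nj = set(np.flatnonzero(a[i])), set(np.flatnonzero(a[j]))
    union = ni | nj
    return len(ni & nj) / len(union) if union else 0.0

# Rank all currently unlinked learner pairs by predicted link likelihood.
n = A.shape[0]
candidates = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] == 0]
ranked = sorted(candidates, key=lambda p: jaccard_score(A, *p), reverse=True)
```

The top-ranked pairs are the learners most likely to benefit from a structured interaction; the deep approach replaces the hand-crafted score with learned representations.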
Resulting Works:
R. Sahay, S. Nicoll, M. Zhang, T. Y. Yang, C. Joe-Wong, K. A. Douglas, C. G. Brinton, "Predicting Learner Interactions in Social Learning Networks: A Deep Learning Enabled Approach," IEEE/ACM Transactions on Networking, 2023.
Ultrasonic guided waves are often used to evaluate the structural integrity of infrastructure such as roads, aircraft, bridges, and dams by comparing real-time measurements with initial baseline collections, where differences between the two signals indicate damage. Yet, guided waves are highly prone to distortions caused by environmental factors such as temperature, air pressure, sunlight, and humidity, which can lead to false-positive damage detections. In the first part of this work, we engineered a structural health monitoring system with environmental sensor circuitry capable of simultaneously capturing ultrasonic guided waves and environmental characteristics in order to evaluate the effects of these extraneous factors on guided waves. The system was deployed outdoors in an uncontrolled environmental setting, where we collected measurements over the course of six months. In the second part of this work, we aimed to localize structural impacts under signal uncertainty, such as waves with varying velocities, using deep learning. Here, we correctly localized seven out of ten intentionally induced impacts on a real-world structure when our prediction model was trained on simulated waveforms.
Resulting Works:
R. Sahay, "Deep Localization of Structural Impacts," 2018.
S. Kim, S. Shiveley, A. Douglas, Y. Zhang, R. Sahay, D. Adams, and J. Harley, "Efficient Storage and Processing of Large Guided Wave Datasets with Random Projections," in Structural Health Monitoring, 2020.
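The baseline-comparison principle behind guided-wave monitoring can be sketched as follows. The toy waveform, the reflected wave packet modeling a defect, and the 0.05 decision threshold are all synthetic stand-ins chosen for illustration, not measurements or values from this project.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical guided-wave signals: a baseline recorded on the pristine
# structure, and later measurements compared against it.
t = np.linspace(0.0, 1.0, 1000)
baseline = np.sin(2 * np.pi * 50 * t) * np.exp(-3 * t)   # toy guided wave

def damage_index(measurement, baseline):
    """Normalized residual energy between a new measurement and the baseline."""
    residual = measurement - baseline
    return np.sum(residual ** 2) / np.sum(baseline ** 2)

# Undamaged case: only small sensor noise on top of the baseline.
healthy = baseline + 0.01 * rng.normal(size=t.size)

# Damaged case: a reflected wave packet from a defect arrives late in the signal.
reflection = 0.3 * np.sin(2 * np.pi * 50 * (t - 0.6)) * np.exp(-40 * (t - 0.6) ** 2)
damaged = baseline + reflection + 0.01 * rng.normal(size=t.size)

THRESHOLD = 0.05   # assumed decision threshold; tuned per structure in practice
```

Environmental distortions (temperature, humidity, etc.) inflate this residual even without damage, which is precisely why the project paired the wave measurements with environmental sensing.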