This project is supervised by Dr. James Wang and is in collaboration with neuroscience experts Dr. Mark Eckert and Dr. Kenny Vaden from the Medical University of South Carolina.
Brain MRI scans are rich, high-dimensional images, which makes training traditional machine learning and deep learning algorithms for automatic feature extraction very challenging. Multi-site datasets are even harder to learn from because of measurement differences across equipment at different sites. The dataset we are using was collected from 7 different sites and spans different age groups (7-11 years) and genders.
We propose a novel convolution-based deep learning architecture that generates unsupervised embeddings from the high-dimensional brain data and then learns to classify images as controls or cases. We are evaluating various training strategies to verify the scalability and generalization of our model across different sites and age groups.
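As one illustration of a cross-site evaluation strategy, here is a minimal leave-one-site-out validation sketch with scikit-learn. The arrays X, y, and sites are hypothetical stand-ins for the learned embeddings, labels, and site metadata; the actual training strategies we evaluate may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical stand-ins: per-subject embeddings, case/control labels,
# and the collection-site id of each subject (7 sites).
X = np.random.randn(70, 128)
y = np.random.randint(0, 2, size=70)
sites = np.repeat(np.arange(7), 10)

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
    # Train on 6 sites, evaluate on the held-out site to measure
    # cross-site generalization.
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))
print("mean held-out-site accuracy:", np.mean(scores))
```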
We also propose a model-explainability method that analyzes our trained models and identifies the regions of the brain most associated with Dyslexia.
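For illustration, below is a minimal sketch of one generic explainability technique, occlusion sensitivity, applied to a 3D scan. It assumes a trained model that maps a volume to a single case logit; it is not necessarily the exact method we propose.

```python
import torch

def occlusion_sensitivity(model, volume, patch=8, stride=8):
    """Slide a zeroed patch over a 3D scan and record how much the predicted
    'case' probability drops; large drops mark regions the model relies on."""
    model.eval()
    with torch.no_grad():
        base = torch.sigmoid(model(volume)).item()       # volume: (1, 1, D, H, W)
        _, _, D, H, W = volume.shape
        heatmap = torch.zeros(D // stride, H // stride, W // stride)
        for i in range(0, D - patch + 1, stride):
            for j in range(0, H - patch + 1, stride):
                for k in range(0, W - patch + 1, stride):
                    occluded = volume.clone()
                    occluded[..., i:i+patch, j:j+patch, k:k+patch] = 0
                    prob = torch.sigmoid(model(occluded)).item()
                    heatmap[i // stride, j // stride, k // stride] = base - prob
    return heatmap
```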
Our methods have demonstrated robustness to these challenges, and we have matched the state-of-the-art accuracy reported in the Dyslexia-prediction literature. We are excited to publish the work in a neuroscience journal soon!
3D view of the brain regions identified by the network as most sensitive for Dyslexia prediction
This is my Master's Thesis work under the supervision of Dr. Ioannis Karamouzas.
We are using several publicly available pedestrian datasets to evaluate different scene descriptors and to train neural networks that produce collision-free motion in crowded environments.
The primary challenge is to represent the different pedestrian datasets in an agent-agnostic and scene-agnostic way, so that a single deep network can generalize across all scenes and all agents.
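As an illustration of such an agent-agnostic observation, here is a small sketch that converts a variable number of neighbor positions into a fixed-size, lidar-like distance scan around an agent. It treats neighbors as points for simplicity, so the actual descriptor used in the thesis may differ.

```python
import numpy as np

def lidar_descriptor(agent_pos, neighbor_positions, num_rays=72, max_range=10.0):
    """Agent-centric, fixed-size observation: for each angular bin, the distance
    to the closest neighbor in that direction (max_range if the bin is empty).
    The same descriptor works for any number of neighbors and any scene layout."""
    scan = np.full(num_rays, max_range)
    for p in neighbor_positions:
        offset = np.asarray(p) - np.asarray(agent_pos)
        dist = np.linalg.norm(offset)
        if dist == 0 or dist > max_range:
            continue
        angle = np.arctan2(offset[1], offset[0]) % (2 * np.pi)
        ray = int(angle / (2 * np.pi) * num_rays) % num_rays
        scan[ray] = min(scan[ray], dist)
    return scan

print(lidar_descriptor((0.0, 0.0), [(1.0, 0.0), (0.0, 2.0), (-3.0, -3.0)]))
```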
Right: The descriptor received as input by the red agent is a lidar scan of its neighbors. Left: A simulation of the classic 6-pedestrian circle scenario driven by our neural-network-based method operating on the lidar scan.
This is my final project for the course CPSC 8580: Security in Emerging Systems.
I am implementing the excellent paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks," published at IEEE Security and Privacy 2019.
I have also implemented the attacks on additional datasets beyond those used in the paper.
As additional experiments that further evaluate the paper's results, I am infecting multiple labels of the network and investigating how a defender might detect and prevent such attacks.
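For context, the sketch below shows the core optimization idea behind Neural Cleanse trigger reverse engineering: jointly learn a small mask and pattern that push clean inputs toward a suspected target label. It assumes a trained MNIST-sized classifier (model) and a clean data loader (loader); the hyper-parameters are illustrative rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_label, epochs=3, lam=1e-2, device="cpu"):
    """Optimize a (mask, pattern) pair that flips clean inputs to `target_label`
    while keeping the mask sparse; a suspiciously small mask suggests a backdoor."""
    # Unconstrained parameters; sigmoid keeps mask and pattern values in [0, 1].
    mask_param = torch.zeros(1, 28, 28, device=device, requires_grad=True)
    pattern_param = torch.zeros(1, 28, 28, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_param, pattern_param], lr=0.1)

    model.eval()
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            mask = torch.sigmoid(mask_param)
            pattern = torch.sigmoid(pattern_param)
            x_adv = (1 - mask) * x + mask * pattern      # stamp candidate trigger
            target = torch.full((x.size(0),), target_label,
                                dtype=torch.long, device=device)
            loss = F.cross_entropy(model(x_adv), target) + lam * mask.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return torch.sigmoid(mask_param).detach(), torch.sigmoid(pattern_param).detach()
```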
The infected label in the MNIST dataset
The reverse-engineered trigger obtained for the MNIST dataset
Functional Magnetic Resonance Imaging (fMRI) data is represented as a 4D tensor (3D space + 1D time) with close to 1 million features per scan. With a very limited number of scans available relative to the number of features, learning useful representations of the data remains challenging for traditional machine learning approaches.
In this project, we explore deep-learning-based unsupervised training methods for obtaining latent spatio-temporal embeddings of the data. Specifically, we use 3D convolutional autoencoders to extract spatial features and an attention-based sequence-to-sequence encoder-decoder recurrent network to capture the temporal characteristics.
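Below is a minimal PyTorch sketch of a 3D convolutional autoencoder of the kind described above. The input size (64x64x64), channel counts, and latent dimension are illustrative assumptions rather than the exact architecture used in the project.

```python
import torch
import torch.nn as nn

class Conv3dAutoencoder(nn.Module):
    """Minimal 3D convolutional autoencoder: the encoder compresses a volume
    into a low-dimensional latent code, the decoder reconstructs the volume."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8, 8)),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 8, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose3d(8, 1, kernel_size=4, stride=2, padding=1),    # 32 -> 64
        )

    def forward(self, x):                    # x: (batch, 1, 64, 64, 64)
        z = self.encoder(x)                  # latent spatial embedding
        return self.decoder(z), z

# Unsupervised reconstruction objective on toy volumes standing in for scans.
model = Conv3dAutoencoder()
x = torch.randn(2, 1, 64, 64, 64)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)
```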
Having learned these non-linear, low-dimensional features, we then use the latent representations for supervised prediction of Autism Spectrum Disorder (ASD) with traditional machine learning techniques such as kernel SVM and logistic regression.
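A short sketch of this second stage with scikit-learn, using hypothetical arrays Z (per-subject latent embeddings) and y (ASD/control labels) in place of the real data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical embeddings and labels standing in for the learned representations.
Z = np.random.randn(100, 128)
y = np.random.randint(0, 2, size=100)

for clf in (SVC(kernel="rbf", C=1.0), LogisticRegression(max_iter=1000)):
    scores = cross_val_score(clf, Z, y, cv=5)   # 5-fold cross-validated accuracy
    print(type(clf).__name__, scores.mean())
```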
Original and Reconstructed images after training the Deep 3D Convolutional Autoencoder
The goal of this project was to perform emotion recognition for any arbitrary topic, classifying it as "Anger, Sad, Concern, or Joy" based on the comment sections of the most popular YouTube videos on that topic.
The first step was to find the top K YouTube videos for a given search query. The N most popular comments from each video were then mined and stored in a database, using the excellent YouTube API. The collected comments were processed with a traditional NLP pipeline (stop-word removal, tokenization, lemmatization, etc.) to construct the pre-processed data.
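Here is a minimal sketch of the text-cleaning step using NLTK; the comment mining itself goes through the YouTube API and is omitted, and the exact pipeline used in the project may differ.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

# One-time downloads of the NLTK resources used below.
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(comment: str) -> list[str]:
    """Tokenize, lowercase, drop stop words and non-alphabetic tokens, lemmatize."""
    tokens = word_tokenize(comment.lower())
    return [lemmatizer.lemmatize(t) for t in tokens
            if t.isalpha() and t not in stop_words]

print(preprocess("I absolutely loved the ending, it made me cry!"))
```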
To train an emotion classifier, existing emotion-recognition datasets such as DepecheMood were used. Pre-trained GloVe or Word2Vec word embeddings served as the initial embeddings for the corresponding words. A bi-directional LSTM network was then trained end-to-end to predict emotions from sentences, and the same network was used to predict the emotions of the YouTube comments.
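An illustrative PyTorch sketch of such a bi-directional LSTM classifier; the vocabulary size, embedding dimension, and class count are assumptions, and pretrained_embeddings stands in for a GloVe/Word2Vec matrix.

```python
import torch
import torch.nn as nn

class BiLSTMEmotionClassifier(nn.Module):
    """Bi-directional LSTM over word embeddings, with a linear layer mapping the
    final hidden states of both directions to the four emotion classes."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=4,
                 pretrained_embeddings=None):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        if pretrained_embeddings is not None:          # e.g. a GloVe matrix
            self.embedding.weight.data.copy_(pretrained_embeddings)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        embedded = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(embedded)              # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=1)         # concat both directions
        return self.fc(h)

model = BiLSTMEmotionClassifier(vocab_size=20000)
logits = model(torch.randint(1, 20000, (8, 30)))       # toy batch of 8 comments
```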
Images generated by DCGAN & WGAN
These are some of the projects from the CPSC 8430 - Deep Learning course (GitHub)
Deep Neural Networks, Convolutional Networks, Gradient Analysis
S2VT Implementation: Automatic Video Captioning with Sequence to Sequence models.
Generative Adversarial Networks: I trained several well-known GAN architectures, namely Vanilla GAN, DCGAN, WGAN, and WGAN-GP, on the CIFAR-10 dataset (a sketch of the WGAN-GP gradient penalty follows below).
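This is a minimal sketch of the WGAN-GP gradient penalty term, written against a generic critic network; the usual penalty weight (commonly 10) and the image shapes are illustrative assumptions.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on points
    interpolated between real and generated images."""
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolated = eps * real + (1 - eps) * fake
    interpolated.requires_grad_(True)
    scores = critic(interpolated)
    grads = torch.autograd.grad(outputs=scores, inputs=interpolated,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Typical critic objective:
# critic_loss = fake_scores.mean() - real_scores.mean() + 10 * gradient_penalty(critic, real, fake)
```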
Here we designed wireframes and prototypes of a utility payment website using the Balsamiq software.
The UX design was based on Norman's Principles, which focus on designing products with a user-centered outlook.
This was my final project for my undergraduate degree in Computer Science.
We developed a Java-based application to detect plagiarism in documents.
We used the Rabin-Karp string-matching algorithm to detect similarities between a new proposal and the documents in an existing database.
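For illustration, here is a small Python sketch of Rabin-Karp with a rolling hash (the project itself was implemented in Java); the base and modulus values are arbitrary choices.

```python
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = 101) -> list[int]:
    """Return the start indices of `pattern` in `text` using rolling hashes."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)                 # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):                           # hashes of pattern and first window
        p_hash = (base * p_hash + ord(pattern[i])) % mod
        t_hash = (base * t_hash + ord(text[i])) % mod
    matches = []
    for i in range(n - m + 1):
        if p_hash == t_hash and text[i:i + m] == pattern:   # verify on hash match
            matches.append(i)
        if i < n - m:                            # roll the window one character forward
            t_hash = (base * (t_hash - ord(text[i]) * high) + ord(text[i + m])) % mod
    return matches

print(rabin_karp("plagiarism detection detects copied text", "detect"))
```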
Here is a journal link for the work.
Developed an app to remind users to pay their premiums before the deadline and to send notifications to their dependents
Designed an Android application that evaluates various aspects of the environment, finds interference patterns in the subject image, and produces a 3D image using a pyramid-prism projector
More details on the project can be found on GitHub
If you think my profile aligns with your goals, here are my Resume and LinkedIn Profile.
Get in touch with me at foram2494@gmail.com.