About Me

I am a third-year Computer Science PhD student at Stanford University, advised by Carlos Guestrin.

I develop tools that shed light on the psychology of machines: what they think, why they err, and how they can be improved. 

Stanford (2021-2026)

Metabolomics (2020-2021)

I enjoyed my role as a Deep Learning Scientist at ReviveMed from 2020 to 2021, where I patented a metabolomics signal detection pipeline. Our work, in collaboration with MIT Biological Engineering and the Broad Institute, led to the discovery of potential kidney cancer biomarkers.

MIT (2014-2020)

I graduated from MIT with B.S. and M.Eng. degrees in Computer Science in 2018 and 2020, respectively. Under the guidance of Leslie Kaelbling, I developed new machine learning and inference techniques for probabilistic graphical models. I also applied graph neural networks to problems involving 3D geometry.


I enjoy teaching and was a TA for four semesters at MIT, helping run probabilistic inference and machine learning classes.

Also at MIT, Patrick Winston introduced me to research in artificial intelligence, Polina Golland and Gregory Wornell taught me how to teach, and Ferran Alet helped hone my scientific thinking.

Fun facts

I grew up on the island of Mauritius. According to my friend, because the island has a population of 1.3 million people, I am "one in a million".

I am fluent in 5 languages: French, Spanish, English, Hindi, and Creole. I am using geopolitics to help me decide on a 6th language.

I enjoy creative writing: poetry as a means to process the universe around us, and blogging to document my backpacking trips.

Publications

1. Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models

Adarsh K. Jeewajee, Leslie P. Kaelbling

Neural Information Processing Systems (NeurIPS), 2020 


Published and invited for poster presentation - Paper - Video - Code - Slides


2. Robotic Gripper Design with Evolutionary Strategies and Graph Element Networks

Adarsh K. Jeewajee*, Ferran Alet*, Maria Bauza*, Max Thomsen*, Alberto Rodriguez, Leslie P. Kaelbling, Tomás Lozano-Pérez

(* equal contributions)

NeurIPS Workshop on Machine Learning for Engineering Modeling, Simulation, and Design (NeurIPS ML4Eng), 2020


Published and invited for poster presentation - Paper

3. Graph Element Networks: Adaptive, Structured Computation and Memory 

Ferran Alet, Adarsh K. Jeewajee, Maria Bauza, Alberto Rodriguez, Tomás Lozano-Pérez, Leslie P. Kaelbling

International Conference on Machine Learning (ICML), 2019 


Published and invited for oral presentation (4.5% of all submissions) - Paper - Code - Slides

Research Projects

Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models

Machine Learning | Probabilistic Models | Adversarial Learning

Paper | Video | Code | Slides

Training: A neural network L (the learner) produces a full set of parameters (edge potentials) for the graphical model (GM), given a noise vector. The GM models a distribution over N random variables.
Belief propagation is run on the GM, producing node marginal probabilities, which are decoded into a data sample.
A discriminator (not shown) judges whether the sample is real (drawn from the true data distribution) or fake (produced by this pipeline). That signal is used to train the pipeline end-to-end.
Testing: At inference time, a subset of the variables is observed (the evidence), and we produce M sets of beliefs over the unobserved nodes, conditioned on that evidence.
How do we produce this ensemble? We sample M noise vectors and run M copies of the training pipeline, the only difference being that the same evidence (shown in red) is also fed to the GM.
The ensemble of beliefs is combined into one final belief vector over the unobserved variables.
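As a rough illustration of this pipeline, here is a minimal PyTorch-style sketch. To keep it self-contained, it uses a chain-structured GM over binary variables so that exact sum-product can stand in for belief propagation; all sizes (N, M, NOISE_DIM) and network architectures are hypothetical rather than taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: N binary variables on a chain, an ensemble of M models,
# and noise vectors of dimension NOISE_DIM.
N, M, NOISE_DIM = 10, 16, 32
N_EDGES = N - 1  # chain topology for simplicity

class Learner(nn.Module):
    """L: maps a noise vector to edge potentials (one 2x2 table per edge)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, N_EDGES * 4))
    def forward(self, z):
        return self.net(z).view(-1, N_EDGES, 2, 2).exp()  # positive potentials

def chain_marginals(potentials):
    """Exact sum-product on a chain: forward/backward messages -> node marginals."""
    B = potentials.shape[0]
    fwd = [torch.ones(B, 2)]
    for e in range(N_EDGES):                      # forward pass
        fwd.append(torch.einsum('bi,bij->bj', fwd[-1], potentials[:, e]))
    bwd = [torch.ones(B, 2)]
    for e in reversed(range(N_EDGES)):            # backward pass
        bwd.insert(0, torch.einsum('bij,bj->bi', potentials[:, e], bwd[0]))
    marg = torch.stack([fwd[i] * bwd[i] for i in range(N)], dim=1)
    return marg / marg.sum(-1, keepdim=True)      # (B, N, 2) node marginals

learner = Learner()
discriminator = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, 1))

z = torch.randn(M, NOISE_DIM)                     # M noise vectors -> M GMs
beliefs = chain_marginals(learner(z))             # per-model node marginals
sample = beliefs[..., 1]                          # marginal P(x_i = 1), used as a relaxed sample
realness = discriminator(sample)                  # adversarial signal, trained end-to-end
ensemble_belief = beliefs.mean(dim=0)             # combine the ensemble into one belief vector
```

At test time, the same loop would additionally clamp the evidence variables before message passing, so that the averaged ensemble belief is conditioned on the observations.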

Undirected graphical models are compact representations of joint probability distributions over random variables. Given a distribution over inference tasks, graphical models of arbitrary topology can be trained using empirical risk minimization. However, when faced with new task distributions, these ERM-trained models (EGMs) often need to be re-trained.

Instead, we propose an inference-agnostic adversarial training framework for producing an ensemble of graphical models (AGMs). The ensemble is optimized to generate data, and inference is learned as a by-product of this endeavor. 

AGMs:

Rendering scene images from novel viewpoints

Geometric Deep Learning | Representation Learning | Auto-encoding Architectures

Paper | Blog Post | Code

Animation: nine mazes placed in a 3x3 grid; generated scenes (left) and a top-down view of the mazes (right). The red nodes on the right are the active nodes from which information is interpolated to generate the view at each query location.

We investigate whether Graph Element Networks (a graph convolutional neural network architecture that we published in ICML 2019) can be used to organize memories spatially, in the problem of generating scene images from novel viewpoints.

We sample 3D mazes from the DeepMind Lab game platform (dataset), and each maze comes with a series of images. Each image reveals how the maze appears from a specific 2D coordinate, given a specific (yaw, pitch, roll) triple for the camera.

In the animation, we have mazes positioned in a 3x3 grid structure. The animation shows generated scenes on the left and a top-down view of the 9 mazes on the right. We first sample views from different places inside the mazes, and insert them into the GEN. We then query the GEN for the inferred view at new query coordinates, while rotating 360 degrees for each position. The red nodes (in the top-down map) are active nodes from which information is interpolated to generate a new view, for each query location.
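To make the "interpolated from active nodes" step concrete, here is a small sketch of how a query latent might be blended from nearby node states. The softmax-over-distance weighting, node count G, and state size D are illustrative stand-ins, not the exact GEN interpolation scheme.

```python
import torch

# Hypothetical setup: a GEN with G nodes placed over the 3x3 maze arrangement;
# each node carries a latent state of dimension D after message passing.
G, D = 25, 64
node_xy = torch.rand(G, 2) * 3.0          # node coordinates in maze units (stand-in)
node_state = torch.randn(G, D)            # latent node states (normally produced by the GEN)

def interpolate_state(query_xy, temperature=0.1):
    """Blend node states into a latent for the query location.

    Nearby nodes get most of the weight (these are the 'active' red nodes in the
    top-down map); the blended latent is what gets decoded into a scene image.
    """
    dists = torch.cdist(query_xy.unsqueeze(0), node_xy).squeeze(0)   # (G,) distances
    weights = torch.softmax(-dists / temperature, dim=0)             # distance-based weights
    return weights @ node_state                                      # (D,) query latent

query_latent = interpolate_state(torch.tensor([1.5, 0.5]))
# query_latent would then be fed to the image decoder together with the camera pose.
```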

In this problem, the GEN:

How do we decode node states to draw scene images? This work improves on DeepMind's work (Eslami et al.), which uses a representation-learning network and an image-generation network resembling the standard DRAW architecture. Their model can only represent one maze at a time, as it absorbs information without spatially disentangling it. We use our GENs for representation learning, and apply their standard drawing architecture to decode our hidden states.

Optimizing the shape of robotic fingers for increased object grasping accuracy on adversarially-shaped objects 

Robotics | Computer Graphics | Machine Learning

Paper (partial) | Code (simulation) | Code (morphology optimization)

WSG-32 parallel gripper (pybullet) simulation
Left gripper finger morphology being optimized to pick up bottles

We investigate whether there exist 3D-printable robotic finger morphologies that have better object grasping performance than default finger shapes.

We open-source a WSG-32 parallel-jaw gripper simulation (see first animation), and collect a dataset of hard-to-grasp objects from distinct categories (e.g., bottles, cones, or adversarial objects) on which we would like our optimized gripper morphologies (see second animation) to achieve better grasp success than the out-of-the-box WSG-32 gripper configuration.

We search the space of gripper morphologies using augmented random search or evolutionary algorithms, and propose changes to the base morphology at each iteration by:
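As a sketch of this search loop, here is a minimal augmented-random-search-style iteration over a hypothetical morphology vector; grasp_success_rate is a toy stand-in for running the WSG-32 simulation over the object dataset and averaging grasp outcomes, and all parameter names and sizes are illustrative.

```python
import numpy as np

# Hypothetical morphology parametrization: a vector of control-point offsets that
# deform the default finger mesh.
N_PARAMS, N_DIRECTIONS, STEP, NOISE = 20, 8, 0.02, 0.05

def grasp_success_rate(morphology: np.ndarray) -> float:
    # Toy objective so the sketch runs; the real version averages grasp outcomes in simulation.
    return -float(np.linalg.norm(morphology - 0.3))

morphology = np.zeros(N_PARAMS)                       # start from the default finger shape
for iteration in range(100):
    deltas = np.random.randn(N_DIRECTIONS, N_PARAMS)  # random search directions
    plus = np.array([grasp_success_rate(morphology + NOISE * d) for d in deltas])
    minus = np.array([grasp_success_rate(morphology - NOISE * d) for d in deltas])
    # ARS-style update: step along directions weighted by the difference in grasp
    # success between the +/- perturbed morphologies, scaled by the reward spread.
    grad = ((plus - minus)[:, None] * deltas).mean(axis=0)
    morphology += STEP * grad / (np.std(np.concatenate([plus, minus])) + 1e-8)
```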

We find that: 

Learning a policy for the locomotion and morphology adaptation of a bipedal walker, on difficult terrains

Reinforcement Learning | Deep Learning | Curriculum Learning

Paper | Slides (gifs of emergent morphologies) | Code

Default-shaped agent can only learn to walk but cannot go further in this environment
By altering its body morphology, it can learn to walk and become small enough to crawl below the obstacles

We investigate whether allowing a locomotive agent to modify its own morphology has any beneficial effect on the task of finding optimal locomotion policies, on terrains of varying difficulty.

We use the augmented random search algorithm to optimize the policy of a bipedal walker (parametrized by a feed-forward neural network). We allow the agent to modify its morphology to increase its score, and by doing so we observe that:

We vary the difficulty of the terrains by making the terrain rougher (including hills and valleys), and by including obstacles such as blocks and pits.

We also search for a morphology-policy pair that generalizes to many environments with very little fine-tuning. See our slides for gifs of discovered agent morphologies.
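One way to picture the joint search, sketched below under assumed dimensions: the flat parameter vector handed to the random-search optimizer simply concatenates the feed-forward policy weights with a few morphology scalars, so every perturbation proposes both a controller and a body to run it on.

```python
import numpy as np

# Hypothetical dimensions: OBS observations and ACT actions for the walker's joints,
# plus a handful of morphology scalars (e.g., leg lengths, hull size).
OBS, HIDDEN, ACT, N_MORPH = 24, 32, 4, 6

def split_params(theta):
    """Unpack one flat search vector into policy weights and morphology scalars."""
    w1 = theta[:OBS * HIDDEN].reshape(OBS, HIDDEN)
    w2 = theta[OBS * HIDDEN:OBS * HIDDEN + HIDDEN * ACT].reshape(HIDDEN, ACT)
    morphology = theta[-N_MORPH:]
    return w1, w2, morphology

def act(obs, w1, w2):
    return np.tanh(np.tanh(obs @ w1) @ w2)      # feed-forward policy

theta = np.zeros(OBS * HIDDEN + HIDDEN * ACT + N_MORPH)
w1, w2, morphology = split_params(theta)
action = act(np.zeros(OBS), w1, w2)             # example rollout step with a dummy observation
# The same perturbation-based search used for the gripper then optimizes `theta` directly.
```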

Grouping similar questions on online discussion boards through domain adaptation

Natural Language Processing | Transfer Learning | Deep Learning with Domain Adaptation

Paper | Code

We learn a high-dimensional encoding for each question, and the similarity between two questions is given by the cosine similarity between their encodings.

In online forums, a major area of interest is the consolidation of questions: merging similar questions into one to prevent diluting answers, save storage, etc.

I. Our main goal is to train language models that can detect whether two questions are similar.

II. Language models require large datasets of annotated questions to be trained on, and some online forums lack that much data. Our second goal is to train our models on annotated data from a forum X and fine-tune them on the small amount of annotated data available for a forum Y, hoping for generalization.

For goal I, we need to learn the best possible encoding for our questions, such that some similarity metric (here, cosine similarity) between questions in that encoding accurately measures how similar two questions are. We train the encoding architectures described by Lei et al. (using a CNN or LSTM) on our dataset of annotated questions.
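A tiny sketch of the similarity computation, using a bag-of-embeddings encoder as a hypothetical stand-in for the CNN/LSTM encoders of Lei et al.; vocabulary size and encoding dimension are illustrative.

```python
import torch
import torch.nn.functional as F

# Hypothetical encoder: mean of token embeddings standing in for the trained CNN/LSTM.
vocab_size, dim = 5000, 200
embedding = torch.nn.EmbeddingBag(vocab_size, dim)

def encode(token_ids: torch.Tensor) -> torch.Tensor:
    return embedding(token_ids.unsqueeze(0)).squeeze(0)   # (dim,) question encoding

def question_similarity(q1_ids, q2_ids) -> float:
    """Cosine similarity between two question encodings; higher means more similar."""
    return F.cosine_similarity(encode(q1_ids), encode(q2_ids), dim=0).item()

score = question_similarity(torch.tensor([4, 17, 23]), torch.tensor([4, 17, 99]))
```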

For goal II, we use a domain adaptation technique from Ganin et al. Our training pipeline reuses the encoders from goal I, and we again minimize the loss associated with predicting question similarity, now using a mixture of mostly annotated questions from forum X and a few from forum Y.

We also pass the encoded questions from both domains into a discriminator whose job is to predict the domain (X or Y) of each question from its encoded representation.
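The Ganin et al. recipe is usually implemented with a gradient reversal layer between the encoder and the domain discriminator; a minimal sketch, with hypothetical tensor shapes, looks like this:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass,
    so the encoder is pushed to make domains X and Y indistinguishable while the
    discriminator is trained to tell them apart (Ganin et al.)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Hypothetical wiring: `encoding` comes from the shared question encoder,
# `domain_head` is a small classifier predicting domain X vs. Y.
encoding = torch.randn(8, 200, requires_grad=True)
domain_head = torch.nn.Linear(200, 2)
domain_logits = domain_head(grad_reverse(encoding, lam=0.1))
```

Minimizing the discriminator's loss through this layer trains the discriminator normally while pushing the encoder toward domain-invariant question encodings.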

We have two training pipelines:

Evaluating the hardware limitations of Google Glass through augmented reality games

Augmented Reality Game Development | Wearable Devices | Performance Optimization

Video | Code

Demo of the "safari adventure" game being played in my dorm room, on Google Glass

We investigate the extent to which the Google Glass device could be used for augmented reality games, while avoiding issues such as overheating and running out of memory. We design an augmented reality game ("safari adventure") and optimize its performance to measure the actual gameplay time a user can get from the device.

The game can be run on any Android device and has been optimized for Google Glass. It uses the device's camera and accelerometer, and was built with the Unity3D engine, with C# as the scripting language. The animals blend into one's actual room decor, and extensions of this game could be used to help children get acclimatized to new surroundings, such as classrooms.

The logic behind the game is rather simple: 

The game starts lagging after about 60 seconds, and the Google Glass gets too hot to be worn after about 120 seconds. Running the game plus a screen recorder was overkill for the hardware, which is why the video/GIF is choppy.

After optimizing for performance, gameplay time was boosted to around 110 seconds with no lagging, and to about 200 seconds without overheating. Some performance optimization ideas were to: