Introduction

I have not updated this page in a long time! I am currently working on human augmentation; here is my most recent paper and here is my most recent presentation.

Welcome to my research page! I love idea generation that is super-saturated with deep discussions, the kind that explore vast, unknown, multidisciplinary creative spaces, both within and outside of science, some of which carry the hope of fertile collaborations. I love collaborations, so please don't hesitate to contact me!

I am a research scientist in the Ashton Graybiel Spatial Orientation Lab at Brandeis University, where I also completed my graduate and postdoctoral research, all thanks to my wonderful advisors, Paul DiZio and James Lackner. I am also grateful for support, guidance, and funding from the Translational Research Institute for Space Health.

My research focuses on understanding human spatial disorientation and developing countermeasures, with applications to spaceflight, military aviation, and vestibular disorders. Below I give a compressed overview of my prior work. The video at the end of the page provides a more thorough and accessible introduction to my paradigm. Alternatively, you can read the pages on my graduate research and postdoctoral research, where I provide deeper explanations.

In the Research Thrusts section, I describe the four main thrusts of my research: Human Augmentation, Machine Learning/Artificial Intelligence, the Basic Science of Spatial Disorientation, and Educational Outreach.

Vertical Roll Plane

Horizontal Roll Plane (spaceflight analog)

Overview of My Prior Work

Here I provide a compressed overview of my prior work. For a more thorough and accessible explanation of my research, visit my graduate projects and my postdoctoral projects; the links in the text below will bring you to the relevant subsections. You can also watch the video at the end of this page for a more accessible description of my experimental paradigm.

What do people learn when balancing in the absence of peripheral mechanisms (legs and reflexes)?

Blindfolded subjects were strapped into a machine that was programmed to behave like an inverted pendulum in the Vertical Roll Plane (see image above) and were instructed to use a joystick to stabilize themselves about the balance point, which was set at the gravitational vertical. Because they tilted relative to the gravitational vertical, they could use gravitational cues to obtain a good sense of their angular position, and they showed robust learning across multiple metrics derived from phase plots and the stabilogram diffusion function [1]. Read the summary of my first paper to learn about our findings, which have relevance to vehicle/helicopter control, the role of central processing in balance control, and intermittent vs. continuous control.
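To make the paradigm concrete, here is a minimal simulation sketch of inverted-pendulum balancing with a joystick. The actual device dynamics and parameters are not given here, so the constants (G_OVER_L, GAIN, DT) and the simple proportional-derivative stand-in for the subject are illustrative assumptions, not the lab's implementation.

```python
import numpy as np

# A minimal sketch of inverted-pendulum balancing with a joystick.
# All constants below are assumptions made for illustration only.
G_OVER_L = 5.0   # assumed pendulum constant (g/L), 1/s^2
GAIN = 8.0       # assumed joystick-to-acceleration gain
DT = 0.01        # integration step, s

def step(theta, omega, u, dt=DT):
    """Advance the pendulum one Euler step.

    theta: angular position relative to the balance point (rad)
    omega: angular velocity (rad/s)
    u:     joystick deflection in [-1, 1]
    """
    alpha = G_OVER_L * np.sin(theta) + GAIN * u  # unstable about theta = 0
    omega += alpha * dt
    theta += omega * dt
    return theta, omega

# Toy stand-in for a subject: a proportional-derivative correction.
theta, omega = np.radians(5.0), 0.0
for _ in range(500):
    u = np.clip(-1.5 * theta - 0.4 * omega, -1.0, 1.0)
    theta, omega = step(theta, omega, u)

print(f"final tilt: {np.degrees(theta):.2f} deg")
```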

What do people learn when balancing in the absence of gravitational cues in our spaceflight analog task?

Subjects were placed in the same machine, now programmed with inverted pendulum dynamics in the Horizontal Roll Plane, where they no longer tilted relative to the gravitational vertical (they were always 90° from it) and therefore could not use gravitational cues to determine their angular position relative to the balance point [2]. They could only rely on motion cues. In this condition, they showed very poor performance, frequent loss of control, and a characteristic pattern of positional drifting. While collectively they learned very little, even across two experimental sessions on consecutive days, they did learn to reduce the number of destabilizing joystick deflections and crashes. This led us to a series of experiments that supported the idea that there are two dissociable components to balance control [2]. All of these findings were replicated in analogous yaw-axis experiments [3]. Because of the absence of relevant gravitational cues in the Horizontal Roll Plane, 90% of subjects reported spatial disorientation and several reported unusual illusions; this condition is therefore relevant to astronauts, pilots, and patients with vestibular diseases.

Are there any insights from the individual differences found in our spaceflight analog task?

Using machine learning, we were able to cluster all of the subjects who balanced in the Horizontal Roll Plane (spaceflight analog task) into three groups: Proficient, Somewhat-Proficient, and Not-Proficient [5]. The Proficient group showed learning across the majority of metrics, which was surprising because most reported feeling disoriented in our spaceflight analog condition. What was very surprising was that the Not-Proficient group was not randomly bad! They all converged on the same suboptimal strategy of making very large, stereotyped joystick deflections, which had the benefit of reducing the number of crashes at the cost of making everything else worse, such as larger and more erratic movements of the machine. In this paper we were able to predict subjects' final performance group with greater than 80% accuracy.
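Here is a minimal sketch of the clustering idea: group subjects by per-subject performance metrics. The actual features, algorithm, and pipeline from [5] are not reproduced here; the metric names, k-means choice, and random data below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-subject metrics: [crash rate, mean joystick amplitude,
# positional drift]. Real metrics would come from the balancing data.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))  # 30 stand-in subjects, 3 stand-in metrics

# Standardize, then cluster into three groups.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)

# labels 0/1/2 would map onto groups such as
# Proficient / Somewhat-Proficient / Not-Proficient.
print(labels)
```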

Can we develop an effective training program to enhance performance in our spaceflight analog task?

Using insights from our machine learning study [5] and from our finding that there are two dissociable components to balance control [2], we created an effective training program [4]. In this study we placed subjects in the Vertical Roll Plane because we wanted them to have continuous feedback about their angular position, which they received from gravitational cues. However, we also wanted to simulate the spaceflight analog condition, where subjects can only rely on motion cues to find the balance point. We did this by randomizing the location of the balance point, so that they had to search the entire space while focusing on motion cues to find it. After receiving this training, subjects performed significantly better in our spaceflight analog task and even retained the skilled motor learning four months later.
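Below is a minimal sketch of the randomized-setpoint idea: draw a new balance-point angle on each trial so subjects must search with motion cues rather than rely on a fixed gravitational reference. The angle range and trial count are assumptions; the published protocol in [4] may differ.

```python
import random

TILT_RANGE_DEG = (-20.0, 20.0)  # assumed range of possible balance points
N_TRIALS = 40                    # assumed number of training trials

def training_schedule(n_trials=N_TRIALS, seed=42):
    """Return a list of randomized balance-point angles, one per trial."""
    rng = random.Random(seed)
    return [rng.uniform(*TILT_RANGE_DEG) for _ in range(n_trials)]

for trial, setpoint in enumerate(training_schedule()[:5], start=1):
    print(f"trial {trial}: balance point at {setpoint:+.1f} deg")
```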

What explains these individual differences in our spaceflight analog task?

This paper [6] was motivated by the question, 'Why were there so many individual differences in performance in my disorienting spaceflight analog task?' I hypothesized that the participants who had a poor sense of their own orientation in the spaceflight condition were the ones who performed poorly. Surprisingly, I found no correlation! It was really difficult to quantify the accuracy of a participant's perception of their orientation. Check out the paper to see examples of the unbelievably unusual and varied perceptions that participants had of their orientation. Some participants felt they were more than 180 degrees away from their actual location, whereas others couldn't even really say. These results suggest that a general warning signal may not be an effective countermeasure for spatial disorientation, because a pilot who perceives they are 180 degrees away will react very differently than a pilot who perceives they are only 20 degrees away. We need much more research on characterizing individual differences in perception, which will allow us to customize and personalize countermeasures for spatial disorientation to each individual's unique perceptual profile. Read the paper if you are interested in the neuroscience perspective on angular path integration and how it may cause some of the error accumulation. Finally, we did find correlations between a person's spatial acuity in 'earth' conditions (after experiencing vestibular stimulation) and their ability to perform in the first few trials of the spaceflight condition. Our conclusion was that vestibular stimulation may be a valuable way to assess individual differences during initial exposure to a disorienting spaceflight condition. This paper's work is also summarized in this video.

Can machine learning and deep learning help?

We found that machine learning helped in classifying individual differences [5] (see the individual-differences section above for more information).

Next, we wanted to know whether deep learning models trained on our data could predict when a participant would lose control and crash the machine while balancing in our disorienting spaceflight analog condition [7]. We found that the deep learning model could predict the occurrence of a crash 800 ms before it happened with 99% accuracy. We also discovered that we could not accurately predict earlier than 800 ms, because people would make very unexpected and unpredictable joystick deflections that would throw the machine off balance. This was because they were disoriented, and my prior research [6] shows how different each participant's perception of their orientation can be.
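Here is a minimal sketch of this kind of crash-prediction setup: a sequence model takes a short window of state and joystick signals and outputs whether a crash will occur within the prediction horizon. The architecture, window length, and feature set below are illustrative assumptions, not the published model from [7].

```python
import torch
import torch.nn as nn

WINDOW = 50      # assumed samples per input window
N_FEATURES = 3   # e.g., angular position, velocity, joystick deflection

class CrashPredictor(nn.Module):
    """Toy sequence classifier: does a crash occur within the horizon?"""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # logit for "crash within horizon"

    def forward(self, x):               # x: (batch, WINDOW, N_FEATURES)
        _, (h, _) = self.lstm(x)        # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)

model = CrashPredictor()
loss_fn = nn.BCEWithLogitsLoss()

# One illustrative training step on random stand-in data.
x = torch.randn(8, WINDOW, N_FEATURES)
y = torch.randint(0, 2, (8,)).float()   # 1 = crash within the horizon
loss = loss_fn(model(x), y)
loss.backward()
print(f"loss: {loss.item():.3f}")
```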

We designed our model so that it had no prior knowledge of the paradigm and was not trained on optimal behavior. Instead, our model was trained on the very unusual behavior of disoriented participants. We did this because we wanted a very general and adaptable AI. During space exploration, astronauts will not have the ability to communicate with Earth immediately, and because space is a novel environment, we will not know all of the priors before going there. Therefore, we need to develop AI that can learn as the humans learn and use a relatively small data set (collected by astronauts) to update the model. Here is a very short video that gives a brief overview; the details can be found in the paper below.
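As a rough sketch of this update-as-you-go idea, a previously trained predictor could be fine-tuned on a small, newly collected dataset. The tiny model, data shapes, learning rate, and training loop below are illustrative assumptions; the actual update procedure is described in the paper.

```python
import torch
import torch.nn as nn

WINDOW, N_FEATURES = 50, 3  # same assumed shapes as the sketch above

# Stand-in model; in practice the pretrained weights would be loaded, e.g.:
# model.load_state_dict(torch.load("pretrained.pt"))
model = nn.Sequential(nn.Flatten(), nn.Linear(WINDOW * N_FEATURES, 32),
                      nn.ReLU(), nn.Linear(32, 1))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Small stand-in dataset (e.g., a handful of new trials collected in flight).
x_new = torch.randn(16, WINDOW, N_FEATURES)
y_new = torch.randint(0, 2, (16, 1)).float()

for _ in range(20):                 # a few passes over the small dataset
    optimizer.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    loss.backward()
    optimizer.step()
print(f"fine-tune loss: {loss.item():.3f}")
```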

Publications (click the links for PDFs)

Other

Video Summary of my Work 

The two short videos (~5 min) below provide a very quick look into my research. The video below them is a much more thorough introduction.

The video below is from a virtual talk that I gave for the Brandeis Postdoc Summer Series, where I provided a holistic view of my research on spatial disorientation and its relevance to astronauts and space exploration, and how it weaves together perspectives from neuroscience, computational neuroscience, skilled motor learning, psychology, dynamical systems (physics, math), computer science (machine learning), and human augmentation.

Here are two articles written by BrandeisNow that are somewhat related to my research: