Research Thrusts
I have not updated this page for a long time! Currently I am working on human augmentation, and here are my most recent paper and most recent presentation.
In this section I present the four major thrusts of my research. At the end of every thrust, I list a few ideas for potential future projects. As always, I am open to collaborations on ideas contained within this page or completely outside of it. My contact information is here.
Because my research is cumulative, the best way to understand this page is to first scan through my graduate research and then my postdoctoral research, or to read the summary in my Introduction.
Thrust 1: Human Augmentation
The effect of Technology on the trajectory of Human Destiny is uncertain. Some think that Artificial Intelligence (AI) will completely replace humans, possibly liberating Humanity from mundane and repetitive tasks. Others believe that Intelligence Augmentation (IA) is the better path, in which technology is used to enhance human performance and learning. What fascinates me, and what I find breathtakingly beautiful, is the Human Journey. Using the equipment that I have access to, I want to explore deeper questions that are fundamental to the interaction of Technology and Humanity. For example, at what point does a human feel merged with a device or machine, and what are the neural mechanisms and the time course for learning? What 'language' should a device use to communicate so that it enhances the human experience? What kind of training is needed to build both conscious and unconscious trust between human and technology? Does this trust break down during times of great anxiety and stress? Does it break down when there is sensory conflict and great uncertainty, such as during spatial disorientation? If a human is augmented through different sensory systems, is all of that information really processed in parallel? What kinds of skills can generalize across different conditions of human augmentation?
Current Project:
Vibrotactile Cueing as a Countermeasure for Spatial Disorientation During Dynamic Stabilization in a Spaceflight Analog Environment
Astronauts will experience significant spatial disorientation when landing on the surface of Mars or the Moon, which could lead to mission-ending fatalities. Spatial disorientation is also the leading cause of fatal aircraft accidents among military pilots. Enhancement through vibrotactile cueing has been studied in a wide variety of applications, such as navigation, driving, alerting, postural stabilization, rehabilitation, and sports. Tactile cueing has also been shown to enhance control of a motion platform, performance in helicopter flight, control of acrobatic flight in an aircraft, orientation of an astronaut on the ISS, and performance in a nulling task after returning from space. Surprisingly, however, there are very few controlled studies that have examined the role of vibrotactile cues in conditions that reliably create spatial disorientation. Little is known about its mechanisms and about how it is learned, transferred, and generalized. In this project we will place blindfolded subjects in our device, which is programmed to behave like an inverted pendulum in the Horizontal Roll Plane. Our prior work shows that subjects become very disoriented as they use a joystick to try to stabilize themselves. Subjects will wear a series of vibrotactors on their body that vibrate to indicate distance from the balance point, replacing the missing information typically obtained from gravitational cues detected by the otoliths and by touch receptors on the skin. We will study whether this vibrotactile cueing can immediately enhance performance and, if not, we will characterize the learning. We will also study whether training in a condition with gravitational cues while using vibrotactile cueing can transfer to the spaceflight analog condition.
My hypothesis is that there will not be full transfer and instead a specialized training program with gravitational cues will be needed for full transfer to the spaceflight analog condition, such as the one described in my prior work.
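As a toy illustration of the kind of distance-to-vibration coding involved (the actual tactor layout, dead zone, and intensity scheme are my own placeholder assumptions, not the device's real mapping), a simple rule might look like this:

```python
def tactor_command(angle_deg, dead_zone_deg=2.0, max_angle_deg=60.0):
    """Map angular distance from the balance point to a (side, intensity) cue.

    Hypothetical mapping for illustration: tactors on the side the subject
    has drifted toward vibrate, with intensity growing with the error and
    saturating at the device's operating limit (assumed +/-60 degrees).
    """
    if abs(angle_deg) <= dead_zone_deg:
        return ("none", 0.0)  # inside the dead zone: no cue
    side = "right" if angle_deg > 0 else "left"
    # normalize the error to [0, 1] over the assumed operating range
    intensity = min(abs(angle_deg) / max_angle_deg, 1.0)
    return (side, round(intensity, 2))
```

Whether a proportional code like this, a discrete zone code, or a rhythm code communicates best is exactly the kind of 'language' question raised above.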
Future Projects:
I am more than happy to collaborate on any of the projects below or anything else. I would love to find collaborators in psychology, neuroscience, engineering, computer science and any other field.
The role of stress and anxiety in the effectiveness of vibrotactile cueing during spatial disorientation. The simplest experiment would be to survey each subject's level of anxiety and stress beforehand and see whether it correlates with performance and ability in the spaceflight analog condition while using vibrotactile cueing. The next level would involve measuring heart rate, cortisol, and other related indicators. Additionally, based on some prior work, I hypothesize that the subjects experiencing high levels of stress will also be the ones who do not explore the full solution space and converge toward a suboptimal solution.
Individual differences in the perception of 'merging with the machine' and its relation to performance during spatial disorientation. I am very interested in the perceptual experiences of people as they use vibrotactile cueing. Do some feel as if they have merged with the machine/device and does this feeling have any utility in learning and performance? The simplest way of doing this would be to create a survey administered before the experiment and then correlate it to performance. I have more thoughts on sensory substitution, individual differences and spatial disorientation in this video.
The perils of vibrotactile cueing in a spatially disorienting condition. In my experiments, I find a wide range of perceptual experiences when subjects are spatially disoriented. Some even report feeling as if they are upside down when they are really on their backs. This could be because these subjects place greater emphasis on tactile information, and the 5-point harness pushing on their shoulders makes it seem as if they are upside down. There is a possibility that, in certain situations, vibrotactile cueing may lead to more confusion when someone is disoriented. It would be interesting to explore this.
Providing cueing using methods beyond vibrations. I am definitely interested in providing sensory information using other methods and am always open to collaborate on that.
Human postural stabilization using multisensory cueing and active prostheses. The Ashton Graybiel Spatial Orientation Lab has a history of research on human postural balancing. So, at some point, I want to see which of my findings generalize to human postural balancing.
Thrust 2: Machine Learning and Artificial Intelligence
As mentioned in Thrust 1, my interest lies in the Human Journey. And so my research into machine learning and AI explores not how to replace humans but how to enhance them. The project outlined below is the one that was accepted for TRISH's GoForLaunch program, and so it is one of my primary areas of focus for the coming year (12/2020-12/2021).
Current Project:
Machine Learning as a Countermeasure to Spatial Disorientation in a Spaceflight Analog Environment
Spaceflight missions, such as those to the Moon and Mars, will cause many sensorimotor-related difficulties that could jeopardize the mission. For example, if astronauts are forced to land manually on the surface of Mars, they will experience a rapid gravitational transition while dynamically stabilizing the spacecraft. In low-g and 0-g environments, gravitationally dependent vestibular and somatosensory cues are minimized, and astronauts can easily become spatially disoriented. In our prior work we used machine learning to predict performance and developed an effective training program that enhances every person's performance. However, very little work has used machine learning algorithms to identify the approach into instability and to provide feedback for optimal joystick control in a spaceflight analog condition that reliably produces spatial disorientation. Future work will focus on developing a real-time human-machine interaction to mitigate the effects of spatial disorientation. This work is in collaboration with Dr. Pengyu Hong, a Computer Science professor at Brandeis who specializes in machine learning. This proposal is relevant to the following NASA roadmap gaps: HHC1, HHC2, SM103, SM202, HSIA401, HSIA701.
Specific Aim 1: Using machine learning to predict destabilizing joystick deflections, loss of control and crashes in a spaceflight analog environment.
Blindfolded subjects will balance themselves in a device programmed with inverted pendulum dynamics. They will be placed in an orientation where they cannot use gravitational cues and where they become spatially disoriented. We will use a variety of machine learning techniques, such as recurrent artificial neural networks, to determine how early we can predict the occurrence of a potentially fatal, mission-ending event, such as destabilizing joystick deflections, loss of control, and crashes. These findings could be translated into more realistic flight training simulations and into the development of a warning system that could alert an astronaut in real time before a critical mistake is made.
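A minimal sketch of the windowing step such a predictor would need: slicing the joystick trace into fixed-length windows and labeling each one by whether a crash follows soon after. The window length, prediction horizon, and labeling rule here are placeholder assumptions, not the study's actual pipeline.

```python
import numpy as np

def make_windows(joystick, crash_times, win=50, horizon=25):
    """Slice a joystick deflection trace into fixed-length windows.

    Each window of `win` samples is labeled 1 if a crash occurs within
    `horizon` samples after the window ends, else 0. The resulting (X, y)
    pairs could feed a recurrent network or any sequence classifier.
    """
    crash_times = set(crash_times)
    X, y = [], []
    for start in range(len(joystick) - win):
        end = start + win
        X.append(joystick[start:end])
        # positive label if any crash falls inside the look-ahead horizon
        label = int(any(t in crash_times for t in range(end, end + horizon)))
        y.append(label)
    return np.array(X), np.array(y)
```

Sweeping `horizon` upward then answers the "how early can we predict" question directly: the longest horizon at which the classifier still performs well.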
Specific Aim 2: Using machine learning to develop an optimal model controller that can suggest the best joystick deflection.
We will use machine learning to predict the next optimal joystick deflection. Because our dataset is relatively sparse for the purpose of creating a full representation of the solution space, we will develop new ways of incorporating prior knowledge and Bayesian techniques into the machine learning model. These innovations will make machine learning techniques relevant for NASA projects that do not have big data. In the future, we will be able to provide real-time feedback with both a computer and a human in the loop, where the computer aids learning and performance in a disorienting spaceflight analog condition.
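For context, one classical baseline against which a learned controller could be compared is a simple proportional-derivative (PD) rule: deflect against the current angular error and its rate of change. The gains and joystick range below are illustrative placeholders, not fitted values or the proposal's actual model.

```python
def suggest_deflection(angle, velocity, kp=1.2, kd=0.4, limit=1.0):
    """PD-style suggestion for the next joystick deflection.

    A textbook control baseline, not the learned controller from the
    proposal: oppose the angular error (kp term) and damp the angular
    velocity (kd term), clipped to the joystick's assumed [-1, 1] range.
    """
    u = -(kp * angle + kd * velocity)
    return max(-limit, min(limit, u))
```

A learned controller should, at minimum, beat this kind of baseline in the disorienting condition, where human error signals (perceived rather than actual angle) make simple feedback laws fail.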
Future Projects:
I would love to find collaborators in any field, including computer science, AI development, game development, and math/physics.
Generalization of our machine learning method to human postural balancing. As with any experiment, there is always the question of whether the findings are specific to the experimental paradigm or whether they generalize to other systems. In the Ashton Graybiel Spatial Orientation Lab there are other investigators who research human postural balancing, and my future goal is to use a similar machine learning implementation to predict loss of control in human postural balancing under adverse and novel conditions.
Training an AI agent to balance. I would love to create a balancing game (on a cellphone, tablet, or computer) where the human first has to train an AI agent in a simple balancing task. Then, the human-AI team has to balance in more complex situations where there are multiple suboptimal solutions and only one optimal solution. I want to map the solution space, see how different people explore it, and use individual differences in solution space exploration as a metric to see whether it correlates with how people explore the solution space in my spaceflight condition. As mentioned in the previous section, I would also love to see whether individual differences in anxiety and stress predict how a person explores this solution space.
Thrust 3: The Basic Science of Spatial Disorientation
Current Projects:
The role of spatial acuity in a dynamic balancing task without gravitational cues
This recently finished project is motivated by the question, ‘why were there so many individual differences in performance in my disorienting spaceflight analog task?’ I hypothesized that perhaps those participants who had a poor sense of their own orientation in the spaceflight condition were the ones who performed poorly. Surprisingly, I found no correlation! It was really difficult to quantify the accuracy of a participant’s perception of their orientation! Check out the paper to see examples of the unbelievably unusual and different perceptions that participants had of their orientation. Some participants felt like they were more than 180 degrees away from their actual location, whereas others couldn’t even really say. These results suggest that a general warning signal may not be an effective countermeasure for spatial disorientation, because a pilot who perceives they are 180 degrees away will react very differently from a pilot who perceives that they are only 20 degrees away. We need much more research on characterizing individual differences in perception, which will allow us to customize and personalize countermeasures for spatial disorientation to each individual's unique perceptual profile. Read the paper if you are interested in the neuroscience perspective of angular path integration and how it may be causing some of the error accumulation. Finally, we did find correlations between a person's spatial acuity in 'earth' conditions (after experiencing vestibular stimulation) and their ability to perform in the first few trials of the spaceflight condition. Our conclusion from this was that vestibular stimulation may be a valuable way to assess individual differences during initial exposure to a disorienting spaceflight condition.
Scroll down to the publication section of this page to access the paper (#6).
This paper's work is also summarized in this video.
Future Projects:
I would love to find collaborators in any field, including computational modeling, psychology/neuroscience, computer science, and imaging.
Updating current models of angular path integration and the vestibular system. Current models of the vestibular system are wonderful and rich with well-characterized features of the otoliths and semicircular canals. Interestingly, none of them can accurately predict the magnitude and extent of the positional drifting seen in my data for the spaceflight analog condition in the Horizontal Roll Plane and Vertical Yaw Axis. One reason may be that many of these models are based on data collected from single-cycle profiles, whereas my experiments all involve multiple cycles of movement. Another benefit of my experimental paradigm is that humans can indicate their perception by pressing a trigger button, which cannot be done in the same way in animal models. I would love to conduct angular path integration experiments using my paradigm and use the results to update the current models.
Characterizing the huge individual differences in perception in the spaceflight analog condition. In my most recent paper, I passively moved blindfolded subjects in the Horizontal Roll Plane as they pressed a trigger button to indicate every time they perceived passing the start point. What was so surprising was the huge range of individual differences. Some subjects would press the trigger button twice as often as they did in the Vertical Roll Plane, where they had a good sense of their angular position. These subjects reported feeling that they were always near the start point, even though most of the time they were very far away. In contrast, others almost never pressed the trigger button and reported feeling that after a few oscillations they were never near the balance point. Interestingly, even I, knowing that the machine operates between +60 and -60 degrees, felt as if I were making 360-degree rotations. I think great value can be obtained by characterizing all of these different perceptions and determining their mechanisms.
Imaging, Attention, and Spatial Disorientation. My prior work suggests that attention plays a large role in the ability to learn the spaceflight analog task. I would love to use imaging to quantify attention in real time while subjects balance in the machine. Perhaps an easier implementation would be to quantify attentional ability in a visual balancing task and then correlate it with performance in the machine. I am also interested in imaging people while they are spatially disoriented in my spaceflight condition and characterizing the results.
Analogy of Dynamical Systems. If you have watched some of my presentations, you will have seen that I metaphorically relate stabilizing around the balance point to a homeostatic process. What is interesting is that in the Vertical Roll Plane, the gravitational vertical acts like a homeostatic setpoint or attractor. As a person is pitched backwards into the Horizontal Roll Plane, this setpoint/attractor shatters and subjects show positional drifting. I am very interested in mathematically characterizing this process. Another interesting question is what happens when a person has no strong perceived setpoint and is flooded with noise: does this lead to the nucleation of a setpoint? I think my paradigm has great potential to become a mathematical playground.
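The setpoint intuition can be sketched numerically with a toy one-dimensional system (gain and noise level are illustrative, not fitted to the device): with a restoring gain the origin acts as an attractor, and with the gain removed the state drifts like a random walk, mimicking the positional drifting seen without gravitational cues.

```python
import numpy as np

def simulate(setpoint_gain, steps=2000, noise=0.05, seed=0):
    """Euler simulation of x[t+1] = x[t] - k*x[t] + noise.

    setpoint_gain > 0: the origin is an attractor (the 'gravitational
    vertical' in the Vertical Roll Plane). setpoint_gain = 0: a pure
    random walk, a crude stand-in for drift in the Horizontal Roll Plane.
    """
    rng = np.random.default_rng(seed)
    x = 0.0
    trace = []
    for _ in range(steps):
        x += -setpoint_gain * x + noise * rng.standard_normal()
        trace.append(x)
    return np.array(trace)

anchored = simulate(0.1)   # bounded fluctuation around the setpoint
drifting = simulate(0.0)   # variance grows over time; no attractor
```

In this toy form, "nucleation of a setpoint" would correspond to the gain itself becoming a dynamical variable that grows from zero, which is one concrete way to start characterizing the process mathematically.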
Thrust 4: Educational Outreach
Because I used to be a high school teacher at Waltham High School, educational outreach is a central part of my identity. My vision statement provides a holistic view of my ideas. Overall, I want to create opportunities for all interested students that allow them to bloom and feel that they truly have the ability to create something of beauty. On the research side, this means that I teach students how to search for and comprehend scientific literature, and then how to use that literature review to build a research idea that has never been published before. I help them implement those ideas with minimal resources and teach them how to interpret their data. I did this through the WHS-Brandeis Summer Research Program, which I founded and ran, and then through the WHS Research Course.