Welcome to my research page! I love idea generation that is super-saturated with deep discussions exploring vast, unknown, multidisciplinary creative spaces, both within and outside of science, some of which carry the hope of fertile collaborations. I love collaborations, so don't hesitate to contact me; my research interests go well beyond the research thrusts outlined below. On my contact page, you will find my email address and links to my social media accounts. If interested, you can check out my previous newsletters and even sign up to receive my monthly newsletter :)
I am a research scientist in the Ashton Graybiel Spatial Orientation Lab at Brandeis University. My research is focused on understanding human spatial disorientation and developing countermeasures that have applications for spaceflight, military aviation, and vestibular disorders. Below I give a compressed overview of my Research Thrusts:
Understanding Spatial Disorientation
Human Augmentation
Machine Learning/Artificial Intelligence
Educational Outreach/Community Engagement
At the end is a list of my publications which link to downloadable pdfs.
In my first paper (Vimal et al. 2016), I studied how blindfolded participants with full gravitational cues (in the vertical roll plane; see figure below) learned to balance a large device, the Multi-Axis Rotation System. I developed several metrics to quantify performance, finding that participants learned to use a more intermittent style of joystick deflection and to make joystick deflections at the optimal phase relationship between angular position and angular velocity.
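To illustrate the kind of phase metric described above, here is a minimal sketch, not the published analysis: the function, sampling rate, and deflection-onset indices are all hypothetical, chosen only to show how one might locate each joystick deflection in the position-velocity plane.

```python
import numpy as np

def deflection_phase(position, velocity, onsets):
    """Phase angle (degrees) in the position-velocity plane at each joystick
    deflection onset: 0 means the deflection responds to pure positional
    error, +/-90 to pure velocity. Hypothetical metric for illustration."""
    return np.degrees(np.arctan2(velocity[onsets], position[onsets]))

# Toy data: sinusoidal roll motion sampled at 100 Hz (illustrative values)
t = np.linspace(0.0, 10.0, 1000)
position = 10.0 * np.sin(2 * np.pi * 0.5 * t)   # roll angle in degrees
velocity = np.gradient(position, t)             # angular velocity, deg/s
onsets = np.array([100, 300, 500])              # hypothetical deflection onsets
phases = deflection_phase(position, velocity, onsets)
print(phases)
```

A real analysis would detect deflection onsets from the joystick signal itself and compare the resulting phase distribution against an optimal phase predicted by the system dynamics.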
In Vimal et al. 2017 and 2018, I changed the paradigm so that participants balanced in an orientation where they no longer received gravitational cues relevant to the task (the horizontal roll plane; see figure below). This caused them to become very disoriented. Collectively, participants learned very little: they only slightly reduced their angular velocity, their rate of crashing, and their destabilizing joystick deflections. To understand why the vertical roll plane has relevant gravitational cues and the horizontal roll plane does not, please watch the videos below.
In my 2017 paper, I found minimal transfer when going from an Earth analog condition (the vertical roll plane, where participants had full gravitational cues) to the 0g analog condition (the horizontal roll plane); however, there was significant transfer when going from 0g to Earth-g. Together, this suggested that balance control involves two processes: first, aligning with the gravitational vertical using gravitational cues, and second, minimizing velocity using motion cues. In the Earth analog condition, participants have access to both gravitational and motion cues, whereas in the 0g analog condition they only have access to motion cues.
In Vimal et al. 2019, I built on the insight from my 2017 paper that participants in the 0g analog condition mainly relied on motion cues. How could I create a training program that improved balancing using only motion cues? In my first attempt, I trained participants in the Earth analog condition (the vertical roll plane, where they have both gravitational and motion cues) by giving them extended time to balance along with a series of advice on how to focus on the motion cues. However, when I put these trained participants into the 0g condition, there was almost no transfer of skill! This was likely because people relied primarily on the gravitational cues in the Earth analog condition and did not pay attention to the motion cues until they got to the 0g condition. How could I motivate participants to nearly ignore the gravitational cues and focus on their motion cues while in the Earth condition? I accomplished this by creating a specialized training program in which participants had to find hidden balance points that were not at zero degrees. This required participants to ignore their desire to align with the gravitational vertical and instead focus on their motion cues to find the balance point. Once participants could do this, they performed much better in the 0g condition. Watch this video, at 12:28, for more information on the specialized training program. I later used this same specialized training program in my work on sensory augmentation devices (below).
Throughout all of the experiments I ran in the 0g spaceflight analog condition, I found huge individual differences. I hypothesized that the participants with a poor sense of their own orientation in the spaceflight condition would be the ones who performed poorly. Surprisingly, in Vimal et al. 2022, I found no correlation! Quantifying the accuracy of a participant's perception of their orientation turned out to be very difficult. Check out the paper for examples of the unusual and widely varying perceptions that participants had of their orientation. Some participants felt they were more than 180 degrees away from their actual location, whereas others could not even estimate where they were. These results suggest that a general warning signal may not be an effective countermeasure for spatial disorientation, because a pilot who perceives they are 180 degrees away will react very differently from a pilot who perceives they are only 20 degrees away. We need much more research on characterizing individual differences in perception, which would allow us to customize and personalize countermeasures for spatial disorientation to each individual's unique perceptual profile. Read the paper if you are interested in the neuroscience perspective on angular path integration and how it may cause some of the error accumulation. Finally, we did find correlations between a person's spatial acuity in Earth conditions (after experiencing vestibular stimulation) and their performance in the first few trials of the spaceflight condition. We concluded that vestibular stimulation may be a valuable way to assess individual differences during initial exposure to a disorienting spaceflight condition.
This paper's work is also summarized in this video.
I would love to find collaborators in any field, including computational modeling, psychology/neuroscience, computer science, and imaging.
Updating current models of angular path integration and the vestibular system. Current models of the vestibular system are wonderful and rich with well-characterized features of the otoliths and semicircular canals. Interestingly, none of them accurately predict the magnitude and extent of the positional drifting seen in my data for the spaceflight analog conditions in the Horizontal Roll Plane and Vertical Yaw Axis. One reason may be that many of these models are based on data collected from single-cycle profiles, whereas my experiments all involve multiple cycles of movement. Another benefit of my experimental paradigm is that humans can indicate their perception by pressing a trigger button, which cannot be done in the same way in animal models. I would love to conduct angular path integration experiments using my paradigm and use the results to update the current models.
Characterizing the huge individual differences in perception in the spaceflight analog condition. In my 2022 paper, I passively moved blindfolded subjects in the Horizontal Roll Plane as they pressed a trigger button every time they perceived passing the start point. What was so surprising was the huge range of individual differences. Some subjects pressed the trigger button twice as often as they did in the Vertical Roll Plane, where they had a good sense of their angular position. These subjects reported feeling that they were always near the start point, even though most of the time they were very far away. In contrast, others almost never pressed the trigger button and reported feeling that, after a few oscillations, they were never near the start point. Interestingly, even I, knowing that the machine operates between +60 and -60 degrees, felt as if I were making 360-degree rotations. I think great value can be obtained by characterizing all of these different perceptions and determining their mechanisms.
Imaging, Attention and Spatial Disorientation. My prior work suggests that attention plays a large role in the ability to learn the spaceflight analog task. I would love to do imaging while running the experiment to quantify attention in real time while balancing. Perhaps an easier implementation would be to quantify attentional ability in a visual balancing task and then correlate it with performance in the machine. I am also interested in imaging people while they are spatially disoriented in my spaceflight condition and characterizing the results.
Analogy of Dynamical Systems. If you have watched some of my presentations, you will have seen that I metaphorically relate stabilizing around the balance point to a homeostatic process. What is interesting is that in the Vertical Roll Plane, the gravitational vertical acts like a homeostatic setpoint or attractor. As a person is pitched backwards into the Horizontal Roll Plane, this setpoint/attractor shatters and subjects show positional drifting. I am very interested in mathematically characterizing this process. Another interesting question is what happens when a person has no strong perceived setpoint and is flooded with noise: does that lead to the nucleation of a new setpoint? I think my paradigm has lots of potential to become a mathematical playground.
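To make the setpoint/attractor metaphor concrete, here is a minimal sketch under assumed toy dynamics (not a fitted model of the data): a first-order system in which the gravitational vertical acts as an attractor with gain k > 0, versus the 0g analog where the attractor is absent (k = 0) and position drifts as a random walk.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(k, n_steps=5000, dt=0.01, noise=1.0):
    """Euler simulation of d(theta)/dt = -k * theta + noise.
    k > 0: the setpoint at 0 degrees acts as an attractor (bounded wobble).
    k = 0: no setpoint; theta becomes a random walk (positional drifting)."""
    theta = np.zeros(n_steps)
    for i in range(1, n_steps):
        pull = -k * theta[i - 1] * dt
        theta[i] = theta[i - 1] + pull + noise * np.sqrt(dt) * rng.standard_normal()
    return theta

with_attractor = simulate(k=2.0)     # vertical roll plane analogy
without_attractor = simulate(k=0.0)  # horizontal roll plane analogy
print(with_attractor.std(), without_attractor.std())
```

The k > 0 case is an Ornstein-Uhlenbeck process whose fluctuations stay bounded near the setpoint, while the k = 0 case spreads without bound, mirroring the drifting seen when the attractor "shatters."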
Vertical Roll Plane
Horizontal Roll Plane (spaceflight analog)
In Vimal et al. (2023), we explored whether vibrotactile feedback could enhance performance and perception in our disorienting 0g spaceflight analog task. We discovered that vibrotactile feedback significantly enhanced performance, though not to the level seen in our Earth analog task. One reason was that participants experienced a perceptual conflict that led to confusion: they had an incorrect sense of their orientation, which differed from the correct location indicated by the vibrotactors. These findings suggest that vibrotactile feedback may not be able to change the perception of orientation (at least with our level of exposure). We were also surprised to find that when participants with vibrotactile feedback were given 40 minutes of balance time in the 0g spaceflight condition, they showed very minimal learning. Why would this be? The sensory substitution literature suggested that giving them time for active sensing (free exploration) would be enough to create a bond between human and device. We then had another group of participants go through a specialized training program that taught them to ignore their vestibular sense while focusing on the vibrotactors (see details above). After the training, this group showed learning and improvement in the 0g analog condition. Together, this means that training aimed at connecting a human with a sensory augmentation device must include not only a free exploration component (active sensing) but also a component that teaches them to set aside one sense while focusing on the feedback from the sensory augmentation device.
In Vimal et al. (2025), we extended our 2023 study by also examining Martian and Lunar analog conditions. We found that as the gravitational level decreased (Earth > Martian > Lunar > 0g), performance worsened and confusion increased. Once again, we found that vibrotactile feedback significantly improved performance but not the perceptual confusion. We gave one group extended time in the Lunar analog condition, and unlike the 2023 study in 0g, these participants showed significant learning without a specialized program. Altogether, this means that when some gravitational cue is present, active sensing may be enough to connect human and device; however, if there is no gravitational cue, then training must also teach participants to diminish one sense while focusing on another.
Another interesting finding has been that participants report high levels of confusion about their perceived velocity in the 0g analog condition (and also in the Martian and Lunar analog conditions). This is unexpected because, while the gravitational cues detected by the otoliths are affected in our partial-g analog conditions, the motion cues detected by the semicircular canals should remain unaffected. This could be an understudied factor in spatial disorientation. We did find that confusion about velocity decreased with extended time in the Lunar analog condition, but confusion about position did not. This could be a sign that some form of short-term sensory reweighting is occurring.
Another surprising finding in Vimal et al. 2023 and 2025 was that participants reported very high levels of trust in the vibrotactile feedback, yet they were unable to fully rely on the devices in the 0g and partial-g conditions. This highlights an important point: cognitive trust in a device or AI (assessed through surveys) does not always mean that people will be able to use it, especially in stressful, fast-paced, disorienting conditions.
We found that the specialized training program in Vimal et al. 2023 helped bridge this gap between trust in and reliance on the vibrotactile feedback. This suggests that we need to think about training and developing 'sub-cognitive', 'gut-level' trust in sensory augmentation devices, and that cognitive trust alone will not be enough.
The effect of Technology on the trajectory of Human Destiny is uncertain. Some think that Artificial Intelligence (AI) will completely replace humans, possibly liberating Humanity from mundane and repetitive tasks. Others believe that Intelligence Augmentation (IA) is the best path, where technology is used to enhance human performance and learning. What fascinates me, and what I find breathtakingly beautiful, is the Human Journey. Using the equipment I have access to, I want to explore deeper questions that are fundamental to the interaction of Technology and Humanity. For example: At what point does a human feel merged with a device or machine, and what are the neural mechanisms and time course of that learning? What 'language' should a device use to communicate so that it enhances the human experience? What kind of training is needed to build both conscious and unconscious trust between human and technology? Does this trust break down during times of great anxiety and stress? Does it break down when there is sensory conflict and great uncertainty, such as during spatial disorientation? If a human is augmented through different sensory systems, is all of that information really processed in parallel? What kinds of skills generalize across different conditions of human augmentation?
I am more than happy to collaborate on any of the projects below or anything else. I would love to find collaborators in psychology, neuroscience, engineering, computer science and any other field.
The role of stress and anxiety in the effectiveness of vibrotactile cueing during spatial disorientation. The simplest experiment would be to survey each subject's level of anxiety and stress beforehand and see whether it correlates with performance in the spaceflight analog condition while using vibrotactile cueing. The next level would involve measuring heart rate, cortisol, and other related indicators. Additionally, based on some prior work, I hypothesize that the subjects experiencing high levels of stress will also be the ones who do not explore the full solution space and instead converge on a suboptimal solution.
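As a sketch of that "simplest experiment," the correlation could be computed like this; the survey scores and performance numbers below are synthetic stand-ins with a built-in negative trend, not real data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in data for 30 participants: a pre-session anxiety
# survey score and a balancing performance score (higher = better).
anxiety = rng.uniform(20, 80, size=30)
performance = 100 - 0.8 * anxiety + rng.normal(0, 10, size=30)

# Pearson correlation between anxiety and balancing performance
r = np.corrcoef(anxiety, performance)[0, 1]
print(f"r = {r:.2f}")
```

A real analysis would also report a significance level (e.g., via scipy.stats.pearsonr) and control for confounds such as prior experience.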
Individual differences in the perception of 'merging with the machine' and its relation to performance during spatial disorientation. I am very interested in the perceptual experiences of people as they use vibrotactile cueing. Do some feel as if they have merged with the machine or device, and does this feeling have any utility for learning and performance? The simplest approach would be to administer a survey before the experiment and then correlate it with performance. I have more thoughts on sensory substitution, individual differences and spatial disorientation in this video.
The perils of vibrotactile cueing in a spatially disorienting condition. In my experiments, I find a wide range of perceptual experiences when subjects are spatially disoriented. Some even report feeling as if they are upside down when really they are on their backs. This could be because these subjects place greater emphasis on tactile information, and the 5-point harness pushing on their shoulders makes it seem as if they are upside down. There is a possibility that in certain situations vibrotactile cueing may lead to more confusion when someone is disoriented. It would be interesting to explore this.
Providing cueing using methods beyond vibrations. I am definitely interested in providing sensory information using other methods and am always open to collaborate on that.
Human postural stabilization using multisensory cueing and active prostheses. The Ashton Graybiel Spatial Orientation Lab has a history of doing research on human postural balancing. So, at some point, I want to see which of my findings generalize to human postural balancing.
The video above is a summary of Vimal et al. 2023 and Vimal et al. 2025
In all of my previous studies, I noticed huge individual differences in performance. Could we use machine learning to predict who will perform well? In Vimal et al. 2020, we used machine learning (Gaussian Naive Bayes) to create a predictive classifier trained on early learning in the 0g spaceflight analog condition. We found that we could predict a person's final performance level with 80% accuracy. However, the most surprising finding came when we used a Bayesian Gaussian Mixture method to cluster participants into three statistically distinct groups representing Proficient, Somewhat Proficient, and Not Proficient performance. The Not Proficient participants were not just randomly bad; they had all developed the same suboptimal strategy. All of them used the joystick similarly: they smashed it hard from side to side. Interestingly, doing this reduced the number of crashes they experienced in the 0g condition, but at the cost of very large and fast oscillations.
In Wang et al. 2022, we explored whether deep learning could predict, ahead of time, when someone would lose control and crash while balancing in the 0g spaceflight analog condition. We used stacked gated recurrent units (GRUs) to predict crash events 800 ms in advance with an AUC (area under the curve) of 99%. When we prioritized reducing false negatives, we found it resulted in more false positives. False negatives occurred when participants made destabilizing joystick deflections that rapidly moved the MARS away from the balance point. These unpredictable destabilizing joystick deflections, which occurred in the interval after the input data, are likely a result of spatial disorientation. If our model could work in real time, we calculated that immediate human action would prevent 80.7% of crashes; however, once we accounted for human reaction times (~400 ms), only 30.3% of crashes could be prevented, suggesting that one solution could be an AI taking temporary control of the spacecraft during these moments.
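The data preparation behind this kind of crash predictor can be sketched as follows: slice the control signal into fixed-length windows and label each window by whether a crash occurs within the prediction horizon after it. The window length, sampling rate, and horizon below are illustrative choices, not the paper's exact values.

```python
import numpy as np

def make_windows(signal, crash_times, fs=50, window_s=2.0, horizon_s=0.8):
    """Cut a 1-D control signal into fixed-length windows and label each
    window 1 if a crash occurs within `horizon_s` seconds after the window
    ends. Hypothetical preprocessing sketch for a sequence model."""
    win = int(window_s * fs)
    horizon = int(horizon_s * fs)
    crash_idx = np.asarray(crash_times)
    X, y = [], []
    for start in range(0, len(signal) - win - horizon):
        end = start + win
        X.append(signal[start:end])
        # label: any crash in the interval (end, end + horizon]
        y.append(int(np.any((crash_idx > end) & (crash_idx <= end + horizon))))
    return np.array(X), np.array(y)

# Toy example: 60 s of data at 50 Hz with crashes at 10 s and 30 s
sig = np.zeros(3000)
X, y = make_windows(sig, crash_times=[500, 1500])
print(X.shape, int(y.sum()))
```

Each window would then be fed to the sequence model (e.g., stacked GRUs), with the class imbalance between crash and no-crash windows handled during training.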
I would love to find collaborators in any field, including computer science, AI development, game development, and math/physics.
Generalization of our machine learning method to human postural balancing. As with any experiment, there is always the question of whether the findings are specific to the experimental paradigm or whether they generalize to other systems. In the Ashton Graybiel Spatial Orientation Lab, other investigators research human postural balancing, and my future goal is to use a similar machine learning implementation to predict loss of control in human postural balancing in adverse and novel conditions.
Training an AI agent to balance: I would love to create a balancing game (on a cellphone, tablet or computer) where the human first trains an AI agent in a simple balancing task. Then, the Human-AI team has to balance in more complex situations where there are multiple suboptimal solutions and only one optimal solution. I would love to map the solution space and see how different people explore it. I want to use the individual differences in solution space exploration as a metric and see if it correlates with how people explore the solution space in my spaceflight condition. As mentioned in the previous section, I would also love to see whether individual differences in anxiety and stress predict how a person explores this solution space.
The video above is a summary of our 2020 paper
Because I used to be a high school teacher at Waltham High School, educational outreach and community engagement are a very central part of my identity. On my (rough draft) research-related community engagement page, you will find a greater description of how I mix my research with projects related to art, dance, and fashion to excite the community. In the past, I founded, directed, and ran a volunteer-based program, the WHS-Brandeis Summer Research Program, which gives high school students inquiry-based research opportunities.
I also try to spread my research through journalists who publish articles in newspapers such as the Boston Herald, and through popular YouTube videos, one of which has reached nearly 1 million views. All of these articles can be found here.
I have also done a variety of talks that bring in multidisciplinary perspectives to my research, for example, I have spoken about how my research is relevant to:
Other
Mannan, S. A., Hansen, P., Vimal, V. P., Davies, H. N., DiZio, P., & Krishnaswamy, N. (2024, November). Combating Spatial Disorientation in a Dynamic Self-Stabilization Task Using AI Assistants. In Proceedings of the 12th International Conference on Human-Agent Interaction (pp. 113-122).
Mannan, S., Vimal, V. P., DiZio, P., & Krishnaswamy, N. (2024, May). Embodying Human-Like Modes of Balance Control Through Human-In-the-Loop Dyadic Learning. In Proceedings of the AAAI Symposium Series (Vol. 3, No. 1, pp. 565-569).
Pfaff, D., Martin, E. M., Weingarten, W., & Vimal, V. (2008). The central neural foundations of awareness and self-awareness. Progress of Theoretical Physics Supplement, 173, 79-98.
Quinkert, A. W., Vimal, V., Weil, Z. M., Reeke, G. N., Schiff, N. D., Banavar, J. R., & Pfaff, D. W. (2011). Quantitative descriptions of generalized arousal, an elementary function of the vertebrate brain. Proceedings of the National Academy of Sciences, 108(Supplement 3), 15617-15623.