Research

Sensorimotor Cognitive Tasks: Mechanisms of Emotion, Motion, and Cognitive Control

To prevent, recover from, and cure mental illness, it is crucial to understand how complex behaviors shift from normal to abnormal. This idea places the measurement of human behavior at the center of psychiatric research. Several behavioral tasks, such as the Stroop, flanker, stop-signal (SST), and go/no-go (GNG) tasks, are considered essential measures of cognitive control. Clarifying the brain mechanisms of cognitive control is crucial to understanding the pathophysiology of mental illnesses, including ADHD, substance use disorders, and psychosis. As measurement tools, however, these behavioral tasks are highly unreliable: their test-retest reliability is subpar, they cannot distinguish gradations along the continuum from normal to abnormal, and they correlate poorly with circuit-level (fMRI), physiological (EEG and ERP), and self-report measures.

To overcome these limitations, our research proposes a novel approach using sensorimotor cognitive tasks (SMCT). The SMCT integrates motor variables into traditional response time and accuracy measures. Participants navigate a computer cursor from the bottom of the screen to the top and indicate their responses by clicking a button. The SMCT samples the cursor's position approximately every 15 milliseconds and extracts a variety of spatiotemporal features, which serve as dependent measures. Evidence suggests that the neural circuits linking the cortex, the basal ganglia (BG), and the thalamus (i.e., CBGT circuits) play a critical role in controlling motor, cognitive, and emotional behavior. With this in mind, our research aims to develop a theoretical and empirical foundation for sensorimotor cognitive tasks.
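As a concrete illustration of the kind of spatiotemporal features such a task can extract from timestamped cursor samples, the sketch below computes movement time, maximum perpendicular deviation from the direct start-to-end path, a path-curvature ratio, and peak speed. The function name and this particular feature set are illustrative assumptions, not the lab's actual analysis pipeline.

```python
import math

def trajectory_features(samples):
    """Extract simple spatiotemporal features from a cursor trajectory.

    `samples` is a list of (t, x, y) tuples, with t in seconds, sampled
    roughly every 15 ms as described above. Feature names are illustrative.
    """
    (t0, x0, y0) = samples[0]
    (t1, x1, y1) = samples[-1]
    dx, dy = x1 - x0, y1 - y0
    direct_len = math.hypot(dx, dy)  # straight-line start-to-end distance

    max_dev = 0.0      # maximum perpendicular deviation from the direct path
    total_dist = 0.0   # distance actually travelled along the trajectory
    peak_speed = 0.0
    for i in range(1, len(samples)):
        tp, xp, yp = samples[i - 1]
        tc, xc, yc = samples[i]
        step = math.hypot(xc - xp, yc - yp)
        total_dist += step
        if tc > tp:
            peak_speed = max(peak_speed, step / (tc - tp))
        if direct_len > 0:
            # Perpendicular distance of (xc, yc) from the start-end line.
            dev = abs(dx * (yc - y0) - dy * (xc - x0)) / direct_len
            max_dev = max(max_dev, dev)

    return {
        "movement_time": t1 - t0,
        "max_deviation": max_dev,
        "curvature_ratio": total_dist / direct_len if direct_len else float("nan"),
        "peak_speed": peak_speed,
    }
```

A perfectly straight movement yields a curvature ratio of 1.0 and zero deviation; attraction toward a competing response shows up as a larger maximum deviation.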

Affective Computing: Machine Learning, Deep Learning, and Brain-Computer Interface

If the control mechanisms for motion, emotion, and cognition are interconnected, is it possible that our movements reflect our thoughts, feelings, and identity? Moreover, can we create a computer program that detects people's emotions, moods, and beliefs by analyzing their movements and their interactions with computers, including through a brain-computer interface? For instance, imagine studying algebra on a computer that tracks your brainwaves wirelessly to detect frustration and provides real-time feedback. Or imagine a computer that analyzes your typing speed, cursor movements, and button clicks to assess your personality and offer feedback. Is this feasible, and if so, how? To investigate these questions, we use a range of methods, including trajectory analysis of the computer cursor, wireless EEG, facial expression analysis, and physiological measures such as EDA and ECG.
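One minimal way to sketch the idea of inferring affect from interaction behavior is a nearest-centroid classifier over hypothetical features such as typing speed, mean cursor speed, and click rate. Everything below is invented for illustration: the feature set, the training values, and the labels. A real system would use validated features from many participants and proper cross-validation.

```python
import math

# Hypothetical training rows: (typing speed in keys/s, mean cursor speed
# in px/s, click rate in clicks/s) paired with an affect label.
# These numbers are illustrative, not real measurements.
TRAIN = [
    ((4.0, 800.0, 0.5), "calm"),
    ((3.8, 750.0, 0.4), "calm"),
    ((1.5, 300.0, 2.0), "frustrated"),
    ((1.2, 250.0, 2.5), "frustrated"),
]

def _centroids(rows):
    """Average the feature vectors within each label."""
    sums = {}
    for x, label in rows:
        s, n = sums.setdefault(label, ([0.0] * len(x), 0))
        sums[label] = ([a + b for a, b in zip(s, x)], n + 1)
    return {label: [v / n for v in s] for label, (s, n) in sums.items()}

def classify(features, rows=TRAIN):
    """Return the label whose centroid is nearest to `features`."""
    cents = _centroids(rows)
    return min(cents, key=lambda lbl: math.dist(features, cents[lbl]))
```

Because the raw features live on very different scales, a real pipeline would standardize them before computing distances; that step is omitted here for brevity.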

Conscious and Unconscious Cognitive Processing: How Do System 1 and System 2 Communicate?

The acquisition of knowledge is not always a deliberate process, as new ideas, concepts, and beliefs can form without conscious awareness. This raises intriguing questions about how our conscious and unconscious minds interact. Specifically, how does System 1 communicate with System 2, and how does System 2 influence System 1? Recent studies demonstrate that even brief exposures to symbols, such as the Facebook "like" button, can affect conscious behavior even when people are unaware of the symbol's presence. This phenomenon prompts us to consider the extent to which the unconscious mind can comprehend the deeper meanings of signs and symbols. We explore these questions through subliminal semantic priming procedures that investigate how briefly presented stimuli, including signs, symbols, words, or sentences, influence computer mouse movements.

Manifold Learning and Dimensionality Reduction

The human mind processes information from multiple sensory modalities. However, many behavioral studies still rely on coarse measures such as task accuracy and response time. To gain a better understanding of human behavior and brain function, it is crucial to analyze both physiological and behavioral signals from various sources. Yet recording data from multiple sources in a single experiment is technically challenging and costly, and a major difficulty is integrating the data obtained from those different sources (data fusion). Our research addresses these challenges by developing new dimensionality reduction algorithms that can effectively integrate multimodal/multisensory data.
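As a simple baseline against which such algorithms can be compared, the sketch below implements early fusion: each modality's features are z-scored (so that, say, EEG amplitudes and cursor speeds become comparable) and then concatenated into one trial-by-feature matrix that a subsequent dimensionality-reduction step could operate on. The function names and the two-modality example are illustrative assumptions.

```python
import math

def zscore(column):
    """Standardize one feature column to zero mean and unit variance."""
    n = len(column)
    mu = sum(column) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in column) / n)
    return [(v - mu) / sd if sd else 0.0 for v in column]

def fuse(*modalities):
    """Early fusion: z-score each modality's features, then concatenate.

    Each modality is a list of per-trial feature vectors; all modalities
    must cover the same trials. Returns one fused trial-by-feature matrix.
    """
    n_trials = len(modalities[0])
    assert all(len(m) == n_trials for m in modalities)
    fused = [[] for _ in range(n_trials)]
    for mod in modalities:
        for col_idx in range(len(mod[0])):
            col = zscore([row[col_idx] for row in mod])
            for i, value in enumerate(col):
                fused[i].append(value)
    return fused
```

Early fusion treats every standardized feature as equally informative; the appeal of learned dimensionality-reduction methods is precisely that they can weight and combine modalities less naively.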