Research Objectives

RO1. Apply experimental phenomenology at the proximal, peripersonal, and public scales of visual, auditory, and tactile trajectory production to derive perceptually consistent cross-sensory mappings. Iterate the sensory translation process to derive supramodal trajectory invariants. Assess the mappings through controlled experiments on both single and multiple trajectories, simultaneously collecting physical and psychophysiological data for trajectory modeling.
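
As an illustration only, and not a claim of this proposal, one much-studied candidate for such a supramodal trajectory invariant is the two-thirds power law linking movement speed to path curvature, which has been reported in both the production and the perception of movement:

    v(t) = K · κ(t)^(−1/3)

where v(t) is the tangential velocity along the trajectory, κ(t) is the path curvature, and K is a velocity gain factor. Regularities of this kind are the type of invariant that RO1 aims to identify and preserve across visual, auditory, and tactile renderings of the same trajectory.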

RO2. Apply machine learning methods to predict trajectories either from hand-crafted features or from features discovered automatically by deep learning architectures, via embeddings learned during training. Automatically assess the importance of features of different modalities (visual, auditory, tactile) for a wide range of trajectory-learning tasks involving different scales, and explore their variability. Recover trajectory-related information associated with missing features from the features that are available, and learn mappings between pairs of such features. Develop optimal control and game-theoretic models of trajectory evolution for rational artificial agents emulating human behavior, and compare their predictions with real data.
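
To make the cross-modal feature recovery idea in RO2 concrete, the sketch below uses off-the-shelf regression tools on synthetic placeholder data: it fits a model that maps the auditory features of a trajectory to a "missing" visual feature, then ranks which auditory features carry that information. The feature names, dimensions, and data generator are hypothetical and do not describe the project's actual pipeline.

    # Illustrative sketch only (not project code): all data are synthetic
    # placeholders and the feature names are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical per-trajectory auditory descriptors (e.g. pitch slope,
    # loudness range): 500 trajectories x 6 features.
    audio = rng.normal(size=(500, 6))
    # Hypothetical visual feature of the same trajectories (e.g. mean speed),
    # synthetically tied to two of the auditory features.
    visual_speed = 0.8 * audio[:, 0] - 0.5 * audio[:, 3] + rng.normal(scale=0.1, size=500)

    X_train, X_test, y_train, y_test = train_test_split(
        audio, visual_speed, test_size=0.25, random_state=0)

    # (i) Predict the "missing" visual feature from the available auditory ones.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out R^2: {model.score(X_test, y_test):.3f}")

    # (ii) Estimate how much each auditory feature contributes to the mapping.
    imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for i in np.argsort(imp.importances_mean)[::-1]:
        print(f"audio feature {i}: importance {imp.importances_mean[i]:.3f}")

In the project itself, the same roles would be played by learned embeddings and deep architectures rather than hand-built descriptors; the sketch only fixes the shape of the task (map available-modality features to missing-modality ones, then quantify feature importance).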

RO3. Demonstrate the effectiveness of multisensory and multiscale trajectory projection through the design and evaluation of applications in ecologically valid, realistic contexts of use, namely sports training and participatory performing arts.