Our ability to interact with our environment seems limitless. We can learn to use keyboards—in the office using ten fingers, at home using only our thumb on the touchscreen of our phone. We can learn to play violin, to drive a car, to do heart surgery, and so on. According to the ideomotor principle of action control, such intention-based actions are performed to produce internally pre-specified and desired effects in the environment (see James, 1890; Stock & Stock, 2004). In this respect, any motor action would result in, or rather from, anticipating its perceptual consequences (Greenwald, 1970; Le Bars et al., 2016).
Pushing the frontiers of natural motor actions, recent advances in neuroscience and engineering are enabling human beings to act directly upon the environment with “thoughts” through Brain-Computer Interfaces (BCI). In a typical non-invasive BCI system, the user’s neural activity is recorded via brain imaging techniques (e.g., EEG, fNIRS, fMRI) and then decoded with computational and Artificial Intelligence (AI) methods. This decoding step translates the brain signals into digital commands that are understandable by the connected device(s) (e.g., a computer or a robot).
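As a purely illustrative sketch of this record-then-decode pipeline, the snippet below simulates a short multichannel “EEG” epoch, extracts band-power features, and maps them to a digital command with a nearest-centroid classifier. All names (`band_power`, `decode_command`), the frequency bands, and the toy data are assumptions made for illustration, not part of any real BCI toolkit.

```python
import numpy as np

def band_power(epoch, fs, band):
    """Mean spectral power per channel in a frequency band (via FFT).

    epoch: array of shape (n_channels, n_samples); fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[..., mask].mean(axis=-1)

def decode_command(epoch, fs, centroids, commands):
    """Toy 'AI' decoder: features -> nearest class centroid -> command."""
    feats = np.concatenate([band_power(epoch, fs, (8, 12)),    # mu band
                            band_power(epoch, fs, (13, 30))])  # beta band
    label = int(np.argmin([np.linalg.norm(feats - c) for c in centroids]))
    return commands[label]

# Toy usage: 2 s of simulated 8-channel data and two random class centroids
# (in practice, centroids would be learned from labeled calibration trials).
rng = np.random.default_rng(0)
fs, n_channels, n_samples = 250, 8, 500
epoch = rng.standard_normal((n_channels, n_samples))
centroids = [rng.standard_normal(2 * n_channels) for _ in range(2)]
cmd = decode_command(epoch, fs, centroids, ["move_left", "move_right"])
```

Real systems replace each stage with far richer components (artifact rejection, spatial filtering, trained classifiers), but the structure (signal acquisition, feature extraction, classification, command output) is the one described above.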
Besides the obvious benefit of BCIs for patients suffering from motor impairments, the dramatic expansion of this technology (see Douibi et al., 2021) raises important questions regarding the disembodied nature of the resulting actions (Steinert et al., 2018). Notably, one might wonder whether BCI-mediated actions can even be qualified as real human actions, given the reduced sense of agency or responsibility they might induce in users (see Limerick et al., 2014). Moreover, it is worth noting that most non-invasive BCI paradigms aim to enable “acting with thoughts” but do not necessarily respect fundamental aspects of neuroscientific models of human action, especially its perceptual counterpart, which remains barely considered in BCI-mediated actions (see Wang et al., 2019).
In the current project, we attempt to reconcile neuroscientific models of human action with non-invasive BCI methods by proposing an innovative and more naturalistic BCI paradigm that takes advantage of the ideomotor principle.
We believe that (1) adapting BCI paradigms could allow simple action-effect bindings, and consequently action-effect predictions, and (2) using the neural underpinnings of those action-effect predictions as features of interest in AI techniques could lead to more accurate and naturalistic BCI-mediated actions.