Over the past decades, the rise of Artificial Intelligence (AI) has deeply reshaped our daily lives, especially through the intelligent automation of a growing number of processes such as driving, cooking, manufacturing, and medical assistance. This constant growth of automation brings obvious benefits for human safety, error prevention, and industrial productivity. However, the interposition of AI systems between human agents and controlled processes also raises important issues for the sense of agency (SoA) of individuals operating in such increasingly automated societies.
SoA refers to the experience of controlling one's own actions and, through them, events in the outside world. This mechanism is crucial for the attribution of causal, as well as moral and legal, responsibility. Thus, one might legitimately wonder whether human agents actually feel in control of, and responsible for, actions and outcomes that are mainly achieved, and sometimes decided, by machines or systems. This question has typically been studied in the aeronautical domain, where autopilot assistance significantly reduces pilots' feeling of control. Such results are not trivial and raise major ethical concerns, notably in the military domain, where lethal autonomous weapons systems are designed to reduce human intervention to a minimum. Moreover, in the industrial domain, the full automation of certain tasks has been associated with a decrease in operators' performance, known as the "out-of-the-loop performance problem" (OOLPP), which is linked to a loss of vigilance and situational awareness. Recently, degraded performance and disengagement in operators have been proposed to be mediated by a loss of agency, which has also been linked to significant biases in cognitive processes such as attention and memory. Thus, in line with previous findings, agency in human-machine interaction (HMI) has been identified as a central human factor to study in order to make automation technologies more acceptable and useful to users.
Overall, individuals' agency appears to be significantly impacted even within basic HMI. SoA is a crucial mechanism not only for user experience and system acceptability, but also with respect to the ethical and cognitive implications of its potential degradation. To our knowledge, there are no established guidelines or frameworks dedicated to the development of AI and autonomous systems that respect human agency. Building on our recent findings, the present research project aims to systematically explore ways to restore the sense of agency in operators interacting with AI systems, in naturalistic and ecologically valid settings close to daily life situations. This main objective will be pursued in two steps. First, we will aim to extract new metrics and proxies of agency in order to provide objective measures adapted to naturalistic settings. Second, through a human-centered approach, we will study how diverse AI parameters (e.g., explainability and predictability) could be optimized to enhance users' sense of agency, using ecologically valid HMI experiments (e.g., automated driving).