Modelling Intended Impact of Assistive Interactions
Explainability is first and foremost grounded in social interaction. While it is important to research transparent algorithms, understand causal attributions, and design expressive interfaces when creating explainable agents, the ultimate goal is always to achieve a certain effect on a human perceiver. The need for an explanation only arises when part of the human's world model is flawed. To decide when, what, and how to communicate, it is therefore useful to incorporate inference of human cognitive processes into an agent's behavior planning, so that the possible impact on this world model can be assessed.
In this talk, I will present research across different applications that demonstrates how this can be approached and used for human-agent interaction. The talk will touch on work incorporating the predicted impact of robots' and automated vehicles' actions on human beliefs, behavior policies, mutual understanding, and situational cost.
Thomas H. Weisswange is a Principal Scientist at the Honda Research Institute Europe in Offenbach, Germany. He has a strongly interdisciplinary background covering bioinformatics, computational neuroscience, intelligent transportation, machine learning, technology ethics, and human-robot interaction. Thomas’ current research projects address robot interactions with groups, human-robot cooperation, theory-of-mind, and intelligent systems design.