Encouraging Human Challenge through Robotic Illusions: Motivation, Self-Efficacy, and White Lies
Abstract
As robots increasingly support people in rehabilitation, mobility, and daily activities, the central design question is not only how to assist, but also how to encourage people to challenge themselves. In human-centered robotics, excessive assistance can reduce initiative, whereas appropriately designed interactions can enhance motivation, engagement, and self-efficacy. This talk offers a technological perspective on how robots can foster the feeling of “I can do this” and thereby support human growth.
Drawing on research from Japan’s Moonshot R&D Program on adaptable AI-enabled robots, the talk introduces methods for designing robot behaviors that promote proactive human action rather than passive dependence. In particular, it focuses on the role of illusion in human–robot interaction: how robots may intentionally shape a user’s perception of success, ability, or progress in order to sustain challenge and learning. This raises a difficult ethical question: when does such design constitute a beneficial white lie, and when does it become unacceptable deception? The talk examines this boundary and argues that future human-centered robots must be designed not only for physical assistance, but also for the careful and responsible orchestration of motivational illusions.
Bio
Yasuhisa Hirata is a Professor in the Graduate School of Engineering at Tohoku University, Sendai, Japan. He received his B.E., M.E., and Ph.D. degrees in mechanical engineering from Tohoku University in 1998, 2000, and 2004, respectively. His research interests include human–robot interaction, multi-robot coordination, and factory automation. He serves as a Project Manager of Japan’s Moonshot R&D Program. He has also served as an Administrative Committee (AdCom) member of the IEEE Robotics and Automation Society (RAS) and currently serves as Chair of the IEEE RAS Technical Committee Cluster on Health and Medical Robotics.
Human-First Innovation for Physical AI: Principles for Human–Robot Interaction in the Age of AI
Abstract
As artificial intelligence becomes increasingly embedded in physical systems and robots, questions of ethics, design, and governance become central to the future of human–robot interaction. This talk introduces the concept of Human-First Innovation, a framework that emphasizes designing AI and robotic systems that enhance human well-being, agency, and creativity.
Drawing on international research on young people and AI, including projects conducted with the United Nations and Japan’s Moonshot R&D Program, the talk explores emerging opportunities and risks in AI-mediated societies. It introduces Self-Creation as a key concept for understanding how individuals reflexively shape their identities and life trajectories in increasingly AI-mediated environments. The talk concludes by outlining normative principles for Human-First Physical AI and discussing their implications for the design of future human–robot societies.
Bio
Toshie Takahashi is Professor at Waseda University and Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), University of Cambridge. Her research explores the societal, ethical, and cultural implications of AI and robotics, focusing on human-centered AI and the design of future AI societies. She has collaborated with the United Nations on global research and dialogue on youth and AI. She has also contributed to Japan’s JST Moonshot project AIREC, which explored ethical design and the societal implications of human–robot interaction. Website: https://www.toshietakahashi.com
What Could Possibly Go Wrong? A Case Study in Responsible Robotics
Abstract
Robot accidents are inevitable. In this talk I will outline a framework for social robot accident investigation: a framework that proposes both the technology and the processes that would allow social robot accidents to be investigated and lessons to be learned. I will describe a series of simulated robot accidents and investigations, enacted with volunteers and real robots. To conclude, I will position accident investigation within the practice of responsible robotics and argue that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.
Bio
Alan Winfield is Professor of Robot Ethics at the University of the West of England (UWE), Bristol, UK, Visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. Alan co-founded the Bristol Robotics Laboratory, where his research focuses on the science, engineering, and ethics of cognitive robotics. An advocate for robot ethics, he chairs the advisory board of the Responsible Technology Institute at the University of Oxford and has co-drafted new standards on ethical risk assessment and transparency.