Human-First Innovation for Physical AI: Principles for Human–Robot Interaction in the Age of AI
Abstract
As artificial intelligence becomes increasingly embedded in physical systems and robots, questions of ethics, design, and governance become central to the future of human–robot interaction. This talk introduces the concept of Human-First Innovation, a framework that emphasizes designing AI and robotic systems that enhance human well-being, agency, and creativity.
Drawing on international research on young people and AI, including projects conducted with the United Nations and Japan’s Moonshot R&D Program, the talk explores emerging opportunities and risks in AI-mediated societies. It introduces Self-Creation as a key concept for understanding how individuals reflexively shape their identities and life trajectories in increasingly AI-mediated environments. The talk concludes by outlining normative principles for Human-First Physical AI and discussing their implications for the design of future human–robot societies.
Bio
Toshie Takahashi is Professor at Waseda University and Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), University of Cambridge. Her research explores the societal, ethical, and cultural implications of AI and robotics, focusing on human-centered AI and the design of future AI societies. She has collaborated with the United Nations on global research and dialogue on youth and AI. She has also contributed to Japan’s JST Moonshot project AIREC, which explored ethical design and the societal implications of human–robot interaction. Website: https://www.toshietakahashi.com
What Could Possibly Go Wrong? A Case Study in Responsible Robotics
Abstract
Robot accidents are inevitable. In this talk I will outline a framework for social robot accident investigation: a framework that proposes both the technology and the processes that would allow social robot accidents to be investigated and lessons to be learned. I shall describe a series of simulated robot accidents and investigations, enacted with volunteers and real robots. To conclude, I will position accident investigation within the practice of responsible robotics and argue that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.
Bio
Alan Winfield is Professor of Robot Ethics at the University of the West of England (UWE), Bristol, UK, Visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. He co-founded the Bristol Robotics Laboratory, where his research focuses on the science, engineering, and ethics of cognitive robotics. An advocate for robot ethics, he chairs the advisory board of the Responsible Technology Institute at the University of Oxford and has co-drafted new standards on ethical risk assessment and transparency.