2nd ICRA 2026 Workshop on Robot Ethics
Ethical, Legal and User Perspectives in Robotics & Automation WOROBET
June 1 – Full-Day Workshop
Contributions are welcome
We are witnessing the transition of robots from labs to publicly accessible spaces, where they interact with a diverse range of people in different contexts. This requires an increased focus on Human-Robot Interaction and raises inherent ethical and legal issues. The workshop will enable participants to better understand how different assessments and potential measures concerning robot ethics affect design and deployment. Gaining insight into current regulations, standards and initiatives addressing ethical and legal issues will benefit both researchers and developers, by showing how these can open new directions in robotics and automation research.
Being able to employ measures that address these implications and issues is vital to making robots more acceptable and trustworthy, and fundamental for responsible research and innovation. The workshop will cover key aspects through concrete examples from ongoing research projects and applied work on legal considerations, the development of relevant standards, universal design principles and more. This will be presented from an international perspective, with speakers representing the Global North and South and with an emphasis on gender, cultural and ethnic diversity.
No specific prerequisite knowledge of ethical, legal, and social issues (ELSI) is required; the workshop therefore targets all attendees of the ICRA 2026 conference.
The authors of accepted papers (four pages or more) will be invited to submit an extended version for tentative inclusion in a book to be published in the Springer Proceedings in Advanced Robotics book series (approved). These papers will be peer-reviewed as regular journal papers. One or more summaries of selected topics covered at the workshop will also be submitted as an opinion or comment article to a journal.
Workshop Objectives
The main objective of the workshop is to raise awareness, prompt debate and share knowledge about ethical, legal and user/social perspectives for robot assistants operating in personal and public environments with humans.
Artificial Intelligence (AI) technologies, including robots, pose both challenges and opportunities for health and home care. Among the relevant and essential aspects currently under discussion are privacy, cybersecurity, safety, diversity, and inclusion. Attention to the ethical implications and legal issues surrounding robots and intelligent systems is increasing. Recently, the European Union has adopted regulations on Artificial Intelligence, notably the Artificial Intelligence Act (AIA), and the new Machinery Regulation, which replaces the Machinery Directive. The AIA is the first regulation of its kind in the world and will strongly affect AI-based systems, including intelligent robots and other software used, developed, or imported into Europe. The AIA will therefore set key requirements for the global market for robots and AI systems.
Other important ongoing efforts include defining standards for intelligent systems and studying design with user participation. It is also important to determine which legal frameworks must be followed to ensure user safety in environments densely populated by robots. The workshop will provide an overview of the most pressing ethical and legal challenges surrounding the development and use of robots in human environments.
The workshop aims to raise awareness of these topics and to engage the community in thinking about how to mitigate risks and reduce unfavorable impacts on society. It will illustrate the challenges related to privacy, security, safety, and user diversity through several examples.
Invited speakers
Toshie Takahashi, Waseda University, Japan
Title: Human-First Innovation for Physical AI: Principles for Human–Robot Interaction in the Age of AI
Abstract
As artificial intelligence becomes increasingly embedded in physical systems and robots, questions of ethics, design, and governance become central to the future of human–robot interaction. This talk introduces the concept of Human-First Innovation, a framework that emphasizes designing AI and robotic systems that enhance human well-being, agency, and creativity.
Drawing on international research on young people and AI, including projects conducted with the United Nations and Japan’s Moonshot R&D Program, the talk explores emerging opportunities and risks in AI-mediated societies. It introduces Self-Creation as a key concept for understanding how individuals reflexively shape their identities and life trajectories in increasingly AI-mediated environments. The talk concludes by outlining normative principles for Human-First Physical AI and discussing their implications for the design of future human–robot societies.
Bio
Toshie Takahashi is Professor at Waseda University and Associate Fellow at the Leverhulme Centre for the Future of Intelligence (CFI), University of Cambridge. Her research explores the societal, ethical, and cultural implications of AI and robotics, focusing on human-centered AI and the design of future AI societies. She has collaborated with the United Nations on global research and dialogue on youth and AI. She has also contributed to Japan’s JST Moonshot project AIREC, which explored ethical design and the societal implications of human–robot interaction. Website: https://www.toshietakahashi.com
Alan Winfield, University of the West of England, Bristol, UK
Title: What Could Possibly Go Wrong? A Case Study in Responsible Robotics
Abstract
Robot accidents are inevitable. In this talk I will outline a framework for social robot accident investigation; a framework that proposes both the technology and processes that would allow social robot accidents to be investigated and lessons learned. I shall describe a series of simulated robot accidents and investigations, enacted with volunteers and real robots. To conclude, I will position accident investigation within the practice of responsible robotics and argue that social robotics without accident investigation would be no less irresponsible than aviation without air accident investigation.
Bio
Alan Winfield is Professor of Robot Ethics at the University of the West of England (UWE), Bristol, UK, Visiting Professor at the University of York, and Associate Fellow of the Cambridge Centre for the Future of Intelligence. Alan co-founded the Bristol Robotics Laboratory, where his research is focussed on the science, engineering and ethics of cognitive robotics. Alan is an advocate for robot ethics; he chairs the advisory board of the Responsible Technology Institute at the University of Oxford and has co-drafted new standards on ethical risk assessment and transparency.