Abstract: Datasets, algorithms, machine learning models and AI-powered tools used for perception, prediction and decision making constitute the core of autonomous robotics and human-robot interaction. The majority of these are prone to data or algorithmic bias (e.g., along demographic attributes such as race, age and gender) that could have catastrophic consequences for various members of society. Addressing these biases, and providing solutions to avoid or mitigate them, is therefore of utmost importance for creating and deploying fair and unbiased agents and robots that rely on such systems in their interactions with human users.
This talk will present the Cambridge Affective Intelligence and Robotics (AFAR) Lab's (https://cambridge-afar.github.io/) research explorations in this area, emphasising the need for deployment in real-world problems and settings, for example robotised child mental wellbeing assessment and robotic wellbeing coaching in the workplace.
Bio: Hatice Gunes (https://www.cst.cam.ac.uk/people/hg410) is a Full Professor of Affective Intelligence and Robotics at the University of Cambridge's Department of Computer Science and Technology. She is an internationally recognized scholar in affective computing and social robotics, and a former President of the Association for the Advancement of Affective Computing. At the Affective Intelligence and Robotics Lab (AFAR Lab), Prof Gunes spearheads award-winning research on multimodal, social, and affective intelligence for AI systems and robots, with a strong focus on wellbeing, ethics, fairness and explainability. Her work has earned numerous accolades, including Honouree for the Sony Women in Technology Award with Nature 2025, Best Student Paper Award Finalist @ IEEE FG’24, Best Paper Award in Responsible Affective Computing @ ACII’23, RSJ/KROS Distinguished Interdisciplinary Research Award Finalist @ IEEE RO-MAN’21, Best Paper Award Finalist @ IEEE RO-MAN’20, Outstanding Paper Award @ IEEE FG’11, and Best Demo Award @ IEEE ACII’09.
Abstract: The talk will focus on the European Commission's efforts to operationalize high-level ethical principles and requirements in the domain of research, and its efforts to create a first set of specialized guidelines in the domains of fairness, algorithmic auditing and the assessment of the impact upon fundamental rights. In addition, the challenges and opportunities associated with the gradual implementation of the EU's AI Act will be highlighted, along with the particularly important work of other international organizations in the ethical governance of new and emerging technologies. The presentation will shed light on the importance of developing a specific, distinct framework for ethical assessment in the field of artificial intelligence, one that could enhance the EU's efforts to materialize a human-centered and trustworthy model of AI research and development, aligned with its efforts to promote responsible innovation.
Bio: Dr. Mihalis Kritikos is a Policy Analyst at the Ethics and Integrity Sector of the European Commission (DG Research and Innovation) working on the ethical development of emerging technologies, with a special emphasis on AI ethics and value alignment. He is also the new Secretary of the European Group on Ethics in Science and New Technologies (EGE), an independent body appointed by the President of the European Commission to provide advice on all aspects of Commission policies and legislation where ethical, societal and fundamental rights dimensions intersect with the development of science and new technologies. Mihalis is also a Senior Fellow at the Brussels School of Governance (VUB) working on digital ethics and rights, the digital transition, AI governance and values, and responsible innovation, and the author of two books in the field of disruptive technologies (Ethical AI Surveillance in the Workplace, Emerald, 2023, and EU Policy-Making on GMOs: The False Promise of Proceduralism, Palgrave Macmillan, 2018).
Abstract: In this talk, I will pursue a dual mission: First, as a call to action for researchers in the community, I will emphasize the fundamental importance of considering ethics and the ethical implications in everyday research practices. Second, I will discuss specific ethical issues that are particularly relevant to the fields of social robotics and human-robot interaction. In this context, ethical and social challenges arising from the integration of robots into our daily lives will be highlighted.
Bio: Prof. Friederike Eyssel is Professor of Psychology and head of the research group “Applied Social Psychology and Gender Research” at the Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Germany. She has held visiting professorships in social psychology at the University of Münster, the Technical University of Dortmund, the University of Cologne, and New York University Abu Dhabi. Friederike is passionate about basic and applied social psychological research, and she is interested in various research topics ranging from social robotics, trust, and acceptance of novel technologies to attitudes and attitude change. She is co-author of various textbooks on social robots, among them “HRI: An Introduction” (Cambridge University Press, 2020, 2024; Hanser, 2020, 2022), “Robots in Education” (Routledge, 2021), and “Theory and Practice of Sociosensitive and Socioactive Systems” (Springer, 2022). Friederike Eyssel received the Bielefeld University Award for Sustainable Engagement for Gender Equality in 2021. Furthermore, she received the Bielefeld University Karl-Peter Grotemeyer Prize for Excellence in Teaching in 2021 and the IEEE-RAS Cognitive Robotics Distinguished Lecturer Award, 2022-2024 and 2025-2027. She has been an active part of the social robotics community, for instance as Associate Editor for ACM “Transactions on Human-Robot Interaction” and Section Editor for “Robotics”. Friederike is Chairwoman of Bielefeld University's Ethics Review Board and will share expertise from this context in her talk.
Abstract: Trust in AI technology relies on a trust-building process that considers both context and the practical reasoning of users. This presentation uses examples from social robots concerning data acquisition and usage and extends these findings to other AI technologies, such as LLMs. It argues that trust is context-dependent and that any development aimed at creating trustworthy AI can benefit from continuous monitoring and local fine-tuning, based on transparent, locally defined AI principles.
Bio: M. Antonietta Grasso is Principal Scientist at NAVER LABS Europe, located in France and owned by the Korean internet company Naver. NAVER LABS Europe is the largest private AI lab in France. Her work focuses on research in human-machine interaction and human-robot interaction. As Principal Scientist, she plays a cross-functional role across the various research groups within the Interactive Systems group, aiming to define interdisciplinary projects for the application and evaluation of machine learning techniques. Her research contribution lies in identifying user needs and informing technology development to meet them, mainly through qualitative studies. Recently, she has been studying the ethics of AI to inform the design of privacy-aware robotics and trustworthy interaction with AI-enabled technology.
Robots, Humans and the Law
Abstract: This talk surveys how European law addresses human–robot interaction. Recent initiatives have created new rules for artificial intelligence and updated the legal framework for machinery, with a special emphasis on safety. At the same time, general laws apply. For example, the General Data Protection Regulation (GDPR) applies when robots collect or process personal data. Rather than regulating robotics as a distinct field, the law addresses it through separate regimes (machinery, AI, and data protection), each with its own priorities. This fragmented approach highlights a central challenge: the categories used in law do not fully align with how robotics is developed and deployed. Understanding this gap is crucial for interdisciplinary discussions about how human–robot interaction is shaped in practice.
Bio: Professor Tobias Mahler is director of the Norwegian Research Center for Computers and Law. His current research focuses on regulating robotics and artificial intelligence, as well as on risk-based approaches in law, including in the context of digital identity. He has advised the EU Commission in its preparation for the Digital Services Act. Mahler is co-founder of the Legal Innovation Lab Oslo (LILO) at the University of Oslo. He has been a guest researcher at various universities, including Stanford Law School and the Max Planck Institute for Foreign and International Criminal Law.