September 27th - October 2nd 2025 | Seoul, South Korea
These sessions focus on the ethical and responsibility considerations surrounding the development of robotics foundation models: large-scale machine learning models capable of generalising across a wide range of embodiments and tasks. These models have the potential to rapidly improve the capabilities and deployment options of robotic systems, unlocking new applications and accelerating adoption. Alongside the development of specialised vision-language-action (VLA) robotics foundation models, standard large language models (LLMs) are also increasingly being integrated into embodied systems and studied for their impact on human-robot interaction (HRI), including in the generation of socioaffective behaviours and for deployment in social and care settings. Key capabilities of these models that contribute to more reliable and understandable robotic systems include multimodal understanding, structured reasoning, instruction adherence, natural language generation, and the prediction of appropriate actions for robotic control. Together, these capabilities could support the development of continuous, contextually grounded robot control policies that exhibit human-like adaptability.
However, the adoption of robotics foundation models requires careful consideration of their ethical and societal implications. The physical embodiment of these models raises concerns about mistakes that cause physical harm, as well as about the collection and commodification of behavioural and psychological data. Their performance in diverse environments also demands critical examination: these models risk exhibiting performance disparities that make spatial understanding and action prediction more reliable in some contexts than others, limiting their adoption by a global user base. Sensitivities and risks also exist in HRI, where sycophantic or overly agreeable behaviours can undermine trust and meaningful engagement, particularly in contexts requiring critical or sensitive responses. These examples highlight the need for robust, context-aware ethical discussion that accounts not only for the risk of physical harm from robotic systems, but also for fairness, accountability, epistemic diversity, and contextual human-robot interaction.
We believe that these sessions will benefit the robot learning community by bringing together a diverse, multidisciplinary group of expert speakers and organisers with experience to share on the responsible development of robotics foundation models. The key questions and topics these sessions aim to address include:
Human-robot interaction. How can we ensure that robotics foundation models facilitate safe, intuitive, and trustworthy user interactions? What are the potential challenges in HRI arising from the use of these models in robotic embodiments? How can these models adapt to individual differences, preferences, or cultural norms?
Appropriate governance and oversight. What frameworks, guidelines, and best practices are needed for the responsible governance and oversight of robotics foundation models? How can we establish accountability and transparency in their development and deployment?
Protecting against misuse. What are the potential risks and unintended consequences associated with the misuse of robotics foundation models? How can we proactively identify and mitigate these risks?
Bias and fairness. How can we identify and mitigate biases embedded within robotics foundation models and the datasets they are trained on, ensuring fair and equitable adoption and outcomes across diverse user groups?
Safety and reliability. How can we ensure the safety and reliability of robotic systems powered by foundation models, particularly in complex and dynamic real-world environments beyond R&D settings?
Personalisation and adaptability. What methods allow robotics foundation models to personalise behaviour in real time, accommodating individual differences (such as personality, age, and gender) and culturally specific interaction styles? How can we design intuitive, user-friendly models that effectively integrate human factors?
Overall, we aim to create a space for learning and open discussion, with the goal of reaching future leaders in industry, academia, and policy working on robotics projects, for whom awareness of and attention to these considerations are critical.