Workshop at the 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2024)


Large Language Models in the RoMan Age: 

Exploring Social Impact and Implications for Design and Ethics

Full-day hybrid workshop as part of the IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2024)

August 26, 2024 / Pasadena, CA, USA

Welcome to the 1st Workshop on
LLMs and HRI at RO-MAN!


The objective of this workshop is to engage with questions arising at the intersection of Large Language Models (LLMs) and human-robot interaction (HRI), in particular those related to social impact, ethical considerations, and the design of robot personality and behaviour. 

The integration of LLMs into social robotics is part of a larger trend in generative AI that potentially represents a significant shift in HRI and social robotics: LLMs are increasingly being implemented in a wide range of applications, from well-being and education to healthcare and business. This research trend offers several notable advantages, such as personalising and improving interactions (in particular open-domain dialogue), improving sentiment analysis for affective and emotional interaction, and reducing reliance on manual methods such as the Wizard of Oz technique. 

However, integrating LLMs with social robots brings new complexities in design, particularly in shaping robot personality and behaviour to align with the diverse social and cultural norms and contexts in which social robots operate. This trend also raises ethical and societal concerns, and it is vital to rigorously examine the potential risks and ethical issues associated with using LLMs in social robots, especially considering the impact on vulnerable communities. 

This workshop aims to contribute to the field of social robotics and to the RO-MAN community by providing insights into key design, ethical, and social impact considerations of LLM-integrated social robots. The integration of LLM-based technologies into social robots is a research field still in its infancy, and by assessing these considerations early, researchers will be able to mitigate potential risks and harms while properly taking advantage of the benefits as the field advances. It is therefore timely and necessary to make room for discussion of the social impact, design, and ethical considerations around privacy, autonomy, consent, and the potential for dependency on robotic systems.

From a societal standpoint, the session will highlight the impact of LLM-integrated social robots on social dynamics, cultural norms, and human behaviour, prompting discussions on how these robots could influence human relationships and societal structures. Furthermore, the workshop will address the need for value-sensitive design and design justice approaches that prioritise consent, contestability, human values, needs, and ethical considerations. Lastly, the session will discuss normativity in robotic design, be it embodied or computational, which encompasses theorising how to deal with normative decisions in HRI. The potential impact of this workshop is substantial: it will not only broaden the community's understanding of the ethical and societal dimensions of integrating LLMs into HRI, but also shape future research directions that prioritise ethical considerations that might otherwise be overlooked in the design and implementation of these emerging technologies. 

The workshop welcomes contributions on the following themes (but is not limited to them):


Speakers

(Tokyo Tech)

(Lund University)

(Seoul National University)

TBC

Organisers

Alva Markelius

Alva Markelius is an AI Ethics & Society MSt student at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, a research engineer at the DICE lab at the University of Gothenburg, and a recipient of the Top 100 Brilliant Women in AI Ethics™ award 2024. Her research interests are the ethics of social AI and robotics, embodied cognition, affect and emotion, gender and intersectionality, and global AI narratives. She obtained her bachelor's degree in cognitive science at the University of Gothenburg and Seoul National University, specialising in AI and robotics. 

ajkm4 [at] cam [dot] ac [dot] uk 

Laetitia Tanqueray

Laetitia Tanqueray is a PhD candidate at the Department of Technology and Society, Lund University, Sweden. With a background in law and social sciences, Laetitia investigates human-robot interaction (HRI) within healthcare. Her published work has mostly focused on informing HRI design, including collaborations with roboticists. Laetitia is currently researching informal caregivers within HRI, with a particular focus on young carers in England and Wales.

laetitia [dot] ltanqueray [at] lth [dot] lu [dot] se

Yoon Kyung Lee

Yoon Kyung Lee is a sixth-year Ph.D. candidate (ABD) in cognitive psychology at Seoul National University. She specialises in affective science, social cognition, language models, and empathic AI. She is currently a Social Science/Humanities Research Professional at the University of Texas at Austin, and was previously a lecturer at the Samsung Design and Art Institute. Yoon Kyung received her B.S. in Psychology from the University of Iowa and her M.A. in Cognitive Psychology from Seoul National University. Her current projects focus on enhancing AI empathy in social robots and healthcare systems. 

yoonlee78 [at] snu [dot] ac [dot] kr & yklee [at] utexas [dot] edu

Yoonwon Jung

Yoonwon Jung is a first-year Ph.D. student in the Department of Cognitive Science at the University of California, San Diego. She completed her B.A. in Psychology and M.A. in Cognitive Psychology at Seoul National University. During her master's, Yoonwon leveraged natural language processing to study emotion and social cognition, while also conducting human-robot interaction research. Her current research focuses on exploring cultural diversity in emotion concepts using LLMs, and on leveraging LLMs to design better emotion models for social robots.

y5jung [at] ucsd [dot] edu

Dr. Robert Lowe

Robert is a docent in Cognitive Science and Associate Professor at the Department of Applied IT, University of Gothenburg, where he is head of the DICE research lab. He also works at the Humanized Autonomy unit at Research Institutes of Sweden (RISE), Gothenburg. His general research interests concern cognition and emotion in relation to human-interactive technologies. He conducts research using empirical methods for data collection and artificial intelligence tools. 

robert [dot] lowe [at] gu [dot] se

Dr. Stefan Larsson 

Stefan Larsson is a senior lecturer and Associate Professor in Technology and Social Change at the Department of Technology and Society, Lund University, Sweden. He leads a multidisciplinary research group that focuses on issues of trust and transparency and on the socio-legal impact of autonomous and AI-driven technologies in various domains, such as consumer markets, the public sector, and health and social robotics. 

stefan [dot] larsson [at] lth [dot] lu [dot] se

IMPORTANT DATES