RaD-AI 2023

Program

The workshop took place on May 30, 2023, as part of AAMAS 2023.
All times are shown in GMT.

8:20 Opening remarks

8:30 Global objectives (discussing "Adversarial")

Quantitative Planning with Action Deception in Concurrent Stochastic Games (AAMAS accepted paper)
Chongyang Shi, Shuo Han, and Jie Fu

Should my agent lie for me? A study on humans’ attitudes towards deceptive AI (AAMAS accepted paper)
Stefan Sarkadi, Peidong Mei, and Edmond Awad 

Certifiably Robust Policy Learning against Adversarial Multi-Agent Communication (ICLR accepted paper)
Yanchao Sun, Ruijie Zheng, Parisa Hassanzadeh, Yongyuan Liang, Soheil Feizi, Sumitra Ganesh and Furong Huang 

9:00 Liz Sonenberg: Mind the gap: Towards Rational Imperfection via ‘Imperfect’ Rationality (Invited talk)

I will reflect on the requirements and computational mechanisms for designing agents with Theory of Mind capabilities for human-agent interactions.


10:15 Coffee Break


10:45 Local objectives (discussing "Alignment")

Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI (AAMAS accepted paper)
Malek Mechergui and Sarath Sreedharan 

Intelligent Disobedience: A Novel Approach for Preventing Human Induced Interaction Failures in Robot Teleoperation (HRI accepted paper)
Kavyaa Somasundaram, Andrey Kiselev, and Amy Loutfi

Stubborn: An Environment for Evaluating Stubbornness between Agents with Aligned Incentives
Ram Rachum, Yonatan Nakar and Reuth Mirsky

11:45 Plan recognition (discussing "Attention of agents")

Attention! A Dynamic Epistemic Logic Model for Inattentive Agents (AAMAS accepted paper)
Gaia Belardinelli and Thomas Bolander

Exploring the Cost of Interruptions in Human-Robot Teaming
Swathi Mannem, William Macke, Peter Stone, and Reuth Mirsky


12:30 Lunch Break


14:00 Joel Leibo: Conformity to Social Norms (Invited talk)

When does it make sense to conform to social norms? Why do social norms form in the first place? Why are they sometimes clearly useful, e.g., norms that encourage cooperation, and other times apparently pointless, e.g., taboos prohibiting the eating of certain foods? In this talk I will describe a few recent studies using a computational model of normativity based on multi-agent reinforcement learning. This line of work answers some of these questions and sheds new light on the social meaning of rebellion and disobedience in the sense of this workshop.


15:00 Consistency check (discussing "Revision")

Agent-directed runtime norm synthesis (AAMAS accepted paper)
Andreasa Morris Martin, Marina De Vos, Julian Padget, and Oliver Ray 

Towards An Ethical Rebellion System
Ursula Addison, Matthew Molineaux and Othalia Larue 


15:45 Coffee Break


16:30 Matthias Scheutz: We don’t need no... rebel robots (Invited talk)

In this presentation, I will argue that we don't have nor want rebel robots, but instead need robots that have at least a rudimentary understanding of our normative expectations and can use them to determine whether human instructions should be rejected for normative reasons.

17:30 Mediation (discussing "Explainability")

Trusting artificial agents: communication trumps performance (AAMAS accepted paper)
Marin Le Guillou, Laurent Prévot and Bruno Berberian

Beyond Rejection Justification: the Case for Constructive Elaborations to Command Rejections by Autonomous Agents
Gordon Briggs

18:15 Wrap-up session

Liz Sonenberg

Liz Sonenberg is a Professor in the School of Computing and Information Systems at The University of Melbourne, Australia.

Her research, with colleagues nationally and internationally, focuses on the foundations of teamwork in artificial intelligence (AI), especially mechanisms to support decision making in hybrid teams of humans and software agents.

She holds the Chancellery roles of Pro Vice Chancellor Research Systems and Pro Vice Chancellor Digital & Data, and is active in teaching and research in the Melbourne School of Engineering. Previously at the University of Melbourne, she was Head of the Department of Information Systems and Dean of the Faculty of Science.


Joel Leibo

Dr Joel Leibo is a senior staff research scientist at DeepMind. He obtained his PhD from MIT, where he studied computational neuroscience and machine learning with Tomaso Poggio. Joel was one of the first researchers to join DeepMind, starting as an intern in 2010 and joining full-time after finishing his PhD in 2013.

He is interested in reverse engineering human biological and cultural evolution to inform the development of artificial intelligence that is simultaneously human-like and human-compatible. In particular, Joel believes cooperation is the quintessential human ability. 


Matthias Scheutz

Professor Matthias Scheutz is the Karol Family Applied Technology Professor of cognitive and computer science, director of the Human-Robot Interaction Laboratory and director of the human-robot interaction degree programs at Tufts University. 

His current research focuses on complex ethical cognitive robots with natural language interaction, problem-solving, and instruction-based learning capabilities in open worlds.

He has over 400 peer-reviewed publications in artificial intelligence, artificial life, agent-based computing, natural language understanding, cognitive modeling, robotics, human-robot interaction and foundations of cognitive science.


Organizing Committee

David Aha

Navy Center for Applied Research in AI
Naval Research Laboratory
Washington, DC, USA

Gordon Briggs

Navy Center for Applied Research in AI
Naval Research Laboratory
Washington, DC, USA

Reuth Mirsky

Dept. of Computer Science
Bar Ilan University
Israel

Ram Rachum

Dept. of Computer Science
Bar Ilan University
Israel

Kantwon Rogers

Dept. of Computer Science
Georgia Tech
USA


Peter Stone

Dept. of Computer Science
University of Texas at Austin
TX, USA

Sony AI