Program

1st International Workshop on Deceptive AI @ECAI2020

30 August 2020

Santiago de Compostela, Spain

Pre-recorded presentations of the accepted papers can be found here (registration with Digital ECAI required): https://underline.io/events/24/sessions?eventSessionId=306

Join the discussion on the DeceptECAI2020 Slack channel and ask questions to the authors, speakers, and panelists. This is where the chairs, the presenters, and the attendees will interact asynchronously.

Morning Session: 10am - 1pm CEST

10:15-10:30 Zoom Meeting Opening

10:30-10:40 Welcome

10:40-11:00 Characterising deception in AI: a survey - Peta Masters, Wally Smith, Liz Sonenberg, and Michael Kirley

11:00-11:20 The role of environments in affording deceptive behaviour: some preliminary insights from stage magic - Wally Smith, Michael Kirley, Liz Sonenberg, and Frank Dignum

11:20-11:40 Coffee/Tea Break

11:40-12:00 A framework to challenge and justify decisions based on machine learning algorithms - Clément Henin and Daniel Le Métayer

Invited Talk

12:00-13:00 Lies, deception, and computation - Hans van Ditmarsch

Afternoon/Evening Session: 5pm - 8pm CEST

5:00-6:00 Discussion Panel - How do we hold deceptive machines accountable?

6:00-6:20 Studying Dishonest Intentions in Brazilian Portuguese Texts - Francielle Alves Vargas and Thiago Alexandre Salgueiro Pardo

6:20-6:40 Coffee/Tea Break

6:40-7:00 E-friend: A logical-based AI Agent System Chat-bot for Emotional Well-being and Mental Health - Mauricio Javier Osorio Galindo, Luis Angel Montiel Moreno, David Rojas-Velázquez, and Juan Carlos Nieves Sanchez

7:00-7:20 Wolves in Sheep’s Clothing: Using Shill Agents to Misdirect Multi-Robot Teams - Ronald Arkin, Michael Pettinati, and Akshay Krishnan

7:20-8:00 Discussion & Closing remarks

Invited Talk

Hans van Ditmarsch

LORIA, CNRS, France

Lies, deception, and computation

Abstract

In a dynamic modal logic, a 'lie that p' (where p is some proposition) is an action that is interpreted as a state transformer relative to the proposition p. These states are pointed Kripke models encoding the uncertainty of agents about their beliefs, and their transformation results in updated beliefs. Lies can be about factual propositions but also about the beliefs of other agents. Deception can be given meaning in terms of protocols consisting of sequences of such actions, in view of realizing an epistemic goal. Agents can have many different roles. Two speaker perspectives are: (obs) an outside observer who is lying to an agent that is modelled in the system, and (ag) an agent who is lying to another agent, where both are modelled in the system. Three addressee perspectives are: the *credulous* agent who believes everything it is told (even at the price of inconsistency), the *skeptical* agent who only believes what it is told if that is consistent with its current beliefs, and the *belief-revising* agent who believes everything it is told by consistently revising its current, possibly conflicting, beliefs. Then again, there may be non-addressed agents who perceive the lies and deception yet are different from the addressee.
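As a minimal illustration of the state-transformer reading, consider an outside observer lying that p to a single credulous agent b. The toy model M, its states w and v, and the accessibility relation below are a sketch in standard doxastic notation chosen here for concreteness; they are not the specific semantics presented in the talk.

% Illustrative sketch: a lie that p, told to a credulous agent b.
% Initial pointed Kripke model: b cannot tell whether p holds; actually it does not.
\[
M = \bigl(\{w,v\},\; R_b = \{w,v\}\times\{w,v\},\; V(p)=\{w\}\bigr), \qquad (M,v) \models \neg p .
\]
% The lie acts as a state transformer: the credulous agent restricts its
% accessibility relation to the p-states, regardless of whether p actually holds.
\[
M' = \bigl(\{w,v\},\; R_b' = \{(w,w),(v,w)\},\; V(p)=\{w\}\bigr), \qquad (M',v) \models B_b\,p \wedge \neg p .
\]
% Agent b now falsely believes p; a skeptical or belief-revising addressee
% would call for a different transformation of (M,v).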

Lying may be costly. Not only in terms of delayed response time in psychological experiments, on which we will not speak, but also in terms of the computational complexity of performing certain tasks while lying or deceiving, or while taking into account that one might be lied to. We do not know of hard results in this area, but there seem to be many interesting open questions. Results on computational complexity seem to be particularly relevant for AI. The issues are not merely the possibility of one lie among many truths coming out of the mouth of a single agent (we recall Ulam games), but also the presence of one unreliable agent among many trustworthy agents (typical in security protocol settings). How do we detect lies or liars, and how costly is that? There is ample room for special case studies and benchmarks, such as lying in gossip protocols.


Background:

Hans van Ditmarsch, Petra Hendriks, Rineke Verbrugge: Editors' Review and Introduction: Lying in Logic, Language, and Cognition. Topics in Cognitive Science (topiCS) 12(2): 466-484 (2020).


Discussion Panel

How do we hold deceptive machines accountable?

Alistair Isaac is a senior lecturer in Philosophy at the University of Edinburgh. His research primarily concerns mental representation and measurement in the cognitive sciences. In collaboration with Will Bridewell, he has defended the claim that robots will need the capacity to deceive if they are to participate in ordinary human social interactions.


Heather M Roff is a Senior Research Analyst at Johns Hopkins University Applied Physics Laboratory and an Associate Research Fellow at the Leverhulme Centre for the Future of Intelligence. She is also a Fellow in Foreign Policy at the Brookings Institution. Her research interests include the law, policy and ethics of emerging military technologies, such as autonomous weapons, artificial intelligence, robotics and cyber, as well as international security and human rights protection. She has published numerous peer-reviewed articles, authored the monograph Global Justice, Kant and the Responsibility to Protect, and written for various media and news outlets. She was a Senior Research Fellow at the University of Oxford in the Department of Politics & International Relations, a Research Scientist at DeepMind in the Ethics and Society team, a Special Government Expert for the US Department of Defense Innovation Board, as well as a Fellow at the New America Foundation on the National Cybersecurity Initiative and Future of War Project.

David Rojas-Velázquez is an independent researcher. David has a master's degree in computer science. His main area of research is the computational modeling of emotions. David's research interests span from intelligent agents, mental well-being, and digital companions to classification algorithms, and even a little bit of deep learning and soft computing. You can find his work on Google Scholar and on his ResearchGate profile.