Call for Papers

1st International Workshop on Deceptive AI @ECAI2020

30 August 2020

Santiago de Compostela, Spain

Dates

Workshop: 30 August 2020 (online)

Submission Deadline: 25 May 2020

Notification Deadline: 18 June 2020

Camera-Ready Deadline: 15 July 2020

Call for Papers

There is no dominant theory of deception. The literature treats the different aspects and components of deception separately, sometimes offering contradictory evidence and opinions on them. Emerging AI techniques offer an exciting and novel opportunity to expand our understanding of deception from a computational perspective. However, the design, modelling, and engineering of deceptive machines is non-trivial from conceptual, engineering, scientific, and ethical perspectives. The aim of DeceptECAI is to bring together people from academia, industry, and policy-making to discuss and disseminate the current and future threats, risks, and even benefits of designing deceptive AI. The workshop takes a multidisciplinary approach (Computer Science, Psychology, Sociology, Philosophy & Ethics, Military Studies, Law, etc.) to discuss the following aspects of deceptive AI:

1) Behaviour - What type of machine behaviour should be considered deceptive? How do we study deceptive behaviour in machines as opposed to humans?

2) Reasoning - What kinds of reasoning mechanisms lie behind deceptive behaviour? Also, what types of reasoning mechanisms are more prone to deception?

3) Cognition - How does cognition affect deception, and how does deception affect cognition? Also, what role, if any, do agent cognitive architectures play in deception?

4) AI & Society - How does the ability of machines to deceive influence society? What kinds of measures do we need to take in order to neutralise or mitigate the negative effects of deceptive AI?

5) Engineering Principles - How should we engineer autonomous agents such that we are able to know why and when they deceive? Also, why should or shouldn’t we engineer or model deceptive machines?

We invite submissions related to deception in, but not restricted to, the following topic areas:

  • Deceptive Machines

  • Multi-Agent Systems and Agent Based Models

  • Trust and Security in AI

  • Machine Behaviour

  • Argumentation

  • Machine Learning

  • Explainable AI - XAI

  • Human-Computer (Agent) Interaction - HCI/HAI

  • Philosophical, Psychological, and Sociological aspects

  • Ethical, Moral, Political, Economical, and Legal aspects

  • Storytelling and Narration in AI

  • Computational Social Science

  • Applications related to deceptive AI

Submissions & Format


  1. Long papers (12 pages + 1 page of references): Long papers should present original research work and be no longer than thirteen pages in total: twelve pages for the main text (including all figures but excluding references), plus one additional page for references.

  2. Short papers (7 pages + 1 page of references): Short papers may report on work in progress. Short paper submissions should be no longer than eight pages in total: seven pages for the main text (including all figures but excluding references), plus one additional page for references.

  3. Position papers on potential research challenges are also welcome, in either long or short paper format.


Submissions are NOT anonymous. The names and affiliations of the authors should be stated in the manuscript.

All papers should be formatted following the Springer Lecture Notes in Computer Science (LNCS/LNAI) style and submitted through the EasyChair link below.


Submissions Closed!

Publication

The DeceptECAI2020 proceedings will be submitted to Springer CCIS for publication. CCIS is indexed by various abstracting and indexing (A&I) services, including Scopus, EI-Compendex, and DBLP.

We also plan a special issue on the topic of "Deceptive AI" in a highly ranked AI journal. Authors of selected papers will be invited to submit extended versions of their work to this special issue.

Contact

Any questions about submissions should be addressed to: stefan.sarkadi@kcl.ac.uk