BEWARE-23

Joint Workshop @ AIxIA 2023, Rome, Italy

The second edition of the BEWARE workshop is a forum for discussing ideas on the emerging ethical aspects of AI, with a focus on Bias, Risk, Explainability and the role of Logic and Logic Programming. BEWARE is co-located with the AIxIA 2023 conference, to be held in Rome from the 6th to the 9th of November, 2023.

Important info for the proceedings

In order to have the proceedings published, all authors must format their papers according to the CEURART style by December 15, 2023. They must then send their (possibly revised) papers (PDF files only) to fabioaurelio.dasaro@univr.it together with a signed Author Agreement form (available here) or a surrogate agreement (see example here). It is crucial to adhere to the specific requirements for author agreement forms set by CEUR to avoid processing delays. Ensure that the PDF of your agreement form is physically signed, scanned (or photographed with your phone), and submitted with the OCR feature turned off. For detailed guidance on submission and agreement form requirements, please visit https://ceur-ws.org/HOWTOSUBMIT.html.

Aims and Scope

Current AI applications do not guarantee objectivity and are riddled with biases and legal difficulties. AI systems need to perform safely, but problems of opacity, bias and risk are pressing. Definitional and foundational issues about what kinds of bias and risk are involved in opaque AI technologies are still very much open. Moreover, AI challenges Ethics and brings the need to rethink its very foundations.

In this context, it is natural to look for theories, tools and technologies to address the problem of automatically detecting biases and implementing ethical decision-making. Logic, Computational Logic and formal ontologies have great potential in this area of research, as logic rules are easily comprehensible by humans and favour the representation of causality, which is a crucial aspect of ethical decision-making. Nonetheless, their expressivity and transparency need to be integrated within conceptual taxonomies and socio-economic analyses that place AI technologies in their broader context of application and determine their overall impact.

This workshop addresses issues of logical, ethical and epistemological nature in AI through the use of interdisciplinary approaches. We aim to bring together researchers in AI, philosophy, ethics, epistemology, social science, etc., to promote collaborations and enhance discussions towards the development of trustworthy AI methods and solutions that users and stakeholders consider technologically reliable and socially acceptable.

Co-organized by

Important Dates

Workshop Date: 6th November 2023

Submission deadline: 17 September 2023 (extended)

Notification: 6 October 2023

Camera-ready: 20 October 2023

In collaboration with:

Sponsored by

The header was generated using Stable Diffusion, a model that generates images from text. The text prompt was "BEWARE". Read its disclaimer about biases on its webpage!