BEWARE-22

Joint Workshop @ AIxIA 2022, December 2, 2022, University of Udine, Udine, Italy

The organizers of the BRIO Workshop (Bias, Risk and Opacity in AI), the ME&E-LP Workshop (Machine Ethics & Explainability - the Role of Logic Programming), and the AWARE AI Workshop (Ethics and AI, a two-way relationship) have joined efforts to create BEWARE, a forum for discussing ideas on the emerging ethical aspects of AI, with a focus on bias, risk, explainability, and the role of Logic and Logic Programming. BEWARE is co-located with the AIxIA 2022 conference, held in Udine from the 28th of November to the 2nd of December.

Latest News

Aims and Scope

Current AI applications do not guarantee objectivity and are riddled with biases and legal difficulties. AI systems need to perform safely, but problems of opacity, bias and risk are pressing. Definitional and foundational questions about what kinds of bias and risk are involved in opaque AI technologies remain very much open. Moreover, AI challenges Ethics itself and raises the need to rethink its foundations.

In this context, it is natural to look for theories, tools and technologies to address the problem of automatically detecting biases and implementing ethical decision-making. Logic, Logic Programming and formal ontologies have great potential in this area of research, as logic rules are easily comprehensible by humans and favour the representation of causality, a crucial aspect of ethical decision-making. Nonetheless, their expressivity and transparency need to be integrated within conceptual taxonomies and socio-economic analyses that place AI technologies in their broader context of application and determine their overall impact.

This workshop addresses issues of a logical, ethical and epistemological nature in AI through interdisciplinary approaches. We aim to bring together researchers in AI, philosophy, ethics, epistemology, social science, and related fields, to promote collaboration and enhance discussion towards the development of trustworthy AI methods and solutions that users and stakeholders consider technologically reliable and socially acceptable.

Sponsors

Important Dates

The header was generated using Stable Diffusion, a model that generates images from text. The text prompt was "An abstract purple wallpaper of robots holding danger signs". Read its disclaimer about biases on its webpage!