A host of epistemological concepts have been proposed and adopted to underpin responsible and trustworthy AI, such as explainability [Arrieta et al., 2020], alignment [Christian, 2021], and responsibility [Vallor, 2023]. While a pluralism of approaches may be stimulating, it may also confuse decision makers at this crucial time, when the first AI legislation is being drawn up around the globe and companies seek to implement responsible AI practices. Recent legislative texts and political declarations, such as the EU AI Act, the UK's Bletchley Declaration, and the US Presidential Executive Order, all include language around trustworthy and responsible AI. However, it is less clear which underlying guiding concepts decision makers can and should turn to in order to achieve trustworthy and responsible AI in practice.
This workshop seeks positions that critically examine underpinning concepts in a transdisciplinary manner and explore their adoption in texts such as legislative frameworks and practice-oriented guidelines. The term "guiding concepts" can be understood broadly for the purposes of the workshop, for instance as epistemological, moral, empirical, philosophical, or practical orientations, guidelines, and reasoning systems that have been, or could be, drawn upon to underpin notions of trustworthy and responsible AI. The goal of the workshop is to map the landscape of such guiding concepts to inform the communities that are developing (e.g., researchers) and adopting (e.g., policy makers and practitioners) such concepts to drive responsible AI.
We invite positions ranging from practical examples to academic abstracts, including, but not limited to, the following topics:
Critical examinations of guiding (e.g., epistemological) concepts for responsible AI
Explorations of adoption of concepts in legal or practical texts such as laws, regulations, and design guidelines
Discussions on fusing diverse disciplines to understand responsible AI
Practical case studies and examples of responsible AI deployment and use of existing or planned systems (these can be extensions of what you or your organization have already built).
Examples of organisational structure and role responsibility underpinning responsible AI adoption, including the level of accountability based on roles, e.g., CTO, researcher, or data governance team, or processes that are in place, e.g., seeking ethical review.
By attending the workshop, practitioners will gain a broad perspective on making trustworthy and responsible AI actionable in their organisations.
To register your interest in attending the workshop, please fill in this form.
We look forward to receiving your submission.
is Professor of Human-Computer Interaction (HCI) at the School of Computer Science, University of Nottingham (UoN), UK, and Research Director of the UKRI Trustworthy Autonomous Systems (TAS) Hub and Responsible AI UK (RAI UK), a new national programme on Responsible and Trustworthy AI. His research in Human-AI Interaction, which combines Artificial Intelligence (AI) and Human-Computer Interaction, takes a human-centred view to understand the adoption and embedding of AI-infused technologies in everyday life and work. [more]
is an Assistant Professor at the Eindhoven University of Technology in the Department of Industrial Design, with a background in philosophy, digital arts, and HCI. Her research concerns morally relevant interactions with agents such as robots and chatbots. Her recent work examines how we can explore our moral self-identity through conversations with digital entities, e.g., by acting compassionately towards a chatbot. [more]