Ethics and Trust in Human-AI Collaboration: Socio-Technical Approaches
August 21st, 2023
Macao
It is increasingly acknowledged that AI should be used to augment human intelligence rather than replace it. It is reasonable and useful to automate some tasks, but most tasks will be tackled by combining the complementary capabilities of humans and machines.
This is the case for "classical" AI models such as classifiers and predictors, which are designed to support human decision making, especially in high-risk domains. Even the most recent AI advances, such as those in generative AI, exploit the power of language and other kinds of content to let AI systems interact better with humans and support their creativity.
However, for this collaboration to work well, special attention must be paid to designing hybrid systems so that trust and ethics issues are addressed satisfactorily. Without trust, which rests on capabilities such as misinformation detection, explainability, transparency, and fairness, humans will not fully exploit the available AI capabilities. We also need to address ethical issues, such as the possible deskilling and displacement of human decision makers and the risk of value misalignment. Trust and ethics must be central and integral aims in the design, development, and use of human-AI collaboration systems, just as decision quality is: we need decisions that are better than those made by humans or machines alone, while achieving trust and resolving ethical issues.
To achieve this, we can and should exploit knowledge of how humans make decisions and interact with others, whether humans or artificial agents. Cognitive theories and insights from the cognitive sciences are therefore essential to achieving these goals.
This workshop aims to connect three main areas of study: Human-AI collaborative environments, Ethics and Trust, and Cognitive theories of the human mind. We envision an audience of scholars from at least these three disciplines who can interact and exchange ideas and solutions.