Title: Argumentation for Trustworthy Automated Decision Making: Explanations and Multi-Policy Conflicts Resolution
Abstract: This lecture introduces computational argumentation as a powerful paradigm for building trustworthy Automated Decision-Making Systems, vital for ensuring transparency and accountability in AI applications with social impact. We will delve into the Gorgias system, which implements the preference-based argumentation framework of Logic Programming with Priorities (LPP; Kakas and Moraitis, 2003) and powers rAIson, a no-code platform for developing symbolic AI applications, and demonstrate its capabilities in two key areas. First, we explore how its argumentative reasoning results can be used to provide human-readable explanations that are Attributive, Contrastive, and Actionable. Second, we show how this framework can resolve conflicts that arise in communities of multiple stakeholders, each with its own private policy. The approach defines an arbitration meta-policy that prioritizes stakeholders, avoiding the high complexity of resolving every possible conflict between competing options directly. This is highly relevant for scenarios such as medical data access legislation, energy management in smart buildings with competing user, manager, and safety policies, and data sharing agreements. Participants will have the opportunity to gain hands-on experience with the rAIson platform and to model and implement their own application ideas.
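To give a flavour of the arbitration idea described above, the following is a minimal Python sketch (not the Gorgias or rAIson API): object-level rules from different stakeholders support conflicting options, and a meta-policy expressed as a simple priority ordering over stakeholders selects among the applicable rules instead of enumerating every pairwise option conflict. All rule names, stakeholders, options, and context flags are hypothetical and chosen only to mirror the smart-building scenario mentioned in the abstract.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical object-level rule: a stakeholder's policy supports an option
# whenever its condition holds in the current context.
@dataclass(frozen=True)
class Rule:
    name: str                       # rule label
    stakeholder: str                # whose policy the rule belongs to
    option: str                     # decision option the rule supports
    condition: Callable[[dict], bool]  # applicability test on the context

def applicable(rules, context):
    """Return the rules whose conditions hold in the given context."""
    return [r for r in rules if r.condition(context)]

def decide(rules, context, stakeholder_priority):
    """Pick the option supported by the highest-priority applicable rule.

    The arbitration meta-policy is just an ordering over stakeholders,
    so conflicts between options never have to be resolved one by one.
    """
    candidates = applicable(rules, context)
    if not candidates:
        return None
    best = min(candidates,
               key=lambda r: stakeholder_priority.index(r.stakeholder))
    return best.option

# Illustrative smart-building energy-management policies.
rules = [
    Rule("r_user", "occupant", "set_temp_22",
         lambda ctx: ctx["occupied"]),
    Rule("r_manager", "manager", "set_temp_19",
         lambda ctx: ctx["peak_tariff"]),
    Rule("r_safety", "safety", "set_temp_16_min",
         lambda ctx: ctx["frost_risk"]),
]

# Arbitration meta-policy: safety overrides the manager, who overrides occupants.
priority = ["safety", "manager", "occupant"]

context = {"occupied": True, "peak_tariff": True, "frost_risk": False}
print(decide(rules, context, priority))  # -> "set_temp_19"
```

In the actual lecture material, such priorities are expressed declaratively as LPP preference rules in Gorgias rather than as a Python list; the sketch only illustrates why a meta-level ordering over stakeholders keeps the conflict-resolution effort low.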