Programme

The schedule below is tentative.


08:45 - Doors open

09:15 - Opening session

09:30 - Toon Calders (University of Antwerp)

How to be fair? A study of label and selection bias
It is widely accepted that biased data leads to biased and thus potentially unfair models. Therefore, several measures for bias in data and model predictions have been proposed, as well as bias mitigation techniques whose aim is to learn models that are fair by design. Despite the myriad of mitigation techniques developed in the past decade, however, it is still poorly understood which methods work under what circumstances. We propose to address this problem by establishing relationships between the type of bias and the effectiveness of a mitigation technique, where we categorize the mitigation techniques by the bias measure they optimize. We illustrate this principle for label and selection bias on the one hand, and demographic parity and "We're All Equal" on the other. Our theoretical analysis allows us to explain observations by other researchers regarding the accuracy-fairness tradeoff, and we also show that there are situations where minimizing fairness measures does not result in the fairest possible distribution.
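
For reference, demographic parity is a standard group-fairness criterion; the sketch below states the textbook definition (this is the standard formulation from the fairness literature, not material from the talk itself, and the notation for the prediction and the sensitive attribute is our own). "We're All Equal" refers to the worldview that all groups are assumed to have the same underlying base rate, and measures built on it compare models against that assumption.

```latex
% Standard definition of demographic parity, stated for reference.
% \hat{Y} is the model's prediction, A a sensitive attribute (e.g. gender).
\[
  P(\hat{Y} = 1 \mid A = a) \;=\; P(\hat{Y} = 1 \mid A = b)
  \quad \text{for all groups } a, b,
\]
% often reported as the gap between two groups:
\[
  \Delta_{\mathrm{DP}} \;=\; \bigl|\, P(\hat{Y} = 1 \mid A = 0) - P(\hat{Y} = 1 \mid A = 1) \,\bigr|.
\]
```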

10:00 - Katrien Verbert (KU Leuven)

Human-centred Explainable AI: bridging the gap between explainable AI and real-world impact
Despite the rich set of XAI algorithms and off-the-shelf toolkits for AI developers, successful examples of XAI are still relatively scarce in real-world AI applications. Prominent researchers have highlighted the disconnect between technical XAI approaches and supporting users’ end goals in usage contexts, and the need for user-centred design efforts that fill this gap. This work is increasingly researched under the umbrella of human-centred XAI, where tools are designed together with stakeholders, ensuring broader understanding and participation. These human-centred XAI approaches are gaining increased research attention to enable the adoption of AI in real-life settings. In this talk, I will present our work on human-centred XAI methods that are tailored to the needs of non-expert AI users in different application areas, including healthcare, human resources and learning analytics. In addition, I will present the results of several user studies that investigate how such explanations interact with personal characteristics such as expertise, need for cognition and visual working memory.

10:30 - David Martens (University of Antwerp)

What If? Explaining AI Decisions with Counterfactual Explanations
The inability of many “black box” prediction models to explain the decisions they make has been widely acknowledged. Interestingly, the solution turns out to be the introduction of yet more AI algorithms that explain the decisions made by complex AI models. Explaining the predictions of such models has become an important ethical component and has gained increased attention from the AI research community and even legislators, resulting in a new field termed “Explainable AI” (XAI). Counterfactual (CF) explanations have emerged as an important paradigm in this field, and provide evidence on how a (1) prediction model came to a (2) decision for a (3) specific data instance. In this talk, I’ll first provide an introduction to the counterfactual explanation and compare it to other popular XAI approaches, such as SHAP. Next, some counterfactual-generating techniques and example applications are discussed, demonstrating their value in a range of areas. Finally, the need for storytelling in XAI is highlighted, and the combination of LLMs with XAI is introduced.
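
To make the idea concrete, here is a minimal, illustrative sketch of a counterfactual search: brute-force perturbation of one feature at a time until a toy model's prediction flips. The model, data, and greedy search strategy are our own assumptions for illustration, not the method presented in the talk; practical CF generators typically solve an optimisation problem instead.

```python
# Toy counterfactual search: find the smallest single-feature change that
# flips a classifier's prediction. Illustrative sketch only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple stand-in "black box" on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.1, max_steps=100):
    """Greedily perturb one feature at a time until the prediction flips."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None
    for j in range(len(x)):              # try each feature separately
        for direction in (+1.0, -1.0):   # increase or decrease it
            cand = x.copy()
            for _ in range(max_steps):
                cand[j] += direction * step
                if model.predict(cand.reshape(1, -1))[0] != original:
                    cost = abs(cand[j] - x[j])
                    if best is None or cost < best[0]:
                        best = (cost, j, cand.copy())
                    break
    return best  # (distance, feature index, counterfactual) or None

x0 = X[0]
result = counterfactual(x0, model)
if result is not None:
    cost, j, x_cf = result
    print(f"Changing feature {j} by {x_cf[j] - x0[j]:+.2f} flips the prediction.")
```

The output reads as a "what if" statement: had this one feature been slightly higher (or lower), the model's decision would have been different, which is exactly the evidence a counterfactual explanation provides.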

11:00 - Coffee break

11:15 - Felicity Reddel (The Future Society)

Proactive Strategies for AI Governance
In an era where AI is evolving at an unprecedented rate, the challenge of governing general-purpose AI and foundation models becomes increasingly complex. This talk delves into the intricacies of creating governance frameworks that are not only robust but also flexible enough to remain relevant in the rapidly changing AI landscape. We explore a risk-based approach that addresses both downstream and upstream challenges. The conversation revolves around integrating principles of safety, fairness, and compliance into the design phase of AI systems, advocating for a proactive rather than reactive approach to AI governance. This 'by design' approach proves more efficient and effective, reducing potential pitfalls and ensuring that AI systems are better aligned with societal values and ethical standards.

11:45 - Jelle Hoedemaekers (Agoria)

The AI Act - An Industry Perspective
In this presentation, we explore the European AI Act from the Belgian technology industry's standpoint, emphasizing how this landmark regulation fosters innovation, ethical AI development, and trust in AI technologies. Alongside these benefits, we critically examine the challenges faced by the industry, including compliance costs, alignment with existing systems, and the impact on innovation and global competitiveness. Now that we have a political agreement, who must do what, and by when, to be compliant? We will end by looking at the next steps and what industry and academia can already do towards compliance.

12:15 - Lunch break

13:30 - Panel discussion with all speakers

14:30 - End of symposium

15:30 - Public PhD Defense of Maarten Buyl (separate registration)

17:30 - Expected end of defense