Workshop 1
Mapping the Trustworthy AI Landscape
When: 3rd March 2026 11:30 - 16:00
Where: Pioneer Center for AI, Øster Voldgade 3, 1350 Copenhagen
Focus: Establish a baseline understanding of trustworthy AI research across Denmark and the Nordic countries, leading to a living repository of Trustworthy AI expertise in Denmark.
Purpose: Knowledge sharing and initial networking. Participants will present 10-minute lightning talks on their trustworthy AI research, building a comprehensive map of expertise. Breakout sessions will identify common challenges and complementary strengths across the different dimensions of trustworthiness.
Format: 1-day workshop with research presentations, poster sessions, and structured networking activities.
Outcome: A living repository of Trustworthy AI expertise in Denmark.
Preliminary Program:
11.30-12.30 Lunch and welcome (P1, Øster Voldgade 3, 7, 1350 Indre By)
12.30-14.00 Lightning talks (Natural History Museum, Øster Voldgade 5, 7, 1350 Indre By)
14.00-14.15 Break (coffee served at P1)
14.15-15.15 Break-out discussions (P1)
15.15-15.30 Break
15.30-16.00 Wrap up break-out discussions (Natural History Museum)
Lightning Talks [slides]:
Rune Nyrup, Aarhus University, Trustworthy AI: Four challenges from ethics and philosophy
Nirupam Gupta, University of Copenhagen, Byzantine-Generals in the Realm of Machine Learning
Boris Düdder, University of Copenhagen, Assessing Trustworthy AI in Healthcare
Lenka Tetková, DTU, Representational alignment and concept-based explainability: towards trustworthy latent spaces
Andres Masegosa, Aalborg University, Probabilistic Geospatial Machine Learning as a Blueprint for Trustworthy AI in High-Stakes Domains
Georgina Vickery, DTU, Challenges for AI in marine science
Tess Thorsen, Independent researcher and consultant, Human Rights informed approaches to Trustworthy AI
Mattes Ruckdeschel, ITU, Who owns the AI future?
Theresia Veronika Rampisela, University of Copenhagen, Can we trust recommender system fairness evaluation?
Breakout Discussions Summary
The participants broke into groups for structured discussions around three themes: the Danish Trustworthy AI landscape, definitions of trustworthiness, and challenges and future directions.
The Danish TrAI Landscape. Participants identified a rich and distributed ecosystem of trustworthy AI expertise across Denmark. Key academic groups were mapped, and industry and public-sector engagement was also highlighted.
Definitions and Reference Points. A central theme was the difficulty of arriving at a shared definition of trustworthy AI. Participants noted that each ML subfield tends to use its own definitions, and that trustworthiness spans both technical dimensions (explainability, robustness, uncertainty quantification, privacy) and social and organisational dimensions (accountability, fairness, user experience). Key reference points identified include the EU AI Act and HLEG guidelines, the FAIR data principles, and technical work on causal explainability and out-of-distribution detection. The discussion explored whether a closed, unified definition is desirable or whether a more open-textured framing, which allows researchers and practitioners to connect the concept to their existing tools and domains, might be more productive. Several participants emphasised that trust is built over time and in practice, and that tensions and trade-offs between trustworthiness dimensions often need to be negotiated in context.
Challenges and Future Directions. Key challenges identified include siloing (researchers optimising for a single metric in isolation), the difficulty of measuring and benchmarking multiple trustworthiness dimensions simultaneously, the context-sensitivity of fairness, and a deeper underlying problem: the lack of a comprehensive theory of machine learning, which participants suggested may lie at the core of many trustworthiness challenges. Promising future directions discussed include large-scale, multi-dimensional trustworthiness benchmarking, more abstract and transferable approaches to fairness evaluation, and stronger connections between implementation science and in-practice data science.