In the TRUST-AI groupwork, five groups explored future research challenges of trustworthy AI. The groups included participants from various disciplines and with a range of research interests related to trustworthy AI. During a 1.5-hour process – outdoors and in pleasant weather – each group selected its preferred scope of exploration, identified key future research challenges, and proposed potential research projects or initiatives in response to those challenges. (A detailed description of the groupwork instructions is provided below.)
The groupwork outcomes were presented in a dedicated session on the second day of the workshop. In the following, we briefly summarize the theme of each group. For continued exchange and discussion, workshop participants may get in touch with the contact persons for each group.
The group discussed the challenges of bridging the gap between technical and legal perspectives in AI development and governance. In particular, the group asked:
· How to establish common ground between technologists and legal experts when discussing trustworthy AI?
· How to address regulatory compliance as a technical, organizational, and political challenge – not just a legal one?
· How to establish the AI literacy needed by individuals involved in or affected by regulatory frameworks?
The group concluded that there is a need to establish a shared language of trustworthy AI that allows all stakeholders – developers, users, regulators – to communicate effectively.
This group explored how to make assessment and risk management of trustworthy AI practical and actionable in real-world settings. Key identified research challenges:
· How to construct software for trustworthy AI assessment that will actually be deployed and preferred by users?
· For a risk management system, how to model how risks propagate in AI systems, and how to present this to the user?
· How should LLM technologies be used in an AI system to assess trustworthiness, given the inherent problem of hallucinations?
To address these challenges, the group proposed to establish a framework for frequent trustworthiness assessment of the AI system. The group also discussed including an iterative procedure in which feedback from users is explicitly used to improve the AI system, as well as specific measures for verification and explanation.
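As a purely illustrative sketch (not part of the group's output), the following Python snippet outlines what such a recurring trustworthiness assessment loop with explicit user feedback might look like. The dimensions, thresholds, and feedback stub are hypothetical placeholders.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical trustworthiness dimensions and thresholds; a real deployment
# would derive these from an agreed assessment framework.
THRESHOLDS = {"robustness": 0.8, "transparency": 0.7, "fairness": 0.75}

@dataclass
class AssessmentRecord:
    scores: dict                                    # dimension -> score in [0, 1]
    user_feedback: list = field(default_factory=list)

def assess(system_metrics: dict) -> AssessmentRecord:
    """One assessment round: record the measured metrics."""
    return AssessmentRecord(scores=dict(system_metrics))

def flag_issues(record: AssessmentRecord) -> list:
    """Return dimensions whose scores fall below the agreed thresholds."""
    return [d for d, s in record.scores.items() if s < THRESHOLDS.get(d, 0.0)]

def incorporate_feedback(record: AssessmentRecord, feedback: list) -> None:
    """Attach user feedback so it can drive the next improvement iteration."""
    record.user_feedback.extend(feedback)

if __name__ == "__main__":
    history = []
    # Simulated periodic assessments (e.g. per release or per month).
    for metrics in [
        {"robustness": 0.82, "transparency": 0.65, "fairness": 0.90},
        {"robustness": 0.85, "transparency": 0.72, "fairness": 0.88},
    ]:
        record = assess(metrics)
        incorporate_feedback(record, ["explanation unclear for case #12"])  # stub feedback
        history.append(record)
        print("Issues this round:", flag_issues(record))
    print("Mean robustness over rounds:",
          mean(r.scores["robustness"] for r in history))
```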
With backgrounds in psychology and information security, this group emphasized the human-centred dimension of trust in AI systems. The group noted that current approaches may neglect the user’s perspective and advocated for a principled framework that explicitly incorporates the human context in trust assessment.
In response to this challenge, the group proposed the future research project "The Kaleidoscope of Trustworthy AI". In this project, real-world examples are collected to illustrate the application of the seven pillars of trustworthy AI in human-centred scenarios, thereby sharing and building competency on human-centred assessment. The examples should demonstrate the application of key principles within the pillars of trustworthy AI and hence serve as concrete showcases of how these manifest across diverse contexts.
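To illustrate what such a catalogue of showcases might look like in practice, the following minimal Python sketch organizes example cases by pillar. The pillar names are taken from the EU Ethics Guidelines for Trustworthy AI on the assumption that these are the seven pillars referred to, and the example entry itself is hypothetical.

```python
from dataclasses import dataclass

# Assumed pillar names, following the EU Ethics Guidelines for Trustworthy AI;
# the group may have a different enumeration in mind.
PILLARS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]

@dataclass
class ShowcaseExample:
    title: str
    context: str    # the human-centred scenario the example comes from
    pillars: list   # which pillars the example illustrates
    lesson: str     # the key take-away for practitioners

# Hypothetical catalogue entry for illustration only.
CATALOG = [
    ShowcaseExample(
        title="Clinician-facing triage support",
        context="healthcare",
        pillars=["human agency and oversight", "transparency"],
        lesson="Keep the clinician in the loop and explain each recommendation.",
    ),
]

def examples_for(pillar: str) -> list:
    """Look up catalogued examples that illustrate a given pillar."""
    return [e for e in CATALOG if pillar in e.pillars]

if __name__ == "__main__":
    for example in examples_for("transparency"):
        print(example.title, "->", example.lesson)
```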
This group approached trustworthy AI assessment from a holistic multidisciplinary perspective. Specifically, they identified three key directions for assessing trustworthiness of AI that focus on governance-, understanding-, and reliability-oriented features. The group further highlighted the need for a deeper understanding of companies’ uptake and use of trustworthy AI assessment, as well as the need for practical experimentation with approaches to such assessment.
In response to these needs, the group proposed a research sandbox for trustworthy AI assessment: a controlled academic environment that simulates end-to-end trust assessments of (virtual) companies. The objective of such a sandbox would be to help explore how companies might adopt and operationalize trustworthy AI assessment frameworks in practice.
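As an illustration of the sandbox idea (not a design from the group), the minimal Python sketch below simulates a handful of virtual companies and runs a simple end-to-end assessment against an assumed practice checklist; all company attributes and checklist items are invented for the example.

```python
import random
from dataclasses import dataclass

# Illustrative checklist of practices a trust assessment might look for;
# a real sandbox would use an established assessment framework instead.
CHECKLIST = [
    "documented data governance process",
    "model cards for deployed models",
    "human oversight procedure",
    "incident response plan for AI failures",
]

@dataclass
class VirtualCompany:
    name: str
    practices: set  # checklist items the company already has in place

def end_to_end_assessment(company: VirtualCompany) -> dict:
    """Assess one virtual company: how much of the checklist does it cover?"""
    covered = {item: (item in company.practices) for item in CHECKLIST}
    coverage = sum(covered.values()) / len(CHECKLIST)
    return {"company": company.name, "coverage": coverage, "details": covered}

if __name__ == "__main__":
    random.seed(0)
    # Simulate a small population of virtual companies with random practice uptake.
    companies = [
        VirtualCompany(f"company-{i}",
                       {item for item in CHECKLIST if random.random() > 0.5})
        for i in range(3)
    ]
    for report in map(end_to_end_assessment, companies):
        print(report["company"], f"coverage={report['coverage']:.2f}")
```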
This group examined the risks of anthropomorphizing AI systems – specifically, trust and (over)reliance issues associated with attributing human-like qualities to AI. With a less human-like framing of AI, users and policymakers could potentially develop more calibrated, responsible, and trustworthy relationships with AI technologies. However, such a reduction in human-like framing may also reduce user interest and – for some use cases – be at odds with fulfilling the intended AI function.
The group proposed a research project on de-anthropomorphizing AI to maintain realistic expectations of its capabilities, with use cases in domains such as healthcare, mental health, security, and migration.
Together, the discussions of the five groups outlined a forward-looking research agenda aimed at translating the principles of trustworthy AI into shared methods, governance mechanisms, and responsible practices across disciplines and sectors. We look forward to further discussions of the presented research challenges.
Groupwork instructions
These are the instructions for the groupwork at TRUST-AI 2025. The groupwork will be conducted right after lunch on Day 1 and run for 1.5 hours. Participants break out into groups of 6–8 persons after initial plenary instructions.
Step 1. Welcome: Introduce yourselves and your research interests to each other. Select a group leader and a presenter (can be the same person) – 10 minutes
Step 2. Scoping: Select one of the seven suggested scopes for your groupwork (see scopes listed below). Aim for a scope that matches group members’ interests – 10 minutes
Step 3. Individual brainstorming: Key future research challenges for trustworthy AI – within chosen scope – 5 minutes
Step 4. Sharing and prioritizing: Share and discuss identified challenges. Prioritize top three challenges for your group – 35 minutes
Step 5. Addressing: Identify one or more research projects to address the prioritized challenges – 15 minutes
Presentation (Day 2): Present your outcomes to the plenary – (a) scope, (b) top three prioritized challenges, (c) research project(s) to address them – 2–3 minutes per group. The presentation (1–2 pages) can be sent to the workshop email address trustai2025@easychair.org
--
Suggested scopes for the groupwork:
Trustworthy AI: Broad scope of entire field. Default scope if none of the others fit.
Human-centered Trustworthy AI: Incorporating user perspectives and values in the development of trustworthy AI. Addressing conflicting priorities between different stakeholders.
Technological Advancements for Trustworthy AI: Emerging technologies to support the development, deployment, and verification of trustworthy AI.
Assessment of Trustworthy AI: Methods, tools, and best practices for trustworthiness assessment and validation.
The Ethical and Legal Basis for Trustworthy AI: The ethical and legal foundations of trustworthy AI, its principles and values, and how to comply.
Risk Management for Trustworthy AI: Frameworks and approaches for identifying, assessing, and mitigating risks associated with AI trustworthiness.
Trustworthiness Optimization throughout the AI Lifecycle: Ensuring and maintaining trustworthiness throughout the phases of AI development and deployment.