These are the instructions for the groupwork at TRUST-AI 2025. The groupwork will be conducted right after lunch on Day 1 and run for 1.5 hours. Participants break out into groups of 6-8 people after initial plenary instructions.
Step 1. Welcome: Introduce yourselves and your research interests to each other. Select a group leader and a presenter (can be the same person) – 10 minutes
Step 2. Scoping: Select one of the seven suggested scopes for your groupwork (see the scopes listed below). Aim for a scope that matches the group members’ interests – 10 minutes
Step 3. Individual brainstorming: Individually brainstorm key future research challenges for trustworthy AI within the chosen scope – 5 minutes
Step 4. Sharing and prioritizing: Share and discuss the identified challenges. Prioritize your group’s top three challenges – 35 minutes
Step 5. Addressing: Identify one or more research projects to address the prioritized challenges – 15 minutes
Presentation (Day 2): Present your outcomes to the plenary – (a) scope, (b) top three prioritized challenges, (c) research project(s) to address them – 2-3 minutes per group. The presentation (1-2 pages) can be sent to the workshop email address trustai2025@easychair.org
Suggested scopes:
Trustworthy AI: Broad scope covering the entire field. The default scope if none of the others fit.
Human-centered Trustworthy AI: Incorporating user perspectives and values in the development of trustworthy AI. Addressing conflicting priorities between different stakeholders.
Technological Advancements for Trustworthy AI: Emerging technologies to support the development, deployment, and verification of trustworthy AI.
Assessment of Trustworthy AI: Methods, tools, and best practices for trustworthiness assessment and validation.
The Ethical and Legal Basis for Trustworthy AI: The ethical and legal foundations of trustworthy AI, its principles and values, and how to comply with them.
Risk Management for Trustworthy AI: Frameworks and approaches for identifying, assessing, and mitigating risks associated with AI trustworthiness.
Trustworthiness Optimization throughout the AI Lifecycle: Ensuring and maintaining trustworthiness throughout the phases of AI development and deployment.