The workshop invites contributions on, but not limited to, the following topics:
Foundations of RE for Responsible AI: Frameworks and methodologies for capturing requirements specific to responsible AI; Ethical and fairness considerations in requirements elicitation for AI systems; Requirements for data quality and challenges of bias mitigation in AI training.
Socio-Technical Aspects of Responsible AI: Integrating user needs, values, and preferences into AI requirements; Challenges in balancing human and machine decision-making in AI systems; Interactive tools for RE in AI; Requirements for adaptive and autonomous AI systems.
Trustworthiness and Governance for Responsible AI: RE for AI safety, robustness, and reliability; Regulatory compliance and legal aspects of AI; Transparency and accountability requirements; Managing risks and uncertainty in AI requirements; Sustainable AI and its impact on RE.
Interdisciplinary Perspectives on Responsible AI: Stakeholder engagement and participatory design for AI requirements; Collaboration between requirements engineers, ethicists, and domain experts to incorporate cognitive, psychological, social, and cultural insights into AI requirements.
We welcome the following types of submissions:
Empirical studies providing research results (up to 6 pages + 1 for references).
Experience reports providing insights into existing RE practices or novel approaches implemented in industry (up to 6 pages + 1 for references).
Design research providing solutions with early validation (up to 6 pages + 1 for references).
Extended abstracts for posters/lightning talks (2–3 pages). This category includes problem statements explaining relevant industry problems and vision statements outlining exploratory ideas for solutions.