The workshop invites contributions related to, but not limited to, the following topics:
Foundations of RE for Responsible AI: Frameworks and methodologies for responsible AI requirements; ethical and fairness considerations in RE for AI systems; modeling requirements for explainable and accountable AI; data requirements and the challenges of bias in AI training data.
Human-centered and Socio-Technical Aspects: Specifying user needs and values in AI requirements; balancing human and machine decision-making in AI systems; developing interactive tools for RE for AI; analyzing requirements for adaptive and autonomous AI systems.
Trustworthiness and Governance: RE for AI safety, robustness, and reliability; regulatory and legal aspects in AI transparency and accountability; risk and uncertainty in AI requirements.
Interdisciplinary Perspectives: Engaging stakeholders, including engineers, ethicists, and domain experts, in participatory design; incorporating cognitive, psychological, social, and cultural insights into AI requirements.
Case Studies and Emerging Trends: Lessons learned from RE for AI projects; domain-specific requirements for responsible AI; applications of RE to generative AI systems.
We welcome the following types of submissions:
Empirical studies providing research results (up to 6 pages + 1 for references).
Experience reports providing insights on existing RE practices or novel approaches implemented in industry (up to 6 pages + 1 for references).
Design research providing solutions with an early validation (up to 6 pages + 1 for references).
Extended abstracts for posters/lightning talks (2–3 pages). This category includes problem statements explaining relevant industry problems and vision statements describing exploratory ideas for solutions.