The REACH-AI’26 Workshop is held in conjunction with the 34th IEEE International Requirements Engineering Conference.
The goal of the Second International Workshop on Requirements Engineering for Accountable and Conscious Human-centered AI (REACH-AI 2026) is to continue and extend the interdisciplinary discourse of researchers and practitioners on the impacts of AI on society from ethical, responsible, and accountable viewpoints. REACH-AI 2026 will focus on key topics such as identifying and addressing ethical considerations during the requirements development phase, integrating fairness, accountability, and inclusiveness into system design, ensuring explainability to gain user trust through transparent documentation, interfaces, and feedback loops, and considering means for overcoming user cognitive biases and over-reliance on AI outcomes. This involves (a) providing transparency, fairness, and accountability of the solutions; (b) overcoming concerns related to biases in AI models and the explainability of AI decisions; (c) ensuring data privacy and security; and (d) establishing robust mechanisms for auditing and monitoring systems post-deployment to address unintended consequences or evolving risks.
The workshop seeks to give a stage to research-oriented conversations around these topics, with an emphasis on approaches for requirements elicitation, analysis, specification, and evolution. The discussions will focus on establishing frameworks for (a) aligning AI capabilities with stakeholder values; (b) analyzing trade-offs between performance, explainability, and ethical constraints; and (c) interdisciplinary collaborations for effectively managing and mitigating AI risks. Participants will gain actionable insights on embedding responsible AI principles into the development lifecycle of AI systems, fostering innovation and user trust.
Professor Didar Zowghi is a Senior Principal Research Scientist at CSIRO’s Data61 (the digital and data specialist arm of Australia’s national science agency). Since joining CSIRO in 2022, she has established and led pioneering research focused on the challenges and opportunities of embedding Diversity and Inclusion (D&I) into the development and deployment of Responsible Artificial Intelligence. She has led the development of a ground-breaking framework and guidelines for operationalising D&I in AI, which were incorporated into the Australian National Framework for the Assurance of AI in Government.
Professor Zowghi holds the title of Emeritus Professor at the University of Technology Sydney (UTS), where she served for 22 years as a full professor of software engineering (SE). Her academic research spans decades of contributions to SE, with a particular emphasis on Requirements Engineering. She is internationally recognised for her expertise in evidence-based research and human-centred design methodologies. She has supervised numerous Master’s and PhD students both in Australia and abroad. During her UTS tenure, she held many senior leadership roles, including Deputy Dean of the Graduate Research School, Director of the Centre for Human-Centred Technology Design, Head of Software Engineering, and Associate Dean for Research. Before academia, she worked in the software industry in the UK and Australia as a software engineer, analyst, consultant, and project manager.
Professor Zowghi has received several prestigious accolades, including the IEEE Lifetime Service Award (2019) for leadership in Requirements Engineering and the IEEE Computer Society Distinguished Educator Award (2022). She has served for many years on the editorial boards of numerous SE and RE journals and is currently an Associate Editor of IEEE Software. She has published over 300 research articles in leading journals and conferences, and her collaborative work spans co-authorship with more than 150 researchers across 40+ countries. She has delivered numerous keynotes and industry seminars on SE, RE, Responsible AI, and Diversity & Inclusion in AI.