RE4AI 2026
Co-located with REFSQ 2026
March 23-26, Poznań, Poland.
These days, AI is used as a component in software and hardware systems, ranging from everyday objects such as cars, household appliances and wearable devices, to unmanned military vehicles and weapons, to banking and consumer apps. Moreover, AI techniques such as machine learning have been used for several years by big tech players, as well as by startups that, for instance, provide business intelligence services to insurance companies. The rise of generative AI means that large language models and other foundation models can be, and are being, used as part of complex software systems, either through open-source or open-weight options, or via services and APIs. This opens up AI integration into software across varied domains, and adoption is increasing at both the individual and the company level. Such generative models also enable agentic AI systems: complex systems whose design can benefit from explicit consideration of requirements.
For several years now (e.g. the 2015 open letter and its accompanying document on research priorities), AI researchers have voiced their concerns and issued recommendations for the responsible use of data, the employment of discrimination-free algorithms, the alignment of AI-based systems and technologies with human values, and transparency. Their main aim is to raise awareness among policy makers. In response, in 2018 Europe defined three pillars stating that, regarding AI systems, European countries should: stay ahead in public and private technological development; support education to prepare for emerging socio-economic changes; and ensure an appropriate ethical and legal framework. In 2019, Europe developed its approach further in what are known as the guidelines for Trustworthy AI, which state that AI systems should be lawful, ethical and robust. Building on this work, the EU AI Act was adopted in May 2024; it governs how AI can be used as part of software systems, depending on the criticality of the domain. Guidelines for the responsible use of AI technologies have also been proposed, for instance regarding the use of generative AI in research.
It is hard to imagine that AI systems will achieve the aforementioned attributes without a strong emphasis on capturing and maintaining “the right” requirements, and on validating that the system properly meets them. As the RE community is aware, this entails a myriad of methods and tools covering all RE activities, including requirements analysis, documentation and evolution. Nevertheless, many AI systems are developed today with little focus on the early development stages. In other words, much effort goes into combining different algorithms and heuristics, without a more abstract view of what the system should deliver. Lacking RE support, the resulting system may be far from what was intended, leading to failed projects and systems that go rogue, which may ultimately harm individuals and society.
The main goals of the RE4AI Workshop may be summarized as follows:
continuing to raise awareness in the RE community about the importance of RE in realizing Trustworthy AI systems;
allowing for state-of-the-art RE for AI research to be presented and disseminated;
bringing together, in the same room, people using AI in practice and RE experts in academia to discuss pressing issues, such as how RE can contribute to preventing AI systems from failing or going rogue;
supporting new ideas and inspiration in adapting RE research to new AI technologies, e.g., the prevalence of foundation models or the rise of agentic AI;
motivating cross-fertilization between AI and RE research.