Keynote Talks

Michael Felderer, University of Innsbruck, Austria

Michael is an associate professor at the Department of Computer Science at the University of Innsbruck, Austria, and a guest professor at the Department of Software Engineering at the Blekinge Institute of Technology, Sweden. His fields of expertise and interest include software quality, testing, software and security processes, risk management, data-driven engineering, software analytics and measurement, requirements engineering, model-based software engineering, software engineering education, and empirical research methodology in software and security engineering.

Title of the Talk:

Natural Language Processing in System Verification: Current Approaches and Future Directions

Abstract:

Natural language test cases are essential for the verification of software systems, products, and services, as the intended behavior can neither be fully formalized nor be tested thoroughly in an automated way. Furthermore, comprehensive natural language system specifications or norms are applied to derive test cases, and the number and complexity of test scenarios are ever-increasing. Therefore, natural language processing plays a central role in keeping system verification effective and efficient. However, the potential of natural language processing for system verification has not been fully exploited so far. In this talk, we first give an overview of the current state of natural language processing in system verification. Then, we present our recent results on applying natural language processing to detect dependencies between system test cases, which enables an enormous increase in the efficiency of system testing. Finally, we sketch future directions of research on the application of natural language processing in system verification, especially with respect to AI-enabled systems in regulated environments.

Fabiano Dalpiaz, Utrecht University, Netherlands

Fabiano is an assistant professor in the Department of Information and Computing Sciences at Utrecht University in the Netherlands. His research is partially funded by research funding agencies; for example, the PACAS project (Participatory Architectural Change Management in ATM Systems) was funded by the European Commission in 2016-2018. He was the organisation chair for the REFSQ 2018 conference. He is on the editorial board and is the social media chair of the Requirements Engineering Journal, and he serves on the steering committee of the AIRE workshop. He regularly serves on the program committees of international conferences such as RE, CAiSE, AAMAS, REFSQ, ER, and MODELS.

Title of the Talk:

NLP for Requirements Engineering: Good Enough?

Abstract:

Requirements Engineering is a natural-language-heavy phase of Software Engineering. The prevalent notation for expressing requirements is still text; consequently, the research community has proposed numerous NLP-powered tools for analyzing requirements-relevant information. Building on the experience gained within the Requirements Engineering Lab (RE-Lab), I am going to discuss the notion of quality when it comes to NLP tools for requirements engineering (NLP4RE tools). How do we measure quality? How have we measured quality so far? What does “good enough” mean? Who should determine whether an NLP4RE tool performs well? While answering these questions, I will focus on the NLP tools for user stories that the members of the RE-Lab have developed, while keeping a keen eye on tools proposed by other research groups. The ultimate goal of the talk is to provide a “where do we stand, where do we go” perspective on NLP4RE research, its results, and its impact.