Barbara Rita Barricelli
Barbara Rita Barricelli is an Associate Professor at the Department of Information Engineering of Università degli Studi di Brescia (Italy). Her research interests include Human-Computer Interaction, Human Work Interaction Design, Socio-technical Design, End-User Development, Usability, and UX, with a particular focus on interaction design for digital twins. She serves as Vice-Chair of IFIP Working Groups 13.11/12.14 on Human-Centered Intelligent Systems and 13.6 on Human Work Interaction Design.
José Creissac Campos
José Creissac Campos is an Associate Professor in the Department of Informatics at the University of Minho (Portugal) and a research coordinator at INESC TEC's High-Assurance Software Laboratory (HASLab). He is a member of IFIP WG 2.7/13.4 on User Interface Engineering, which he chaired from 2016 to 2022, as well as WG 13.11/12.14 on Human-Centered Intelligent Systems. He has been regularly involved in ACM SIGCHI EICS, serving in various roles, including chairing the steering committee from 2020 to 2024, and has contributed to several other conference series over the years, such as INTERACT and IUI. His research sits at the intersection of Software Engineering and Human-Computer Interaction, with a strong focus on formal verification. Currently, his work explores how generative models can support the design and understanding of complex, safety-critical interactive systems.
Kris Luyten
Kris Luyten is a full professor in Computer Science at Hasselt University and deputy managing director of the Expertise Center for Digital Media research center, a Flanders Make core lab. He is a Principal Investigator for the Flanders AI Research Program. His research explores how to improve intelligibility for intelligent software systems, striving to simplify the interaction between humans and technology. Kris has contributed to the HCI field by investigating how intelligibility features can be integrated into, among others, automation [7] and AI-driven applications [6, 27], aiming to enhance user awareness and control over system behaviors.
Sven Mayer
Sven Mayer is an assistant professor of computer science at LMU Munich (Germany). His research sits at the intersection of Human-Computer Interaction and Artificial Intelligence, where he focuses on the next generation of computing systems. He uses artificial intelligence to design, build, and evaluate future human-centered interfaces. In particular, he envisions humans surpassing their individual performance through collaboration with machines. He focuses on areas such as augmented and virtual reality, mobile scenarios, and robotics. He has served as a program committee member at numerous conferences, e.g., ACM CHI, and in various organizing committees, e.g., as General Chair for the International Conference on Hybrid Human-Artificial Intelligence (HHAI'23).
Philippe Palanque
Philippe Palanque is a professor of computer science at the University Toulouse 3 "Paul Sabatier" in Toulouse, France. Since the late 80s, he has been working on the development and application of formal description techniques for interactive systems. For more than 20 years, he has been working on automation and its integration in interactive systems [13]. For instance, he was involved in the research network HALA! (Higher Automation Levels in Aviation), funded by the SESAR programme, which targeted building the future European air traffic management system. The main driver of Philippe's research over the last 20 years has been to address Usability, Safety, and Dependability in a balanced way [5] in order to build trustable safety-critical interactive systems. He has served on the program committees of conferences in these domains, such as SAFECOMP 2023 (42nd Conference on Computer Safety, Reliability and Security), DSN 2014 (44th Conference on Dependable Systems and Networks), and EICS 2023 (15th annual conference on Engineering Interactive Computing Systems).
Emanuele Panizzi
Emanuele Panizzi is an Associate Professor in Computer Science at Sapienza University of Rome, Italy. He directs a research team focusing on human-computer interaction, app design, gamification, and context-aware mobile interaction. His current research uses AI to recognise users' behaviour and context in two application areas: smart parking and earthquake detection. The experimental component of this research involves designing mobile user interfaces with implicit interaction and crowdsensing applications. He served as Program Chair for the Advanced Visual Interfaces conference (AVI 2022) and is currently serving as Associate Chair for ACM AutomotiveUI '23. He teaches HCI and software architecture, and has served as a consultant for major national and international corporations.
Lucio Davide Spano
Lucio Davide Spano has been an Associate Professor at the University of Cagliari since 2019. He has chaired IFIP WG 2.7/13.4 on User Interface Engineering since June 2022 and is Research Delegate of the Extended Committee of SIGCHI-Italy. He has been a member of the Model-Based User Interface WG of the World Wide Web Consortium (W3C). He was Programme Co-Chair for ACM Intelligent User Interfaces in 2020 and an associate editor for a special issue of ACM Transactions on Interactive Intelligent Systems. He is a member of the Senior Programme Committee of high-level international conferences in Human-Computer Interaction (e.g., IUI, INTERACT, EICS, NordiCHI). He is currently investigating the relationship between logical reasoning styles (inductive, abductive, deductive) in eXplainable AI (XAI) interfaces. He has published results covering image, text [1], and time series [4] data types.
Simone Stumpf
Simone Stumpf is Professor of Responsible and Interactive AI in the School of Computing Science at the University of Glasgow, Scotland. She has a long-standing research focus on user interactions with AI systems. Her research includes self-management systems for people living with long-term conditions, teachable object recognisers for people who are blind or have low vision, and AI fairness. Her work has contributed to shaping Explainable AI (XAI) through the Explanatory Debugging approach to interactive machine learning, providing design principles for better human-computer interaction and investigating the effects of greater transparency. The prime aim of her work is to empower all users to use AI systems effectively.