13th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems - IntRS'26
Minneapolis, Minnesota, USA, September 27 - October 2, 2026
Paper submission deadline: July 20, 2026
Author notification: August 14, 2026
Camera-ready version: August 28, 2026
Submit your paper here.
Topics of interest include, but are not limited to:
User Interfaces
Visual interfaces
Explanation interfaces
Ethical issues (Fairness and Biases) in explainable interfaces
Collaborative multi-user interfaces (e.g., for group decision-making)
Spoken and natural language interfaces
Trust-aware interfaces
Social interfaces
Context-aware interfaces
Ubiquitous and mobile interfaces
Conversational interfaces
Example- and demonstration-based interfaces
New approaches to designing interfaces for recommender systems
UIs counteracting decision manipulation
User interfaces and cognitive overload
Psychological aspects of privacy-aware recommendation interfaces
Generative AI for recommender system interfaces
Interaction, user modeling, and decision-making
Cognitive Modeling for Recommender Systems
Symbiotic recommender systems
Explainability of decision-making models
User-adaptive XAI systems
Controllability, transparency, and scrutability of decision-making models
Decision theories and biases (e.g., priming, framing, and decoy effects)
Detection and avoidance of decision biases (e.g., in item presentations)
Preference elicitation and construction (e.g., eye tracking for automated preference elicitation)
The role of emotions in recommender systems (e.g., emotion-aware recommendation)
Trust-inspiring UIs (e.g., explanation-aware RSs)
Argumentation & persuasive recommendation (e.g., aspects of nudging in RSs)
Cultural differences (e.g., culture-aware recommendation)
Mechanisms for effective group decision-making (e.g., group recommendation heuristics)
Decision theories for effective group decision-making (e.g., hidden profile management)
Voting Advice Applications
Human-LLM interaction, prompting, and chaining
Evaluation
User-centric evaluation for Symbiotic AI interfaces
Application descriptions and related case studies in Human-Centered Recommender Systems
Benchmarking platforms for Human-Centered Recommender Systems
Empirical studies and evaluations of new interfaces
Empirical studies and evaluations of new interaction designs
Evaluation methods and metrics (e.g., evaluation questionnaire design)
Psychological aspects in user-centric evaluation
Case studies
IntRS’26 welcomes submissions that fall into the following three categories:
1. Research Papers: should present original work that has not been previously published, is not under review, and will not be submitted elsewhere during the review process. Long papers (10 or more "standard" pages, including references) should make a clear and novel contribution and be positioned with respect to the state of the art. Short papers (5-9 "standard" pages, including references) may present early-stage research, promising ideas, negative results, or thought-provoking perspectives that can stimulate discussion and future work.
2. Reproducibility and Resource Papers: this category includes submissions focused on tools, datasets, benchmarks, and reproducibility studies, including newly developed resources, significant updates to existing tools, or systematic evaluations of published work. Long papers (10 or more "standard" pages, including references) should present substantial contributions, such as comprehensive tools, large-scale datasets, or in-depth reproducibility analyses. Short papers (5-9 "standard" pages, including references) may describe smaller-scale resources, focused tool descriptions, or preliminary reproducibility efforts of interest to the community.
3. Position Papers: are intended for short, critical, or visionary contributions that highlight future directions, emerging challenges, or reflective perspectives on the field. Position papers may be up to 4 pages, including references, and should aim to spark discussion and inspire future research, even in the absence of experimental results.
All submissions must be written in English, formatted as PDF files, and follow the CEUR-WS single-column conference format, available as a compressed archive and an Overleaf template.
Submissions will undergo a double-blind peer review process. Review criteria include relevance to the workshop, originality, significance of the contribution, technical soundness, clarity of presentation, quality of references, and reproducibility. To support reproducibility, authors are encouraged to share code and supplementary material via an anonymous repository.
Submissions that are not properly anonymized, fail to follow the required formatting, or disregard these guidelines may be rejected without review.
Accepted long and short papers will be published in the CEUR Workshop Proceedings and presented in the main workshop program. Position papers will not be included in the published proceedings, but a selection of these may be invited for oral presentations.
Please note that at least one author of each accepted paper must register for and attend the workshop in order to present the work. We expect authors, reviewers, and organizers to adhere to the ACM Conflict of Interest Policy and the ACM Code of Ethics and Professional Conduct.
Organizers
Peter Brusilovsky - peterb@pitt.edu, School of Information Sciences, University of Pittsburgh, USA
Marco de Gemmis - marco.degemmis@uniba.it, Dept. of Computer Science, University of Bari Aldo Moro, Italy
Alexander Felfernig - alexander.felfernig@ist.tugraz.at, Software Engineering and AI, Graz University of Technology, Austria
Pasquale Lops - pasquale.lops@uniba.it, Dept. of Computer Science, University of Bari Aldo Moro, Italy
Marco Polignano - marco.polignano@uniba.it, Dept. of Computer Science, University of Bari Aldo Moro, Italy
Giovanni Semeraro - giovanni.semeraro@uniba.it, Dept. of Computer Science, University of Bari Aldo Moro, Italy
Martijn C. Willemsen - M.C.Willemsen@tue.nl, Eindhoven University of Technology, The Netherlands