Workshop at The 18th ACM International Conference on Web Search and Data Mining (WSDM)
Recommendation systems play a significant role in shaping online experiences and decision-making, making it crucial to develop systems that are trustworthy and transparent. This workshop will focus on enhancing trust in recommendation systems by addressing various technical and ethical challenges. We will explore fairness, explainability, content safety, algorithm transparency, and the broader societal impact of recommendation systems.
Transparency, Explainability, and Interpretability in ML Models: Techniques to improve users' and developers' understanding of how recommendation systems function.
Security and Privacy Concerns in ML Models: Safeguarding user data and ensuring security within recommendation systems, including addressing privacy concerns.
Moderation and Content Safety: Best practices for moderating content and ensuring that recommendations do not promote harmful or inappropriate material.
Data Poisoning and Adversarial Examples: Exploring vulnerabilities in recommendation systems and strategies to defend against adversarial attacks.
Audit Techniques for Data and ML Models: Best practices for auditing data and algorithms to ensure fairness, safety, and compliance with ethical standards.
Fairness and Exclusion Studies: Exploring benchmarks, datasets, and methods to ensure recommendation systems are inclusive and fair, particularly for underrepresented groups.
Evaluation for Fair Outcomes: Techniques for evaluating recommendations to ensure equitable outcomes for all users, with a focus on protecting marginalized communities.
Robustness, Safety, and Collective Value Alignment: Ensuring the stability and reliability of recommendation systems, with an emphasis on global inclusivity, particularly for developing countries and underrepresented communities.
Algorithm Controllability and Interpretability: Methods to make recommendation algorithms more understandable and controllable by developers and end-users.
Social Good, Participatory AI: Applications of AI for social benefit, including participatory approaches to develop systems that prioritize the needs of diverse user groups, particularly in domains such as healthcare, financial services, and legal systems.
Submission link: https://easychair.org/conferences/?conf=trrs2025
We invite submissions of extended abstracts (1-6 pages) that align with the workshop's themes. We encourage contributions in the following areas:
New theoretical research that advances our understanding of trust in machine learning systems.
Best practices for evaluating the trustworthiness and safety of large-scale machine learning systems.
Practical solutions to address trust issues in deployed systems, including case studies and real-world applications.
Creative applications of new technologies to enhance trust in modern recommendation systems.
Submission guidelines: All submitted papers must
be formatted according to the ACM SIG Proceedings Template (double-column format), with a font size no smaller than 9pt;
be in PDF (make sure that the PDF can be viewed on any platform), and formatted for US Letter size.
Submissions are encouraged to include links or demos as attachments to enhance their presentation. All submissions will undergo a rigorous peer-review process to ensure quality and originality. Submitted work must not be concurrently under review at any other conference, workshop, or journal, and must contain original, unpublished contributions.
Submission Deadline: February 23, 2025 (extended from February 16, 2025)
Notification of Acceptance: March 2, 2025 (extended from February 23, 2025)
Workshop: March 14, 2025
Elisabeth Lex is a professor and PI of the AI for Society Lab at the Institute of Human-Centred Computing at Graz University of Technology. She also serves as Dean of Studies for the interdisciplinary MSc program in Computational Social Systems. Her research focuses on trustworthy AI, machine learning, recommender systems, and computational social science.
Antonio Ferrara is an Assistant Professor at Polytechnic University of Bari, affiliated with the Information Systems Laboratory (SisInfLab).
He focuses mainly on differential privacy and federated learning and its challenges, with a particular interest in their relevance to designing privacy-oriented recommender systems and building knowledge-aware representations of user decision processes. Over the years, he has published his work in national and international journals and presented full papers, demos, and tutorials at international conferences, including SIGIR and RecSys.
Enming is currently a staff software engineer with Google Research. He works on machine learning, computer vision, multimodal LLMs, and information retrieval. He received his PhD in electrical engineering from UCSD.
Mingshen leads the application and innovation of trusted/confidential computing technologies at TikTok. He has also worked with multiple open-source projects toward building trust. Prior to joining TikTok, Mingshen published papers and gave talks on topics at the intersection of privacy and security, program analysis, and programming languages. He also serves on the Technical Advisory Council and Governing Board of the Confidential Computing Consortium.
Madhura Raju is a Staff Product Manager at TikTok Inc., focusing on enhancing feed quality and safety in recommendation systems. She has nearly a decade of experience at the intersection of AI/ML, product management, and engineering. At Microsoft, she scaled the News & Feed recommendation system from 0 to 1, reaching millions of users globally.
Previously, Madhura helped streamline AI infrastructure workflows at Meta Platforms and built core systems infrastructure at Akamai Technologies supporting 300k edge machines. She holds a master’s degree in Computer Science from the University of Pennsylvania.
Bolun Cai leads the application of large models for content moderation at Douyin Group. He is the author of more than 40 peer-reviewed papers in the area of multimodal content understanding, which have been cited over 5,000 times on Google Scholar. From 2022 to 2024, he was ranked among the World's Top 2% most-cited scientists by Stanford University. He has received several paper awards, among them the 2016 ESI Highly Cited and Hot Paper, the Best Student Paper Award at PCM 2016, the Best Paper Finalist at ICIP 2017, and the Top 3% Paper at ICASSP 2023.
Wanrong Zhang is currently a research scientist at TikTok. Her current research focuses on AI safety and privacy. She has contributed to privacy-preserving machine learning, differential privacy, and LLM watermarking, with research spanning both theoretical advancements and real-world applications. Dr. Zhang obtained her PhD from the Georgia Institute of Technology. Prior to her current role at TikTok, she was a Computing Innovation Fellow at Harvard University.
Amit Jaspal is an Engineering Manager and Research Scientist at Meta with 14 years of expertise in building recommendation and information systems. He currently leads the e-commerce recommendations team at Meta, with prior experience in leading video recommendation, ads recommendation, and News Feed recommendation teams at Meta and LinkedIn. His interests lie in the practical applications of Machine Learning, Data Mining, and Information Retrieval to recommender systems. He has also received a research fellowship from the National Center for Supercomputing Applications and the Technological Development of Indian Languages Labs.