Invited Speakers

Exploration in Recommender Systems

Recommender systems are known to suffer from a closed feedback loop effect, exploiting popular choices while leaving a large majority of the item corpus and creator base under-discovered. We study exploration as a way of overcoming such biases and discovering worthy fresh and tail items and creators in recommender systems. We built dedicated exploration stacks in an industrial recommendation platform to protect the initial exposure of fresh and tail items, and introduced exploration into the main recommendation stacks to lower the growth barriers for these items. We set two high-level objectives for exploration, increasing coverage and improving efficiency, and discuss technologies for maximizing these objectives, including real-time learning systems, uncertainty estimation and bandits, and improved generalization. If time permits, we will also touch upon new experiment designs for systematically comparing the performance of different exploration treatments, as well as future work on creators and ecosystem health.
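For readers new to the uncertainty-and-bandits idea mentioned in the abstract, the following is a minimal, illustrative upper-confidence-bound (UCB) sketch of uncertainty-driven item exploration. It is not the production exploration stack described in the talk, and all names in it (ItemStats, select_item, the reward model) are hypothetical.

```python
# Illustrative only: a minimal UCB item-exploration sketch, not the production
# exploration stack described in the talk. All names are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class ItemStats:
    impressions: int = 0   # times the item has been shown
    rewards: float = 0.0   # accumulated positive feedback (e.g., clicks)

def ucb_score(stats: ItemStats, total_impressions: int, c: float = 2.0) -> float:
    """Mean reward plus an uncertainty bonus; fresh/tail items with few
    impressions get a large bonus, which lowers their growth barrier."""
    if stats.impressions == 0:
        return float("inf")  # always give unexposed items a first look
    mean = stats.rewards / stats.impressions
    bonus = math.sqrt(c * math.log(total_impressions) / stats.impressions)
    return mean + bonus

def select_item(corpus: dict[str, ItemStats]) -> str:
    """Pick the item maximizing the UCB score for the next recommendation slot."""
    total = sum(s.impressions for s in corpus.values()) + 1
    return max(corpus, key=lambda item_id: ucb_score(corpus[item_id], total))

# Example: a tail item with no exposure wins the slot over a popular item.
corpus = {"popular": ItemStats(impressions=10_000, rewards=900.0),
          "fresh_tail": ItemStats()}
print(select_item(corpus))  # -> "fresh_tail"
```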

Minmin Chen is a research scientist at Google Brain, where she leads a team working on reinforcement learning and online learning for recommender systems. Her passion lies in developing and deploying RL and ML techniques to improve the long-term user experience and journey on recommendation platforms and to optimize the long-term value of Google's recommendation products. She leads both fundamental and applied research and has delivered approximately 100 launches across Google recommendation products since 2017.

Contracts, Delegation, and Incentives in Decentralized Machine Learning

Contract theory is the study of incentives when parties transact in the presence of private information.  We augment classical contract theory to incorporate a role for learning from data, where the overall goal of the adaptive mechanism is to obtain desired statistical behavior.  We consider applications of this framework to problems in federated learning, the delegation of data collection, and recommendation systems.  We design optimal and near-optimal contracts that deal with two fundamental machine learning challenges in decentralized settings: the lack of certainty in the assessment of model quality and the lack of knowledge regarding the optimal performance of any model.
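As background for the "augment classical contract theory" framing, the textbook principal-agent contract design problem can be stated as below. This is the classical setup only, not the learning-augmented contracts of the talk; v is the principal's value, c the agent's action cost, and F the outcome distribution induced by the agent's privately chosen action.

```latex
% Classical principal-agent contract design (textbook form, not the
% learning-augmented contracts of the talk). The agent privately chooses an
% action a at cost c(a); the principal observes only a stochastic outcome x
% and commits to a payment rule t(x).
\begin{align*}
  \max_{t(\cdot),\, a} \quad & \mathbb{E}_{x \sim F(\cdot \mid a)}\big[ v(x) - t(x) \big]
      && \text{(principal's expected value net of payment)} \\
  \text{s.t.} \quad & a \in \arg\max_{a'} \; \mathbb{E}_{x \sim F(\cdot \mid a')}\big[ t(x) \big] - c(a')
      && \text{(incentive compatibility)} \\
  & \mathbb{E}_{x \sim F(\cdot \mid a)}\big[ t(x) \big] - c(a) \;\ge\; 0
      && \text{(individual rationality)}
\end{align*}
```

In the learning settings of the talk, the observed outcome is itself a statistical estimate of model quality, which is the source of the two challenges named in the abstract: uncertainty in assessing quality and unknown optimal performance.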

Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. He received his Master's in Mathematics from Arizona State University and earned his PhD in Cognitive Science in 1985 from the University of California, San Diego. He was a professor at MIT from 1988 to 1998. His research interests bridge the computational, statistical, cognitive, biological and social sciences. Prof. Jordan is a member of the National Academy of Sciences, a member of the National Academy of Engineering, a member of the American Academy of Arts and Sciences, and a Foreign Member of the Royal Society. He is a Fellow of the American Association for the Advancement of Science. He was the inaugural winner of the World Laureates Association (WLA) Prize in 2022. He received the Ulf Grenander Prize from the American Mathematical Society in 2021, the IEEE John von Neumann Medal in 2020, the IJCAI Research Excellence Award in 2016, the David E. Rumelhart Prize in 2015, and the ACM/AAAI Allen Newell Award in 2009. He gave the Inaugural IMS Grace Wahba Lecture in 2022, the IMS Neyman Lecture in 2011, and an IMS Medallion Lecture in 2004. He was a Plenary Lecturer at the International Congress of Mathematicians in 2018.

In 2016, Prof. Jordan was named the "most influential computer scientist" worldwide in an article in Science, based on rankings from the Semantic Scholar search engine.

Causal Inference for Trustworthy Recommender Systems

Data-driven recommender systems have exhibited remarkable success across diverse Web applications, primarily due to their exceptional capability in tailoring personalized information delivery. Nevertheless, these systems continue to grapple with trustworthiness issues, including various biases, out-of-distribution (OOD) shifts, and vulnerability to shilling attacks. Such issues lead to unfair recommendations, degrade the user experience, and hurt the interests of both users and recommender platforms. To address these pressing concerns, we embrace causal recommender modeling. In contrast to purely data-driven methods, causal recommender systems delve into the mechanics of data generation, employing causal techniques to mitigate a spectrum of trustworthiness issues. This presentation will review existing endeavors that harness causal techniques for debiasing, handling OOD shifts, and defending against shilling attacks. Moreover, it will shed light on prospective directions for trustworthy recommendation, such as trustworthy LLMs for recommendation and generative recommendation.
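As a concrete flavor of the debiasing line of work surveyed in the talk, the sketch below shows inverse-propensity-scored (IPS) training, one standard causal technique for correcting exposure and popularity bias in implicit-feedback recommendation. It is a minimal, hypothetical example under assumed inputs, not the specific methods covered in the presentation.

```python
# Illustrative only: inverse-propensity-scored (IPS) training loss, a standard
# causal technique for correcting exposure/popularity bias. A minimal sketch,
# not the specific methods covered in the talk.
import torch

def ips_loss(scores: torch.Tensor,
             clicks: torch.Tensor,
             propensities: torch.Tensor,
             clip: float = 0.05) -> torch.Tensor:
    """Binary cross-entropy on logged (item shown -> clicked?) data, reweighted
    by 1 / P(item was exposed). Popular, frequently exposed items are
    down-weighted and rarely exposed tail items are up-weighted, approximating
    the loss under uniform exposure."""
    weights = 1.0 / propensities.clamp(min=clip)  # clip to control variance
    bce = torch.nn.functional.binary_cross_entropy_with_logits(
        scores, clicks, reduction="none")
    return (weights * bce).mean()

# Toy usage: model scores for 4 logged impressions, their click labels, and
# estimated exposure propensities (e.g., derived from item popularity).
scores = torch.tensor([2.0, -1.0, 0.5, 0.0])
clicks = torch.tensor([1.0, 0.0, 1.0, 0.0])
propensities = torch.tensor([0.9, 0.9, 0.1, 0.05])  # tail items rarely exposed
print(ips_loss(scores, clicks, propensities))
```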

Dr. Wenjie Wang is a research fellow at the School of Computing, National University of Singapore. He received his Ph.D. in Computer Science from the National University of Singapore, supervised by Prof. Tat-Seng Chua. Dr. Wang is a recipient of the Google Ph.D. Fellowship and the NUS Dean's Graduate Research Excellence Award. His research interests cover recommender systems, data mining, and causal inference. His work appears in top conferences and journals such as SIGIR, ACMMM, KDD, WWW, and TOIS, and one of his papers was selected for the ACMMM 2019 Best Paper Final List.