Schedule

9.00 - Introduction

9.10 - Invited talk: Off-Policy Correction for a REINFORCE Recommender System, Minmin Chen (Google Brain)

9.40 - On the Design of Estimators for Off-Policy Evaluation - Nikos Vlassis (Netflix)*; Aurelien Bibaut (UC Berkeley); Tony Jebara (Netflix)

10.05 - BEARS: Towards an Evaluation Framework for Bandit-based Interactive Recommender Systems - Andrea P Barraza (Insight Centre for Data Analytics)*

10.30 - Coffee Break / posters

11.20 - A More Comprehensive Offline Evaluation of Active Learning in Recommender Systems - Diego Carraro (Insight Centre for Data Analytics)*; Derek Bridge (Insight Centre for Data Analytics)

11.40 - On measuring polarization using recommender system scores - Robert Keyes (Shopify Inc.); Trish Gillett (Shopify Inc.); Putra Manggala (Shopify Inc.)*

12.00 - Surprise Announcement!

12.10 - Lunch break

14.00 - Invited talk: Offline and Online Performance of Contextual Bandit Algorithm in Personalized Ranking, Tao Ye, Mohit Singh (Pandora)

14.30 - RecoGym: A Reinforcement Learning Environment for the Problem of Product Recommendation in Online Advertising - Flavian Vasile (Criteo)*

14.50 - Monte Carlo Estimates of Evaluation Metric Error and Bias - Mucun Tian (Boise State University)*; Michael D Ekstrand (Boise State University)

15.15 - Coffee break / posters

16.00 - Invited talk: Correlation vs Causation in Recommender Systems, Yves Raimond (Netflix)

16.20 - CounterFactual Regression with Importance Sampling Weights - Negar Hassanpour (University of Alberta)*; Russell Greiner (University of Alberta)

16.40 - From User Experience to Offline Metrics and Back Again: A Research Agenda - Joseph Konstan (University of Minnesota)*

17.00 - Panel / fireside chat with invited speakers

17.20 - End of workshop