Each poster will be presented in both sessions.

Posters

* *Learning with Rejection*. Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri.
* *Inferring Complex Networks of Influence: Understanding Green Investment Tipping Points*. Amir Sani and Antoine Mandel.
* *Macroeconomic Agent Based Model Calibration using Iterated Surrogates*. Amir Sani, Francesco Lamperti, Antoine Mandel and Andrea Roventini.
* *Estimating Individual Treatment Effect: Generalization Bounds and Algorithms*. Uri Shalit, Fredrik D. Johansson and David Sontag.
* *Deep Convolutional Neural Networks for Pairwise Causality*. Karamjit Singh, Garima Gupta, Lovekesh Vig, Gautam Shroff, and Puneet Agarwal **(voted best poster by participants)**.
* *Idiomatic Application of Causal Analysis to Social Media Timelines: Opportunities and Challenges*. Golnoosh Farnadi and Emre Kiciman.
* *Learning Causal Graphs with Constraints*. Murat Kocaoglu, Alexandros G. Dimakis and Sriram Vishwanath.
* *Causal Compression*. Aleksander Wieczorek and Volker Roth.
* *Towards A Complete Identification Algorithm for Missing Data Problems*. Ilya Shpitser and James M. Robins.
* *Predicting the Effect of Interventions Using Invariance Principles for Nonlinear Models*. Christina Heinze-Deml, Jonas Peters and Nicolai Meinshausen.
* *Curing the Curse of Non-Recursiveness in Structural Causal Models*. Stephan Bongers, Jonas Peters, Bernhard Schölkopf and Joris M. Mooij.
* *User Model-Based Intent-Aware Metrics for Multilingual Search Evaluation*. Alexey Drutsa, Andrey Shutovich, Philipp Pushnyakov, Evgeniy Krokhalyov, Gleb Gusev and Pavel Serdyukov.
* *Weighted Gaussian Process for Estimating Treatment Effect*. Junfeng Wen, Negar Hassanpour and Russell Greiner.
* *Deep Counterfactual Prediction using Instrumental Variables*. Jason Hartford, Greg Lewis, Kevin Leyton-Brown and Matt Taddy.
* *Validation of knock-out predictions in large-scale gene perturbation experiments*. Philip Versteeg, Sach Mukherjee and Joris M. Mooij.
* *Large-scale Validation of Counterfactual Learning Methods: A Test-Bed*. Damien Lefortier, Xiaotao Gu, Adith Swaminathan, Thorsten Joachims and Maarten de Rijke.
* *Probabilistic Matching: Incorporating Uncertainty to Correct for Selection Bias*. Hui Fen Tan, Giles J. Hooker and Martin T. Wells.
* *Automated Tuning of Ad Auctions*. Dilan Gorur, Debabrata Sengupta, Levi Boyles, Patrick Jordan, Eren Manavoglu, Elon Portugaly, Meng Wei and Yaojia Zhu.
* *Learning Causal Models from Existing Randomized Experiments: Meta-analysis Using Regularized Instrumental Variables*. Alexander Peysakhovich and Dean Eckles.
TBA
A theory of contextual interventions has developed and matured to the point where contextual bandits can be routinely deployed to solve appropriate problems. A more general theory of contextual interventions in complex settings appears desirable and is under development, leading to two new areas:

- Sequential decision making around deviations from existing solutions
- Global exploration strategies for arbitrary contexts.
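To make the first, mature part of this picture concrete, here is a minimal sketch of a contextual bandit: epsilon-greedy exploration over per-action linear reward models. The toy setup (3 actions, 5-dimensional Gaussian contexts, linear rewards, the `epsilon` value) is an illustrative assumption, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: 3 actions, 5-dimensional contexts, linear rewards.
n_actions, dim = 3, 5
true_weights = rng.normal(size=(n_actions, dim))

def reward(context, action):
    """Noisy linear reward for the chosen action (simulator, unknown to the learner)."""
    return true_weights[action] @ context + rng.normal(scale=0.1)

# Epsilon-greedy contextual bandit with per-action regularized least squares.
A = [np.eye(dim) for _ in range(n_actions)]    # Gram matrix per action
b = [np.zeros(dim) for _ in range(n_actions)]  # reward-weighted context sum per action
epsilon = 0.1

for t in range(2000):
    context = rng.normal(size=dim)
    estimates = [np.linalg.solve(A[a], b[a]) @ context for a in range(n_actions)]
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))  # explore uniformly
    else:
        action = int(np.argmax(estimates))     # exploit current estimates
    r = reward(context, action)
    A[action] += np.outer(context, context)
    b[action] += r * context
```

After enough rounds, the greedy policy induced by the learned weights picks the per-context best action far more often than chance; the two research directions listed above concern what to do when such a policy must deviate from (or explore beyond) an already-deployed solution.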
In this preliminary research, we present early results on extracting repeatable probabilistic templates from global media-event sequences. Such patterns could hint at weak forms of causality in global social dynamics. As a basis, we use the evolving graph of interlinked events generated by the "Event Registry" system (eventregistry.org), where each event is represented as an object composed of three main components: social, topical, and temporal. In the analysis, we show early results on the structure of the problem and the spectrum of extracted templates, from simple to hard.
We develop a causal inference approach to recommender systems. Observational recommendation data contains two sources of information: which items each user decided to look at, and which of those items each user liked. We assume these two kinds of information come from different models: the exposure data comes from a model of how users discover items to consider; the click data comes from a model of how users decide which items they like. Traditionally, recommender systems use the click data alone (or ratings data) to infer user preferences. But this inference is biased by the exposure process, i.e., by the fact that users do not consider each item independently at random. We use causal inference to correct for this bias. On real-world data, we demonstrate that causal inference for recommender systems leads to improved generalization to new data.
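A toy numerical sketch of the exposure bias described above, corrected with inverse-propensity weighting. The two-item world, its probabilities, and the use of known exposure propensities are illustrative assumptions for this sketch, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-item world: item 1 is genuinely preferred,
# but item 0 is exposed to users far more often.
true_pref = np.array([0.5, 0.9])    # P(like | exposed), the click model
expose_prob = np.array([0.9, 0.2])  # P(exposed), the exposure model

n_users = 10_000
exposed = rng.random((n_users, 2)) < expose_prob
liked = exposed & (rng.random((n_users, 2)) < true_pref)

# Naive estimate from clicks alone: confounds exposure with preference,
# so the heavily exposed (but less liked) item 0 ranks first.
naive = liked.mean(axis=0)

# Inverse-propensity correction: weight each click by 1 / P(exposed),
# which recovers P(like | exposed) and the correct ranking.
ips = (liked / expose_prob).mean(axis=0)
```

The naive click rates come out near (0.45, 0.18), ranking the less-preferred item first, while the reweighted estimates recover roughly (0.5, 0.9). In practice the exposure propensities are not known and must themselves be estimated, which is where the modeling work lies.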