A regular international causal inference seminar. Sign up to our mailing list to receive announcements.
All seminars are on Tuesdays at 8:30 am PT / 11:30 am ET / 3:30 pm UTC / 11:30 pm Beijing time. Please note: Due to recent daylight-saving time changes, the meeting time in your local time zone may have shifted.
Zoom link and other details are provided below. Past talks are available here. Recordings of past webinars are available on our YouTube channel (subscribe to get notified!).
Tuesday, May 05, 2026: OCIS+INI joint webinar
- Speaker: Chan Park (University of Illinois Urbana-Champaign)
- Time: This event starts at 8:30 am PT / 11:30 am ET / 4:30 pm London time / 11:30 pm Beijing time
- Zoom details: Link to join, Meeting ID: 819 2387 7168, Passcode: Newton1
- Title: Distributional Balancing for Causal Inference: A Unified Framework via Characteristic Function Distance
- Abstract: Weighting methods are essential tools for estimating causal effects in observational studies, with the goal of balancing pre-treatment covariates across treatment groups. Traditional approaches pursue this objective indirectly, for example, via inverse propensity score weighting or by matching a finite number of covariate moments, and therefore do not guarantee balance of the full joint covariate distributions. Recently, distributional balancing methods have emerged as robust, nonparametric alternatives that directly target alignment of entire covariate distributions, but they lack a unified framework, formal theoretical guarantees, and valid inferential procedures. We introduce a unified framework for nonparametric distributional balancing based on the characteristic function distance (CFD) and show that widely used discrepancy measures, including the maximum mean discrepancy and energy distance, arise as special cases. Our theoretical analysis establishes conditions under which the resulting CFD-based weighting estimator achieves root-N consistency. Since the standard bootstrap may fail for this estimator, we propose subsampling as a valid alternative for inference. We further extend our approach to an instrumental variable setting to address potential unmeasured confounding. Finally, we evaluate the performance of our method through simulation studies and a real-world application, where the proposed estimator performs well and exhibits results consistent with our theoretical predictions.
[Paper]
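For attendees unfamiliar with distributional balancing, the energy distance named in the abstract can be computed directly from pairwise distances. The sketch below is our own NumPy illustration (the function name and weighted form are ours, not the paper's implementation); it measures how well a set of weights aligns two covariate samples:

```python
import numpy as np

def energy_distance(x, y, wx=None, wy=None):
    # Weighted empirical energy distance between samples x (n, d) and y (m, d):
    # ED = 2 E|X - Y| - E|X - X'| - E|Y - Y'|, zero when the distributions match.
    wx = np.full(len(x), 1 / len(x)) if wx is None else wx / np.sum(wx)
    wy = np.full(len(y), 1 / len(y)) if wy is None else wy / np.sum(wy)
    pdist = lambda a, b: np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return (2 * wx @ pdist(x, y) @ wy
            - wx @ pdist(x, x) @ wx
            - wy @ pdist(y, y) @ wy)
```

Balancing weights for the control group could then be chosen to minimize this discrepancy against the treated sample, rather than matching only a finite number of moments.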
Tuesday, May 12, 2026: OCIS+INI joint webinar
- Speaker: Gary Chan (University of Washington)
- Time: This event starts at 8:30 am PT / 11:30 am ET / 4:30 pm London time / 11:30 pm Beijing time
- Zoom details: Link to join, Meeting ID: 819 2387 7168, Passcode: Newton1
(Details TBA)
Tuesday, May 19, 2026:
- Speaker: Naoki Egami (Columbia University)
- Details: Zoom link, Meeting ID: 968 8371 7451, Passcode: 414559
- Title: Conformal Policy Learning with Distribution-Free Safety Guarantees: Application to AI-Powered Interventions
- Abstract: Generative AI is emerging as a new class of intervention in the social sciences, with applications designed to change attitudes and behaviors through scalable, personalized interactions. For example, conversational agents have been used to reduce political polarization and improve workplace productivity. At the same time, recent empirical studies highlight an important risk: while such interventions may benefit many individuals and tasks, they may also harm others. How, then, can AI interventions be deployed safely?
In this paper, we develop a new statistical framework, conformal policy learning, to deliver pre-specified safety guarantees when deciding whether individuals should receive a new intervention or the status quo. For instance, a researcher may require that the probability that an individual is harmed by the chosen intervention is below 1%. Using tailored conformal hypothesis testing, our method provides finite-sample safety guarantees under the standard exchangeability assumption, without relying on any modeling assumptions. It also achieves asymptotically optimal power or welfare maximization when the conditional expectation functions of outcomes are correctly specified. Thus, our treatment assignment rule is guaranteed to be safe in finite samples while attaining optimality under standard modeling assumptions. In practice, our framework enables researchers to deploy AI safely by assigning AI interventions only to people and tasks that satisfy user-specified safety requirements, and by reverting to the status quo otherwise. This offers a middle ground between two undesirable extremes: unfiltered deployment that ignores AI risks and total avoidance due to safety concerns. We illustrate the method through extensive simulations and an experiment in which randomly assigned AI chatbots are used to reduce conspiracy beliefs. This is joint work with Ying Jin.
- Discussant: Eli Ben-Michael (Carnegie Mellon University)
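The finite-sample safety guarantee described in the abstract rests on conformal p-values, whose basic split-conformal form is easy to state. The sketch below is our own simplified illustration, not the authors' tailored test; the scores, threshold, and decision rule are hypothetical:

```python
import numpy as np

def conformal_pvalue(calib_scores, test_score):
    # Split-conformal p-value: under exchangeability of calibration and test
    # scores, P(p-value <= a) <= a in finite samples, with no model assumptions.
    n = len(calib_scores)
    return (1 + np.sum(calib_scores >= test_score)) / (n + 1)

def deploy_intervention(calib_scores, test_score, alpha=0.01):
    # Hypothetical safety rule: deploy the new intervention only when the
    # conformal test rejects the "this unit is harmed" null at level alpha;
    # otherwise revert to the status quo.
    return conformal_pvalue(calib_scores, test_score) <= alpha
```

This mirrors the "middle ground" in the abstract: units that cannot be certified safe at the user-specified level simply keep the status quo.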
Tuesday, May 26, 2026: OCIS+INI joint webinar (Details TBA)
Tuesday, June 02, 2026:
- Speaker: Suhas Vijaykumar (UC San Diego)
- Details: TBA
Tuesday, June 09, 2026: OCIS+INI joint webinar
- Speaker: Yixin Wang (University of Michigan)
- Details: TBA
Tuesday, June 16, 2026: OCIS+INI joint webinar (Details TBA)
Tuesday, June 23, 2026:
- Speaker: Falco Bargagli Stoffi (University of California, Los Angeles)
- Zoom details: Zoom link, Meeting ID: 968 8371 7451, Passcode: 414559
- Title: Stable Discovery of Treatment Effect Modifiers
- Abstract: Identifying covariates that modify treatment effects is a critical problem in causal inference. Yet existing data-adaptive methods lack rigorous error control, risking spurious findings that fail to replicate. We propose a method combining pseudo-outcomes with a novel cross-fitted stability selection algorithm to achieve finite-sample false discovery control for effect modifiers. We prove that selection probabilities are asymptotically unbiased, converging to oracle probabilities at parametric rate under doubly robust pseudo-outcome estimation. False discovery is controlled at the nominal level while maintaining power to detect genuine heterogeneity. We demonstrate the method on simulated and real-world data.
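The core of stability selection is simple to sketch: refit a base selector on many random subsamples and record how often each covariate is chosen. The code below is a generic illustration of that idea, with a toy correlation-screening selector standing in for the paper's cross-fitted, pseudo-outcome-based algorithm:

```python
import numpy as np

def stability_selection(X, y, base_select, B=50, frac=0.5, seed=0):
    # Selection frequency of each feature across B random subsamples of
    # size frac * n; base_select(Xs, ys) returns a boolean mask of features.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        counts += base_select(X[idx], y[idx])
    return counts / B

def corr_select(Xs, ys, k=2):
    # Toy base selector: keep the k features most correlated with the outcome.
    c = np.abs(np.corrcoef(Xs.T, ys)[-1, :-1])
    return c >= np.sort(c)[-k]
```

Covariates whose selection frequency exceeds a chosen threshold are declared effect modifiers; the talk's contribution is making that thresholding come with finite-sample false discovery control.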
Recordings of our past webinars are available on YouTube. Follow us on YouTube to stay notified!
To stay up to date on upcoming presentations and receive Zoom invitations, please join our mailing list. You will receive an email to confirm your subscription. If you are already subscribed to our mailing list and would like to unsubscribe, you may do so here.
If there is a speaker you would like to hear at the Online Causal Inference Seminar, you may let us know here.
Please check out our opportunities in causal inference page for conferences, workshops, and job listings! If you would like us to list an opportunity, please email us at onlinecausalinferenceseminar@gmail.com.
The seminars are held on Zoom and last 60 minutes. Our seminars will typically follow one of three formats:
Format 1: single presentation
45 minutes of presentation
10 minutes of discussion, led by an invited discussant
Q&A, time permitting
Format 2: two presentations
Two presentations, 25-30 minutes each
Q&A, time permitting
Format 3: interview
40-45 minute conversation with a leader in causal inference
15-20 minutes of Q&A
The moderator collects audience questions during the Q&A.
Moderators may ask you to unmute yourself to participate in the discussion. Please note that you may be recorded if you activate your audio or video during the seminar.
Organizers: Oliver Dukes (Ghent), Naoki Egami (Columbia), Aditya Ghosh (Stanford), Guido Imbens (Stanford), Ying Jin (Wharton), Sara Magliacane (U of Amsterdam), Razieh Nabi (Emory), Ema Perkovic (U of Washington), Dominik Rothenhäusler (Stanford), Rahul Singh (Harvard), Mats Stensrud (EPFL), Qingyuan Zhao (Cambridge)
Advisory committee: Susan Athey (Stanford), Guillaume Basse (Stanford), Peter Bühlmann (ETH Zürich), Peng Ding (Berkeley), Andrew Gelman (Columbia), Guido Imbens (Stanford), Fabrizia Mealli (Florence), Nicolai Meinshausen (ETH Zürich), Maya Petersen (Berkeley), Thomas Richardson (UW), Dominik Rothenhäusler (Stanford), Jas Sekhon (Berkeley/Yale), Stefan Wager (Stanford)
If you have feedback or suggestions, please e-mail us at onlinecausalinferenceseminar@gmail.com.
We gratefully acknowledge support by the Stanford Department of Statistics and the Stanford Data Science Initiative.
You can join the webinar by clicking the link on this webpage. If you signed up for the mailing list, you will receive an email with the link before the webinar begins. On Tuesdays, please join the seminar shortly before the 8:30 am PT start time.
Due to high demand, we will host the seminar as a Zoom webinar. As an attendee, you will not be able to unmute yourself. If you have questions about the content of the talk, please submit them using the Zoom Q&A feature. Time permitting, and depending on the volume of questions, the moderator will either ask your question for you or invite you to ask it yourself, unmuting you at a suitable time. In some meetings, the speaker's collaborators will be online to address questions in the Q&A. Note that the Q&A is moderated by us, so you will only see some of the other attendees' questions. If you want to send messages to the moderators during the seminar, please use the Zoom chat feature.
If you have not used Zoom before, we highly recommend downloading and installing the Zoom client before the meeting. Additional instructions on how to use Zoom during a webinar can be found here. Note that for the online causal inference seminar, we do not require registration in advance so you will be able to join by simply clicking the link on this webpage or in the email.
If you have further questions, please drop us an email at onlinecausalinferenceseminar@gmail.com.