For some time now, I have been working to improve the reliability of observational methods of causal inference. Methods of causal inference aim to measure the causal effect of an intervention, free of confounding influences. They are essential components of any scientist's toolkit, enabling us to test theories and to evaluate policies. Observational methods of causal inference try to account for possible confounders without relying on experimental manipulation or explicit modeling: they compare situations with and without the intervention of interest that are similar in all observable dimensions. For example, epidemiologists regularly compare individuals exposed to different levels of a pollutant but otherwise similar (e.g., in age, education, and income). If observational methods were reliable, the world would be our lab, without our having to make it one by running experiments. Unfortunately, observational methods are currently seen as unreliable: in any given application, important confounders may remain unobserved, biasing the results in an unknown direction and by an unknown magnitude.
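To make the problem concrete, here is a minimal simulation of that failure mode (a sketch of my own, with illustrative variable names and parameter values, not an example taken from my papers): an observational comparison that adjusts for the observed covariate recovers the wrong effect whenever an unobserved confounder drives both exposure and outcome.

```python
# Minimal sketch: bias of an observational comparison under an
# unobserved confounder. All names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                                  # observed covariate (e.g. age)
u = rng.normal(size=n)                                  # unobserved confounder
d = 0.5 * x + 0.8 * u + rng.normal(size=n)              # exposure depends on x and u
y = 1.0 * d + 0.7 * x + 0.9 * u + rng.normal(size=n)    # true causal effect of d is 1.0

def ols_slope(y, *regressors):
    """First slope coefficient of an OLS regression with an intercept."""
    X = np.column_stack([np.ones_like(y), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

print("adjusting for x only  :", ols_slope(y, d, x))     # ~1.44, biased upward
print("adjusting for x and u :", ols_slope(y, d, x, u))  # ~1.00, the true effect
```

Because u is unobserved in practice, the first regression is the best an observational analysis can do, and nothing in the data itself reveals that its estimate is off, or in which direction.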
In my work, I try to improve the reliability of observational methods by establishing the size and direction of their bias. I use a two-pronged approach, with a theoretical leg and an empirical leg. The theoretical leg consists in building realistic models in order to understand and gauge the bias of observational methods. The empirical leg consists in accumulating evidence on their bias under precisely defined sets of conditions.
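In the simplest linear case, the textbook omitted-variable-bias formula illustrates what "size and direction of the bias" means (this is standard regression algebra, with generic notation, not a result of mine):

```latex
% Short regression of outcome Y on treatment D, omitting confounder U:
\[
  \hat{\beta}_{\text{obs}} \xrightarrow{\,p\,} \beta + \gamma\,\delta ,
\]
% \beta:  causal effect of the treatment D on the outcome Y
% \gamma: effect of the unobserved confounder U on Y
% \delta: coefficient from a regression of U on D
```

The sign of the product gamma x delta gives the direction of the bias, and its magnitude gives the size; establishing both in realistic settings is exactly what the two legs of my work aim to do.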
At the time of writing, my main results are:
Click on the links below to learn more about these results and the empirical and theoretical legs of my work.