Causal Inference Speaker Series - Prof. Nathan Kallus

Debiased Inference on Functionals of Inverse Problems and Applications to Long-Term Causal Inference

By Data Sciences Institute

Date and time

Wednesday, May 29 · 12 - 2:30pm EDT

Location

Data Sciences Institute

10th Floor, 700 University Avenue, Toronto, ON M5G 1X6, Canada

About this event

Description:

This talk is part of the Causal Inference Emergent Data Science Program.

In the presence of endogeneity, instruments and negative controls can still give us a view onto causal effects, but only indirectly, e.g., as a function whose residuals are orthogonal to instruments. Without imposing (unrealistic) parametric restrictions, these inverse problems are generally ill posed, making it difficult to reliably learn a solution from data. In this talk I discuss how to nonetheless make reliable inferences on linear functionals of (nonparametric) solutions to these inverse problems, such as average effects. Any such parameter admits a doubly robust representation involving the solution to a dual inverse problem that is specific to the functional of interest. We use this to develop debiased estimators that are root-n-asymptotically normal around the parameter as long as either the primal or dual inverse problem is sufficiently well posed compared to the functional complexity of the (generic, nonparametric) hypothesis classes for the solutions to the inverse problems, all without knowing which inverse problem is the better posed, or to what degree. The result is enabled by strong guarantees for a new iterated Tikhonov regularized adversarial learner for solutions to inverse problems over general hypothesis classes.

I will then discuss the particular problem of using the plethora of A/B tests undertaken on digital platforms to learn better surrogate indices for inference on long-term causal effects from short-term experiments. While this too can be phrased as a functional of an instrumental variable regression, since each A/B test has a fixed size, here we encounter the additional challenge of weak instruments, which introduces a non-vanishing bias. We resolve this by learning the nuisances for our debiased estimator using a jackknifed loss function that eliminates this bias and recovers consistency when we have many, albeit weak, instruments.
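To fix ideas, here is a schematic of the kind of doubly robust representation the abstract alludes to, in notation introduced here rather than taken from the talk materials. Suppose the structural function h_0 solves the inverse problem defined by instrument orthogonality, and the target is a linear functional theta_0 of it:

\[
\mathbb{E}[\, Y - h_0(X) \mid Z \,] = 0, \qquad \theta_0 = \mathbb{E}[\, m(W; h_0) \,].
\]

Pairing h_0 with the solution q_0 of a dual inverse problem tailored to the functional m yields the representation

\[
\theta_0 = \mathbb{E}\bigl[\, m(W; h) + q(Z)\,\{ Y - h(X) \} \,\bigr],
\]

which remains valid whenever h = h_0 or q = q_0. A debiased estimator averages this expression with (typically cross-fitted) estimates of h and q plugged in, which is what permits root-n inference when either inverse problem is sufficiently well posed.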

There will be a lunch reception before the talk, and a student-led discussion will follow it.


Biography:

Nathan Kallus is an Associate Professor at the Cornell Tech campus of Cornell University in NYC and a Research Director at Netflix. Nathan's research interests include the statistics of optimization under uncertainty, causal inference (especially when combined with machine learning), sequential and dynamic decision making, and algorithmic fairness. He holds a PhD in Operations Research from MIT as well as a BA in Mathematics and a BS in Computer Science from UC Berkeley. Before coming to Cornell, Nathan was a Visiting Scholar at USC's Department of Data Sciences and Operations and a Postdoctoral Associate in MIT's Operations Research and Statistics group.


Organized by

Data Sciences Institute