Central Limit Theorems for Smooth Optimal Transport Maps

Tudor Manole (MIT)
E18-304

Abstract: One of the central objects in the theory of optimal transport is the Brenier map: the unique monotone transformation which pushes forward an absolutely continuous probability law onto any other given law. Recent work has identified a class of plug-in estimators of Brenier maps which achieve the minimax L^2 risk and are simple to compute. In this talk, we show that such estimators obey pointwise central limit theorems. This provides a first step toward the question of performing statistical…
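
For orientation, the standard quadratic-cost formulation (a textbook statement offered for reference; the talk's notation may differ): for an absolutely continuous source law \mu and a target law \nu on \mathbb{R}^d with finite second moments, Brenier's theorem gives a convex potential \varphi with

\[
T = \nabla \varphi, \qquad T_{\#}\mu = \nu,
\]

and T is the unique such monotone (gradient-of-convex) map pushing \mu to \nu; it also solves Monge's problem

\[
T \in \operatorname*{arg\,min}_{S \,:\, S_{\#}\mu = \nu} \int \| x - S(x) \|^2 \, d\mu(x).
\]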

A Flexible Defense Against the Winner’s Curse

Tijana Zrnic (Stanford University)
E18-304

Abstract: Across science and policy, decision-makers often need to draw conclusions about the best candidate among competing alternatives. For instance, researchers may seek to infer the effectiveness of the most successful treatment or determine which demographic group benefits most from a specific treatment. Similarly, in machine learning, practitioners are often interested in the population performance of the model that empirically performs best. However, cherry-picking the best candidate leads to the winner's curse: the observed performance for the winner is biased…
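
The bias is easy to exhibit in simulation. Below is a minimal sketch (an illustration of the phenomenon only, not the speaker's method): it cherry-picks the empirically best of K equally good candidates and records how much the winner's sample mean overstates its true mean.

import numpy as np

rng = np.random.default_rng(0)
K, n, trials = 50, 100, 2000       # candidates, samples per candidate, repetitions
theta = np.zeros(K)                # every candidate has true mean 0

optimism = []
for _ in range(trials):
    means = rng.normal(theta, 1.0, size=(n, K)).mean(axis=0)  # empirical means
    winner = means.argmax()                                   # pick the apparent best
    optimism.append(means[winner] - theta[winner])            # winner's excess

print(f"average optimism of the winner: {np.mean(optimism):.3f}")
# Prints roughly 0.22, about E[max of K standard normals] / sqrt(n), even
# though no candidate is actually better than any other.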

Estimating Direct Effects under Interference: A Spectral Experimental Design

Christopher Harshaw (Columbia University)
E18-304

Abstract: From clinical trials to corporate strategy, randomized experiments are a reliable methodological tool for estimating causal effects. In recent years, there has been growing interest in causal inference under interference, where treatment given to one unit can affect outcomes of other units. While the literature on interference has focused primarily on unbiased and consistent estimation, designing randomized network experiments to ensure tight rates of convergence remains relatively under-explored in many settings. In this talk, we study the problem…
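
One common formalization of the estimand, stated here for reference (the talk's precise setting may differ): with treatment assignments z \in \{0,1\}^n and potential outcomes Y_i(z) that may depend on the entire assignment vector, the average direct effect is

\[
\tau_{\mathrm{direct}} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{E}\big[\, Y_i(z_i = 1, z_{-i}) - Y_i(z_i = 0, z_{-i}) \,\big],
\]

which flips unit i's own treatment while holding the rest of the network's assignments fixed.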

Scaling Limits of Neural Networks

Boris Hanin (Princeton University)
E18-304

Abstract: Neural networks are often studied analytically through scaling limits: regimes in which structural network parameters such as depth, width, and the number of training datapoints are taken to infinity, yielding simplified models of learning. I will survey several such approaches with the goal of illustrating the rich and still not fully understood space of possible behaviors when some or all of the network's structural parameters are large.

Bio: Boris Hanin is an Assistant Professor at Princeton Operations Research and Financial…
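
A classical instance of such a limit, included for orientation (one example among the approaches surveyed): for a one-hidden-layer network of width n,

\[
f(x) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} a_i \, \sigma(\langle w_i, x \rangle), \qquad a_i, w_i \ \text{i.i.d.},
\]

the outputs converge as n \to \infty to a Gaussian process (Neal, 1996), the starting point for infinite-width analyses such as the NNGP and NTK limits.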

IDSS Community Social

Host: Prof. Fotini Christia (IDSS)
E17-399

All IDSS and extended IDSS community members welcome, including students, postdocs, faculty, and staff. Snacks provided!
