MIT Stochastics & Statistics Seminar: Peter Bartlett
Abstract: In game-theoretic formulations of prediction problems, a strategy makes a decision, observes an outcome and pays a loss. The aim is to minimize the regret, which is the amount by which the total loss incurred exceeds the total loss of the best decision in hindsight. This talk will focus on the minimax optimal strategy, which minimizes the regret, in three settings: prediction with log loss (a formulation of sequential probability density estimation that is closely related to sequential compression, coding, gambling and investment problems), sequential least squares (where decisions and outcomes lie in a subset of a Hilbert space, and loss is squared distance), and linear regression (where the aim is to predict real-valued labels as well as the best linear function). We obtain the minimax optimal strategy for these problems, and show that it can be efficiently computed.
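To make the regret quantity concrete: the toy sketch below (not the minimax strategy discussed in the talk, just an illustrative baseline) plays "follow the leader" in the sequential least-squares setting, predicting the mean of past outcomes, and compares its cumulative squared loss to that of the best single decision in hindsight, which for squared loss is the overall mean.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic outcome sequence (an assumption for illustration only).
outcomes = rng.normal(loc=1.0, scale=0.5, size=100)

# Follow-the-leader: predict the running mean of past outcomes.
predictions = np.zeros_like(outcomes)
for t in range(1, len(outcomes)):
    predictions[t] = outcomes[:t].mean()

# Cumulative squared loss of the strategy.
total_loss = np.sum((predictions - outcomes) ** 2)

# Best fixed decision in hindsight minimizes squared loss: the overall mean.
best_fixed = outcomes.mean()
best_loss = np.sum((best_fixed - outcomes) ** 2)

# Regret: excess of the strategy's loss over the hindsight-optimal loss.
regret = total_loss - best_loss
print(f"regret after {len(outcomes)} rounds: {regret:.3f}")
```

For this strategy and loss, the regret grows only logarithmically in the number of rounds; the talk concerns strategies that make the worst-case regret as small as possible.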
Bio: Peter Bartlett is a professor in the Division of Computer Science and the Department of Statistics at the University of California, Berkeley. He is the co-author of the book Neural Network Learning: Theoretical Foundations. He has served as associate editor of the journals Machine Learning, Mathematics of Control, Signals, and Systems, the Journal of Machine Learning Research, the Journal of Artificial Intelligence Research, and the IEEE Transactions on Information Theory. He was awarded the Malcolm McIntosh Prize for Physical Scientist of the Year in Australia for his work in statistical learning theory. He was a Miller Institute Visiting Research Professor in Statistics and Computer Science at U.C. Berkeley in Fall 2001, and a fellow, senior fellow and professor in the Research School of Information Sciences and Engineering at the Australian National University's Institute for Advanced Studies (1993-2003). He is also an honorary professor in the Department of Computer Science and Electrical Engineering at the University of Queensland.