Calendar of Events

Matrix Concentration for Products

Jonathan Niles-Weed (NYU)
online

Abstract: We develop nonasymptotic concentration bounds for products of independent random matrices. Such products arise in the study of stochastic algorithms, linear dynamical systems, and random walks on groups. Our bounds exactly match those available for scalar random variables and continue the program, initiated by Ahlswede-Winter and Tropp, of extending familiar concentration bounds to the…
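
The setting is easy to simulate. The sketch below (a toy model, not the paper's construction) draws products of independent random perturbations of the identity and estimates how tightly the spectral norm concentrates; the dimension, number of factors, and noise scale are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, trials = 10, 50, 2000   # dimension, number of factors, Monte Carlo runs
    sigma = 0.1                   # scale of each random perturbation (arbitrary)

    def random_product():
        # Product of n independent factors of the form I + sigma * G / sqrt(d),
        # a toy model of the matrix products discussed in the talk.
        P = np.eye(d)
        for _ in range(n):
            G = rng.standard_normal((d, d))
            P = P @ (np.eye(d) + sigma * G / np.sqrt(d))
        return P

    norms = np.array([np.linalg.norm(random_product(), 2) for _ in range(trials)])
    print(f"mean spectral norm ~ {norms.mean():.3f}, std ~ {norms.std():.3f}")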


On Using Graph Distances to Estimate Euclidean and Related Distances

Statistics and Data Science Seminar Series
Ery Arias-Castro (University of California, San Diego)
online

Abstract: Graph distances have proven quite useful in machine learning/statistics, particularly in the estimation of Euclidean or geodesic distances. The talk will include a partial review of the literature, and then present more recent developments on the estimation of curvature-constrained distances on a surface, as well as on the estimation of Euclidean distances based on…
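
A minimal sketch of the underlying idea, in the spirit of the classical Isomap construction rather than the speaker's new estimators: approximate geodesic distances on a curve by shortest-path distances in an epsilon-neighborhood graph. The sample size and radius below are arbitrary:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.sparse.csgraph import shortest_path

    rng = np.random.default_rng(0)
    n, eps = 500, 0.3                  # sample size, neighborhood radius (arbitrary)

    # Points on a quarter of the unit circle: geodesic distance = arc length.
    theta = np.sort(rng.uniform(0, np.pi / 2, n))
    X = np.column_stack([np.cos(theta), np.sin(theta)])

    # Epsilon-neighborhood graph weighted by Euclidean distances
    # (zero entries are treated as absent edges by csgraph).
    D = squareform(pdist(X))
    W = np.where(D <= eps, D, 0.0)

    # Graph (shortest-path) distances approximate the geodesic distances.
    G = shortest_path(W, method="D", directed=False)
    true_geodesic = np.abs(theta[:, None] - theta[None, :])
    print(f"max |graph dist - geodesic dist| = {np.max(np.abs(G - true_geodesic)):.3f}")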


How to Trap a Gradient Flow

Sébastien Bubeck (Microsoft Research)
online

Abstract: In 1993, Stephen A. Vavasis proved that in any finite dimension, there exists a faster method than gradient descent to find stationary points of smooth non-convex functions. In dimension 2 he proved that 1/eps gradient queries are enough, and that 1/sqrt(eps) queries are necessary. We close this gap by providing an algorithm based on…
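
For context, the sketch below runs the baseline that the talk improves on: plain gradient descent on a smooth non-convex function in dimension 2, counting gradient queries until an eps-stationary point is reached. The test function and step size are arbitrary choices, and this is the standard gradient-descent baseline, not the paper's algorithm:

    import numpy as np

    def grad(x):
        # Gradient of f(x) = sin(3*x1)*cos(2*x2) + 0.1*(x1^2 + x2^2),
        # an arbitrary smooth non-convex test function on R^2.
        x1, x2 = x
        return np.array([3 * np.cos(3 * x1) * np.cos(2 * x2) + 0.2 * x1,
                         -2 * np.sin(3 * x1) * np.sin(2 * x2) + 0.2 * x2])

    def gd_until_stationary(x0, eps=1e-3, step=0.05, max_queries=100_000):
        # Run gradient descent until ||grad f(x)|| <= eps; count gradient queries.
        x = np.asarray(x0, dtype=float)
        for queries in range(1, max_queries + 1):
            g = grad(x)
            if np.linalg.norm(g) <= eps:
                return x, queries
            x = x - step * g
        return x, max_queries

    x, q = gd_until_stationary([1.0, 1.0])
    print(f"eps-stationary point {x} found after {q} gradient queries")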


Naive Feature Selection: Sparsity in Naive Bayes

Statistics and Data Science Seminar Series
Alexandre d'Aspremont (ENS, CNRS)
online

Abstract: Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data,…
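
The exact sparse maximum-likelihood solution is the subject of the talk; as a rough illustrative stand-in, the sketch below fits a Laplace-smoothed Bernoulli naive Bayes model to synthetic binary data and keeps the k features with the largest per-feature log-likelihood gain over a class-independent model. This is a simple heuristic, not the paper's exact method, and every name and parameter here is an arbitrary choice:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 1000, 50, 5            # samples, binary features, features to keep

    # Synthetic binary data: only the first k features differ across classes.
    y = rng.integers(0, 2, n)
    p = np.full((2, d), 0.5)
    p[1, :k] = 0.9
    X = (rng.uniform(size=(n, d)) < p[y]).astype(float)

    def fit_bernoulli(Xc):
        # Laplace-smoothed Bernoulli parameter estimates for one group of rows.
        return (Xc.sum(axis=0) + 1) / (len(Xc) + 2)

    def loglik(Xc, q):
        # Per-feature Bernoulli log-likelihood of rows Xc under parameters q.
        return (Xc * np.log(q) + (1 - Xc) * np.log(1 - q)).sum(axis=0)

    p0, p1 = fit_bernoulli(X[y == 0]), fit_bernoulli(X[y == 1])
    p_all = fit_bernoulli(X)

    # Score each feature by the likelihood it gains from class-conditional fits.
    gain = loglik(X[y == 0], p0) + loglik(X[y == 1], p1) - loglik(X, p_all)
    selected = np.sort(np.argsort(gain)[-k:])
    print("selected features:", selected)   # should recover features 0..k-1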


