
Emergent outlier subspaces in high-dimensional stochastic gradient descent

Reza Gheissari, Northwestern University
E18-304

Abstract: It has been empirically observed that the spectra of neural network Hessians after training have a bulk concentrated near zero and a few outlier eigenvalues. Moreover, the eigenspaces associated with these outliers have been linked to a low-dimensional subspace in which most of the training occurs, and this implicit low-dimensional structure has been used as a heuristic explanation for the success of high-dimensional classification. We will describe recent rigorous results in this direction for the Hessian spectrum over the course…
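The bulk-plus-outliers picture can be illustrated with a classical spiked random matrix (a sketch only, not the talk's neural-network setting; the dimension, spike strength, and rank-one form are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical dimension, kept small for speed

# Bulk: a Wigner matrix whose eigenvalues spread over roughly [-2, 2].
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)

# Outlier: a rank-one perturbation of strength theta added to the bulk.
theta = 5.0
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
H = W + theta * np.outer(v, v)

evals, evecs = np.linalg.eigh(H)
outliers = evals[evals > 2.5]  # eigenvalues separated from the bulk edge near 2
# Overlap of the top eigenvector with the planted direction; the classical
# prediction is 1 - 1/theta**2, i.e. the outlier eigenspace tracks the spike.
alignment = (evecs[:, -1] @ v) ** 2
print(outliers, alignment)
```

A single eigenvalue detaches near theta + 1/theta while the rest stay in the bulk, and its eigenvector aligns with the planted direction, mirroring the low-dimensional outlier subspace described above.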


Matrix displacement convexity and intrinsic dimensionality

Yair Shenfeld, Brown University
E18-304

Abstract: The space of probability measures endowed with the optimal transport metric has a rich structure with applications in probability, analysis, and geometry. The notion of (displacement) convexity in this space was discovered by McCann, and forms the backbone of this theory.  I will introduce a new, and stronger, notion of displacement convexity which operates on the matrix level. The motivation behind this definition is to capture the intrinsic dimensionality of probability measures which could have very different behaviors along…
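In one dimension the classical (scalar) picture is easy to compute: the optimal map is the monotone rearrangement, and a functional is displacement convex when it is convex along the resulting interpolating paths. A minimal sketch, using two hypothetical Gaussian samples and the standard deviation as the functional (this is the scalar notion the talk strengthens, not the matrix-level one):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two one-dimensional measures, represented by equal-size sorted samples
# (hypothetical Gaussians, purely for illustration).
x = np.sort(rng.normal(-2.0, 0.5, 1000))
y = np.sort(rng.normal(3.0, 1.5, 1000))

def displacement_interp(t):
    """McCann's displacement interpolant at time t.

    In 1D the optimal transport map is the monotone rearrangement,
    so the interpolant is a pointwise convex combination of the
    sorted samples.
    """
    return (1 - t) * x + t * y

# The standard deviation along the path is convex in t; functionals
# that are convex along all such paths are displacement convex.
stds = [displacement_interp(t).std() for t in (0.0, 0.5, 1.0)]
print(stds)
```

For these two Gaussian samples the map is nearly affine, so the standard deviation is essentially linear (hence convex) along the path.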



MIT Statistics + Data Science Center
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764