
Past Events


March 2018

When Inference is Tractable

March 16 @ 11:00 am - 12:00 pm

David Sontag (MIT)

MIT Building E18, Room 304

Abstract: A key capability of artificial intelligence will be the ability to reason about abstract concepts and draw inferences. Where data is limited, probabilistic inference in graphical models provides a powerful framework for performing such reasoning, and can even be used as modules within deep architectures. But when is probabilistic inference computationally tractable? I will present recent theoretical results that substantially broaden the class of provably tractable models by exploiting model stability (Lang, Sontag, Vijayaraghavan, AISTATS ’18), structure in…
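Tree-structured graphical models are the canonical tractable case: variable elimination along the tree computes exact marginals in time linear in the number of variables, whereas brute-force summation is exponential. A minimal sketch on a three-variable binary chain (the potentials below are illustrative, not from the talk):

```python
from itertools import product

# Chain MRF x1 - x2 - x3 with binary variables and one pairwise potential per edge.
# Exact inference (here: the marginal of x3) is tractable because the graph is a tree.
psi = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}  # shared edge potential (illustrative)

def brute_marginal_x3(v):
    """Brute force: sum over all 2^3 joint configurations with x3 fixed to v."""
    return sum(psi[(x1, x2)] * psi[(x2, x3)]
               for x1, x2, x3 in product((0, 1), repeat=3) if x3 == v)

def eliminate_marginal_x3(v):
    """Variable elimination along the chain: cost linear in the chain length."""
    m12 = {x2: sum(psi[(x1, x2)] for x1 in (0, 1)) for x2 in (0, 1)}  # eliminate x1
    return sum(m12[x2] * psi[(x2, v)] for x2 in (0, 1))               # eliminate x2

Z = brute_marginal_x3(0) + brute_marginal_x3(1)
print([brute_marginal_x3(v) / Z for v in (0, 1)])      # [0.4, 0.6]
print([eliminate_marginal_x3(v) / Z for v in (0, 1)])  # identical
```

For this chain, both routes give the marginal P(x3) = [0.4, 0.6]; the elimination route never materializes the full joint table.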


The Power of Multiple Samples in Generative Adversarial Networks

March 13 @ 3:00 pm - 4:00 pm

Sewoong Oh (University of Illinois, Urbana-Champaign)


Abstract: We bring the tools from Blackwell’s seminal 1953 result on comparing two stochastic experiments to shine new light on a modern application of great interest: Generative Adversarial Networks (GANs). Binary hypothesis testing is at the center of training GANs, where a trained neural network (called a critic) determines whether a given sample is from the real data or the generated (fake) data. By jointly training the generator and the critic, the hope is that eventually, the trained…
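The binary hypothesis test at the heart of GAN training has a closed-form ideal: for known real and fake densities p and q, the Bayes-optimal critic outputs p(x) / (p(x) + q(x)). A minimal sketch (the two Gaussians below are my own illustrative choice, not from the talk):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def optimal_critic(x, p_real, p_fake):
    """Bayes-optimal discriminator: probability that x came from the real distribution."""
    pr, pf = p_real(x), p_fake(x)
    return pr / (pr + pf)

# Real data ~ N(0, 1), fake data ~ N(2, 1) (illustrative choice).
p_real = lambda x: normal_pdf(x, 0.0, 1.0)
p_fake = lambda x: normal_pdf(x, 2.0, 1.0)

print(optimal_critic(1.0, p_real, p_fake))  # 0.5: at the midpoint the critic is maximally uncertain
print(optimal_critic(0.0, p_real, p_fake))  # ~0.88: near the real mean, the critic says "real"
```

As the generator improves, p_fake approaches p_real and this optimal critic is pushed toward 1/2 everywhere, which is the fixed point GAN training hopes to reach.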


Statistical estimation under group actions: The Sample Complexity of Multi-Reference Alignment

March 9 @ 11:00 am - 12:00 pm

Afonso Bandeira (NYU)

MIT Building E18, Room 304

Abstract: Many problems in signal/image processing and computer vision amount to estimating a signal, image, or three-dimensional structure/scene from corrupted measurements. A particularly challenging form of measurement corruption is latent transformations of the underlying signal to be recovered. Many such transformations can be described as a group acting on the object to be recovered. Examples include the Simultaneous Localization and Mapping (SLAM) problem in Robotics and Computer Vision, where pictures of a scene are obtained from different positions and orientations; Cryo-Electron…
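The core difficulty already appears in the simplest instance of multi-reference alignment: recovering a 1-D signal from copies that have each been cyclically shifted by an unknown amount. A noiseless toy sketch (the signal and shifts below are made up for illustration) aligns each copy to a reference by maximizing circular correlation, then averages:

```python
def cyclic_shift(x, s):
    """Cyclically shift list x to the right by s positions."""
    return x[-s:] + x[:-s] if s else list(x)

def align(y, ref):
    """Undo the unknown cyclic shift of y by maximizing correlation with ref."""
    n = len(y)
    best = max(range(n), key=lambda s: sum(a * b for a, b in zip(cyclic_shift(y, s), ref)))
    return cyclic_shift(y, best)

signal = [3.0, 1.0, 0.0, 0.0, 2.0]                           # unknown ground truth
observations = [cyclic_shift(signal, s) for s in (0, 1, 3, 4)]  # unknown group actions

ref = observations[0]
aligned = [align(y, ref) for y in observations]
estimate = [sum(col) / len(aligned) for col in zip(*aligned)]
print(estimate)  # recovers the signal, up to a global cyclic shift
```

With noise added to each observation, naive pairwise alignment degrades at low signal-to-noise ratio, which is exactly the regime whose sample complexity the talk analyzes.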


Women in Data Science (WiDS) – Cambridge, MA

March 5

Microsoft NERD Center

This one-day, multi-city technical conference is organized and hosted by MIT IDSS, Harvard IACS, and Microsoft NERD (in conjunction with WiDS Stanford).


One- and two-sided composite-composite tests in Gaussian mixture models

March 2 @ 11:00 am - 12:00 pm

Alexandra Carpentier (Otto von Guericke Universitaet)

MIT Building E18, Room 304

Abstract: Finding an efficient test for a testing problem is often linked to the problem of estimating a given function of the data. When this function is not smooth, it is necessary to approximate it cleverly in order to build good tests. In this talk, we will discuss two specific testing problems in Gaussian mixture models. In both, the aim is to test the proportion of null means. The aforementioned link between sharp approximation rates of non-smooth objects and minimax testing…

February 2018

Safe Learning in Robotics

February 27 @ 3:00 pm - 4:00 pm

Claire Tomlin (University of California, Berkeley)


Abstract: A great deal of research in recent years has focused on robot learning. In many applications, guarantees that specifications are satisfied throughout the learning process are paramount. For the safety specification, we present a controller synthesis technique based on the computation of reachable sets using optimal control. We show recent results in system decomposition to speed up this computation, and how offline computation may be used in online applications. We then present a method combining reachability with machine learning,…


Provably Secure Machine Learning

February 26 @ 4:00 pm - 5:00 pm

Jacob Steinhardt (Stanford)

32-G449 (Kiva/Patel)

Abstract: The widespread use of machine learning systems creates a new class of computer security vulnerabilities where, rather than attacking the integrity of the software itself, malicious actors exploit the statistical nature of the learning algorithms. For instance, attackers can add fake data (e.g. by creating fake user accounts), or strategically manipulate inputs to the system once it is deployed. So far, attempts to defend against these attacks have focused on empirical performance against known sets of attacks. I will argue that…
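The data-poisoning threat in its simplest form: a handful of injected points can move a non-robust statistic arbitrarily far, while a robust one barely budges. A toy sketch (the numbers below are made up for illustration):

```python
import statistics

# Honest measurements clustered near 1.0.
clean = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.05, 0.95]
poison = [100.0, 100.0]  # two injected points, e.g. from fake user accounts
poisoned = clean + poison

# The mean is dragged far from the truth; the median barely moves.
print(statistics.mean(clean), statistics.mean(poisoned))      # 1.0 vs 20.8
print(statistics.median(clean), statistics.median(poisoned))  # 1.0 vs 1.025
```

Provable (rather than empirical) defenses aim to certify a bound on how much *any* bounded fraction of adversarial points can shift the learned estimate, instead of testing against a fixed list of known attacks.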


Optimization’s Implicit Gift to Learning: Understanding Optimization Bias as a Key to Generalization

February 23 @ 11:00 am - 12:00 pm

Nathan Srebro-Bartom (TTI-Chicago)

MIT Building E18, Room 304

Abstract: It is becoming increasingly clear that the implicit regularization afforded by optimization algorithms plays a central role in machine learning, especially when using large, deep neural networks. We have a good understanding of the implicit regularization afforded by stochastic approximation algorithms, such as SGD, and as I will review, we understand and can characterize the implicit bias of different algorithms, and can design algorithms with specific biases. But in this talk I will focus on implicit biases…
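One of the cleanest instances of such implicit bias: gradient descent on an underdetermined least-squares problem, started at zero, converges to the minimum-ℓ2-norm interpolating solution, because every gradient step lies in the row space of the data. A minimal sketch (the toy system is my own, not from the talk):

```python
# Underdetermined system: one equation, two unknowns, w1 + w2 = 2.
# Loss L(w) = 0.5 * (w1 + w2 - 2)^2, so the gradient is (r, r) with residual r.
# Infinitely many solutions exist (any point on the line w1 + w2 = 2);
# gradient descent from the origin implicitly selects the min-norm one.
w = [0.0, 0.0]  # starting at zero is what induces the minimum-norm bias
lr = 0.1
for _ in range(200):
    r = w[0] + w[1] - 2.0
    w = [w[0] - lr * r, w[1] - lr * r]

print(w)  # -> close to [1.0, 1.0], the minimum-norm point on w1 + w2 = 2
```

Nothing in the loss prefers [1.0, 1.0] over, say, [2.0, 0.0]; the selection comes entirely from the algorithm, which is the sense in which optimization makes an "implicit gift" to generalization.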


Submodular Optimization: From Discrete to Continuous and Back

February 20 @ 3:00 pm - 4:00 pm

Amin Karbasi (Yale University)


Abstract: Many procedures in statistics and artificial intelligence require solving non-convex problems. Historically, the focus has been to convexify the non-convex objectives. In recent years, however, there has been significant progress in optimizing non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances such as latent variable models, non-negative matrix factorization, robust PCA, matrix completion, etc. Unfortunately, there is no free lunch and it is well known that in general finding the global optimum…
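The classic example of a non-convex problem with a provable guarantee is monotone submodular maximization under a cardinality constraint, where the greedy algorithm achieves a (1 − 1/e) approximation. A minimal sketch for coverage functions (the instance below is illustrative, not from the talk):

```python
def greedy_max_coverage(sets, k):
    """Greedily maximize the (monotone, submodular) coverage function under |S| <= k."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))  # largest marginal gain
        if not (best - covered):
            break  # no set adds anything new
        chosen.append(best)
        covered |= best
    return chosen, covered

# Illustrative instance: pick k = 2 sets covering as many elements as possible.
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
print(len(covered))  # 7: greedy picks {4, 5, 6, 7}, then {1, 2, 3}
```

Diminishing marginal gains (submodularity) are exactly what makes this myopic rule provably near-optimal; the talk's continuous extensions generalize this structure beyond discrete set functions.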


User-friendly guarantees for the Langevin Monte Carlo

February 16 @ 11:00 am - 12:00 pm

Arnak Dalalyan (ENSAE-CREST)

MIT Building E18, Room 304

Abstract: In this talk, I will revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm of sampling from a smooth and (strongly) log-concave density. I will discuss the existing results when the accuracy of sampling is measured in the Wasserstein distance and provide further insights on relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. I will also present non-asymptotic guarantees for the accuracy…
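The algorithm itself is gradient descent on the negative log-density plus injected Gaussian noise: x_{k+1} = x_k − h∇f(x_k) + √(2h)·ξ_k. A minimal sketch sampling from a standard Gaussian target, f(x) = x²/2 (step size and iteration count are illustrative choices, not tuned per the talk's theory):

```python
import random

def langevin_monte_carlo(grad_f, x0, step, n_steps, seed=0):
    """Unadjusted Langevin algorithm: x <- x - step * grad_f(x) + sqrt(2 * step) * N(0, 1)."""
    rng = random.Random(seed)
    x, samples = x0, []
    noise_scale = (2.0 * step) ** 0.5
    for _ in range(n_steps):
        x = x - step * grad_f(x) + noise_scale * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

# Target: standard Gaussian, f(x) = x^2 / 2, so grad_f(x) = x.
samples = langevin_monte_carlo(grad_f=lambda x: x, x0=0.0, step=0.01, n_steps=200000)
burned = samples[2000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
print(mean, var)  # mean near 0, variance near 1
```

The discretization leaves a small bias (here the stationary variance is 1/(1 − h/2) rather than exactly 1); the non-asymptotic guarantees discussed in the talk quantify how this bias and the mixing time scale with the step size and the dimension.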


© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764 | Design by Opus