LIDS & Stats Tea Talks: Past Events

February 2021

Generative Adversarial Training for Gaussian Mixture Models

February 17, 2021 @ 4:00 pm - 5:00 pm

Farzan Farnia (LIDS)

Zoom

ABSTRACT Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image and text data, they perform suboptimally on multi-modal benchmarks such as Gaussian mixture models (GMMs). In this talk, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the…
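For context, the zero-sum game referenced above is, in its standard GAN form, the following minimax problem (this is the generic GAN objective, not the specific GAT-GMM objective, which is cut off in the abstract):

```latex
\min_{G}\ \max_{D}\;
  \mathbb{E}_{x \sim P_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim P_{z}}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Here the generator G maps noise z to samples and the discriminator D tries to distinguish generated samples from real ones; GAT-GMM instantiates this game for Gaussian mixture targets.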

Localization, Uniform Convexity, and Star Aggregation

February 10, 2021 @ 4:00 pm - 5:00 pm

Suhas Vijaykumar (MIT Sloan)

Zoom

ABSTRACT Offset Rademacher complexities have been shown to imply sharp, data-dependent upper bounds for the square loss in a broad class of problems including improper statistical learning and online learning. We show that in the statistical setting, the offset complexity upper bound can be generalized to any loss satisfying a certain uniform curvature condition; this condition is shown to also capture exponential concavity and self-concordance, uniting several apparently disparate results. By a unified geometric argument, these bounds translate to improper…
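As a reference for the terminology, one common definition of the offset Rademacher complexity (a standard form from the statistical-learning literature, not necessarily the exact one used in the talk) is

```latex
\widehat{\mathcal{R}}^{\,\mathrm{off}}_{n}(\mathcal{F}; c)
  \;=\; \mathbb{E}_{\sigma}\!\left[\sup_{f \in \mathcal{F}}
  \frac{1}{n}\sum_{i=1}^{n}\Bigl(\sigma_i\, f(x_i) - c\, f(x_i)^{2}\Bigr)\right],
```

where the σ_i are i.i.d. Rademacher signs and c > 0 is the offset parameter; the negative quadratic term is what produces the localization referred to in the title.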

December 2020

LIDS & Stats Tea Talk – Xiang Cheng (LIDS)

December 9, 2020 @ 4:00 pm - 4:30 pm

Xiang Cheng (LIDS)

Online

Tea talks are informal 20-minute chalk talks for sharing ideas and raising awareness of topics that may interest the LIDS and Stats audience. If you are interested in presenting at an upcoming tea talk, please email lids_stats_tea@mit.edu.

Train Simultaneously, Generalize Better: Stability of Gradient-Based Minimax Learners

December 2, 2020 @ 4:00 pm - 4:30 pm

Farzan Farnia (LIDS)

Online

ABSTRACT The success of minimax learning problems, such as generative adversarial networks (GANs) and adversarial training, has been observed to depend on the minimax optimization algorithm used for training. This dependence is commonly attributed to the convergence speed and robustness properties of the underlying optimization algorithm. In this talk, we present theoretical and numerical results indicating that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model. To this end, we analyze the…
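The "train simultaneously" in the title refers to updating both players at once. As a sketch of the setting (standard simultaneous gradient descent-ascent; the talk's exact algorithms and stability analysis are cut off above), for a minimax objective f(θ, w):

```latex
\theta_{t+1} = \theta_t - \eta\,\nabla_{\theta} f(\theta_t, w_t),
\qquad
w_{t+1} = w_t + \eta\,\nabla_{w} f(\theta_t, w_t).
```

This contrasts with schemes that (approximately) solve the inner maximization before each update of θ.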

November 2020

Sensor-based Control for Fast and Agile Aerial Robotics

November 18, 2020 @ 4:00 pm - 4:30 pm

Ezra Tal (LIDS)

Online

ABSTRACT In recent years, autonomous unmanned aerial vehicles (UAVs) that can execute aggressive (i.e., fast and agile) maneuvers have attracted significant attention. We focus on the design of control algorithms for accurate tracking of such maneuvers. This problem is complicated by aerodynamic effects that significantly impact vehicle dynamics at high speeds; typical multicopter controllers that operate at low speeds may neglect vehicle aerodynamics altogether. We propose a sensor-based approach to account for high-speed aerodynamics. Our controller directly…
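To make the role of aerodynamics concrete, a generic multicopter translational model (a textbook-style illustration, not the specific model from the talk) is

```latex
m\,\dot{\mathbf{v}} \;=\; m\,\mathbf{g} \;+\; R\,\mathbf{f}_{T} \;+\; \mathbf{f}_{a}(\mathbf{v}),
```

where R f_T is the rotor thrust expressed in the world frame and f_a(v) collects aerodynamic forces such as drag, which grow with airspeed. Low-speed controllers often take f_a ≈ 0, an approximation that breaks down during fast, agile maneuvers.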

Personalized Federated Learning: A Model-Agnostic Meta-Learning Approach

November 4, 2020 @ 4:00 pm - 4:30 pm

Alireza Fallah (LIDS)

Online

ABSTRACT In Federated Learning, we aim to train models across multiple computing units (users), where users can communicate only with a common central server and never exchange their data samples. This mechanism exploits the computational power of all users and lets each user obtain a richer model, since the models are trained over a larger set of data points. However, this scheme only develops a common output for all users, and therefore it does not adapt the model to each…
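One common way to write the personalized, model-agnostic meta-learning objective (a standard formulation offered here for context; the abstract is truncated before the talk's exact objective) is

```latex
\min_{w}\ \frac{1}{N}\sum_{i=1}^{N} F_i\!\bigl(w - \alpha\,\nabla F_i(w)\bigr),
```

where F_i is user i's local loss and α a local step size: the shared model w is trained so that it performs well after each user takes one (or a few) gradient steps on their own data.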

October 2020

Regulating Algorithmic Filtering on Social Media

October 28, 2020 @ 4:00 pm - 4:30 pm

Sarah Cen (LIDS)

Online

ABSTRACT Social media platforms moderate content using a process known as algorithmic filtering (AF). While AF has the potential to greatly improve the user experience, it has also drawn intense scrutiny for its roles in, for example, spreading fake news, amplifying hate speech, and facilitating digital redlining. However, regulating AF can harm the social media ecosystem by interfering with personalization, lowering profits, and restricting free speech. We are interested in whether it is possible to design…

Kernel Approximation Over Algebraic Varieties

October 21, 2020 @ 4:00 pm - 4:30 pm

Jason Altschuler (LIDS)

Online

ABSTRACT Low-rank approximation of the Gaussian kernel is a core component of many data-science algorithms. Often the approximation is desired over an algebraic variety (e.g., in problems involving sparse data or low-rank data). Can better approximations be obtained in this setting? In this talk, I’ll show that the answer is yes: the improvement is exponential and controlled by the variety’s Hilbert dimension. Joint work with Pablo Parrilo.

BIOGRAPHY Jason is a PhD student in LIDS, advised by Pablo Parrilo. His…
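A small numerical sketch of the question posed above (a hypothetical example; the bandwidth, the 1-sparse "variety", and the rank tolerance are choices made here for illustration, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20

def gaussian_kernel(X, bandwidth=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)
    dists = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-dists / (2.0 * bandwidth**2))

def numerical_rank(K, tol=1e-6):
    # Number of eigenvalues above a relative tolerance.
    eig = np.linalg.eigvalsh(K)
    return int(np.sum(eig > tol * eig.max()))

# Generic data versus 1-sparse data (a union of coordinate axes,
# which is an algebraic variety).
X_generic = rng.normal(size=(n, d))
X_sparse = np.zeros((n, d))
X_sparse[np.arange(n), rng.integers(0, d, size=n)] = rng.normal(size=n)

print("numerical rank, generic data: ", numerical_rank(gaussian_kernel(X_generic)))
print("numerical rank, 1-sparse data:", numerical_rank(gaussian_kernel(X_sparse)))
```

The comparison is meant only to illustrate the question in the abstract: how much smaller can the rank of a Gaussian-kernel approximation be when the data are constrained to a variety?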

Provably Faster Convergence of Adaptive Gradient Methods

October 14, 2020 @ 4:00 pm - 4:30 pm

Jingzhao Zhang (LIDS)

Online

ABSTRACT While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD on important tasks such as training NLP models. The settings under which SGD performs poorly compared to adaptive methods are not yet well understood; indeed, recent theoretical progress shows that SGD is minimax optimal in canonical settings. In this talk, we provide empirical and theoretical evidence that a different smoothness condition or a heavy-tailed distribution…
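For reference, the Adam update mentioned above, in its standard bias-corrected form (the talk's proposed smoothness and noise conditions are cut off in the abstract), is

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t,\qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^{2},\\
\hat m_t = \frac{m_t}{1-\beta_1^{t}},\quad
\hat v_t = \frac{v_t}{1-\beta_2^{t}},\qquad
\theta_{t+1} = \theta_t - \eta\,\frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon}.
```

The coordinate-wise scaling by the square root of the second-moment estimate is what distinguishes adaptive methods from plain SGD.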

Fast Learning Guarantees for Weakly Supervised Learning

October 7, 2020 @ 4:00 pm - 4:30 pm

Joshua Robinson (CSAIL)

Online

ABSTRACT We study generalization properties of weakly supervised learning, that is, learning where only a few true labels are available for a task of interest but many more “weak” labels are available. In particular, we show that embeddings trained using only weak labels can be fine-tuned on the downstream task of interest at the fast learning rate of O(1/n), where n denotes the number of labeled data points for the downstream task. This acceleration sheds light on the sample efficiency…
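A minimal sketch of the two-stage pipeline described above (all data, model choices, and hyperparameters here are hypothetical, chosen only to illustrate "pretrain on weak labels, then fine-tune a head on few true labels"):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
d, n_weak, n_true, n_test = 30, 5000, 50, 1000
w_star = rng.normal(size=d)

def sample(n, label_noise=0.0):
    # Synthetic linearly separable task; noisy thresholds play the role of weak labels.
    X = rng.normal(size=(n, d))
    y = (X @ w_star + rng.normal(scale=label_noise, size=n)) > 0
    return X, y

X_weak, y_weak = sample(n_weak, label_noise=2.0)   # plentiful, noisy "weak" labels
X_true, y_true = sample(n_true)                    # scarce true labels
X_test, y_test = sample(n_test)                    # held-out evaluation set

# Step 1: representation trained using weak labels only.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_weak, y_weak)

def embed(X):
    # Hidden-layer (ReLU) activations of the weakly supervised network.
    return np.maximum(X @ net.coefs_[0] + net.intercepts_[0], 0.0)

# Step 2: linear head fine-tuned on the n_true true-labeled points.
head = LogisticRegression(max_iter=1000).fit(embed(X_true), y_true)
print("held-out accuracy:", head.score(embed(X_test), y_test))
```

The point of the sketch is the division of labor: the representation consumes the plentiful weak labels, while the scarce true labels only have to fit a linear head, which is the fine-tuning step the abstract's O(1/n) rate refers to.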
