
LIDS & Stats Tea Talks: Past Events


December 2020

Train Simultaneously, Generalize Better: Stability of Gradient-Based Minimax Learners

December 2, 2020 @ 4:00 pm - 4:30 pm

Farzan Farnia (LIDS)


ABSTRACT The success of minimax learning problems, such as those arising in generative adversarial networks (GANs) and adversarial training, has been observed to depend on the minimax optimization algorithm used for their training. This dependence is commonly attributed to the convergence speed and robustness properties of the underlying optimization algorithm. In this talk, we present theoretical and numerical results indicating that the optimization algorithm also plays a key role in the generalization performance of the trained minimax model. To this end, we analyze the…
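The role of the training algorithm can be seen even in the simplest bilinear minimax problem. A minimal sketch (a textbook example, not the speaker's analysis):

```python
# A minimal sketch (textbook bilinear example, not the speaker's
# analysis): simultaneous gradient descent-ascent (GDA) on
#   min_x max_y f(x, y) = x * y.
def simultaneous_gda(x0, y0, lr=0.1, steps=5):
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        gx, gy = y, x                  # df/dx and df/dy
        # Both players step from the SAME iterate (simultaneous update).
        x, y = x - lr * gx, y + lr * gy
        trajectory.append((x, y))
    return trajectory

# On this problem, plain simultaneous GDA spirals outward: the distance
# to the equilibrium (0, 0) grows by sqrt(1 + lr^2) every step.
traj = simultaneous_gda(1.0, 1.0)
```

This divergence on even a bilinear game is why the choice of minimax learner, and its stability, matters.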


November 2020

Sensor-based Control for Fast and Agile Aerial Robotics

November 18, 2020 @ 4:00 pm - 4:30 pm

Ezra Tal (LIDS)


ABSTRACT In recent years, autonomous unmanned aerial vehicles (UAVs) that can execute aggressive (i.e., fast and agile) maneuvers have attracted significant attention. We focus on the design of control algorithms for accurate tracking of such maneuvers. This problem is complicated by aerodynamic effects that significantly impact vehicle dynamics at high speeds. In contrast, typical multicopter controllers that operate at low speeds may neglect vehicle aerodynamics altogether. We propose a sensor-based approach to account for high-speed aerodynamics. Our controller directly…


Personalized Federated Learning: A Model-Agnostic Meta-Learning Approach

November 4, 2020 @ 4:00 pm - 4:30 pm

Alireza Fallah (LIDS)


ABSTRACT In Federated Learning, we aim to train models across multiple computing units (users), where each user can communicate only with a common central server and users do not exchange their data samples. This mechanism exploits the computational power of all users and allows users to obtain a richer model, as their models are trained over a larger set of data points. However, this scheme only develops a common output for all the users and, therefore, it does not adapt the model to each…
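As a toy illustration of why a single common output can be inadequate, and how a MAML-style local gradient step personalizes it (hypothetical scalar example, not the paper's algorithm):

```python
import numpy as np

# Hypothetical toy example (not the paper's algorithm): contrast a single
# shared model with a MAML-style personalized model on scalar quadratic
# user losses f_i(w) = (w - c_i)^2.
user_optima = np.array([-1.0, 0.0, 3.0])   # each user's ideal parameter

def loss_i(w, c):
    return (w - c) ** 2

def adapted(w, c, alpha=0.4):
    # One local gradient step personalizes the shared model to user i.
    return w - alpha * 2.0 * (w - c)

w_shared = user_optima.mean()              # FedAvg-style common model
avg_shared = np.mean([loss_i(w_shared, c) for c in user_optima])
avg_personal = np.mean([loss_i(adapted(w_shared, c), c) for c in user_optima])
```

With heterogeneous users, the one local step shrinks every user's loss relative to the common model, which is the intuition behind personalizing via meta-learning.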


October 2020

Regulating Algorithmic Filtering on Social Media

October 28, 2020 @ 4:00 pm - 4:30 pm

Sarah Cen (LIDS)


ABSTRACT Social media platforms moderate content using a process known as algorithmic filtering (AF). While AF has the potential to greatly improve the user experience, it has also drawn intense scrutiny for its roles in, for example, spreading fake news, amplifying hate speech, and facilitating digital red-lining. However, regulating AF can be harmful to the social media ecosystem by interfering with personalization, lowering profits, and restricting free speech. We are interested in whether or not it is possible to design…


Kernel Approximation Over Algebraic Varieties

October 21, 2020 @ 4:00 pm - 4:30 pm

Jason Altschuler (LIDS)


ABSTRACT Low-rank approximation of the Gaussian kernel is a core component of many data-science algorithms. Often the approximation is desired over an algebraic variety (e.g., in problems involving sparse data or low-rank data). Can better approximations be obtained in this setting? In this talk, I’ll show that the answer is yes: the improvement is exponential and controlled by the variety’s Hilbert dimension. Joint work with Pablo Parrilo.

BIOGRAPHY Jason is a PhD student in LIDS, advised by Pablo Parrilo. His…
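For context, the object being approximated can be sketched in a few lines: by Eckart–Young, the best rank-k approximation error of a Gaussian kernel matrix is governed by its eigenvalue tail. A sketch assuming generic points (the talk's point is that points on a variety do better):

```python
import numpy as np

# Illustrative sketch only: best rank-k approximation error (via the
# eigenvalue tail) of a Gaussian kernel matrix on generic points.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))                  # 50 points in R^3
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                                    # Gaussian kernel matrix

eigvals = np.sort(np.linalg.eigvalsh(K))[::-1]     # descending spectrum

def rank_k_error(k):
    # Frobenius error of the best rank-k approximation (Eckart-Young).
    return np.sqrt(np.sum(eigvals[k:] ** 2))

errors = [rank_k_error(k) for k in (1, 5, 20)]
```

The error decays rapidly with k; the talk quantifies how much faster this decay is when X lies on an algebraic variety.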


Provably Faster Convergence of Adaptive Gradient Methods

October 14, 2020 @ 4:00 pm - 4:30 pm

Jingzhao Zhang (LIDS)


ABSTRACT While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD across important tasks, such as NLP models. The settings under which SGD performs poorly in comparison to adaptive methods are not yet well understood. On the contrary, recent theoretical progress shows that SGD is minimax optimal under canonical settings. In this talk, we provide empirical and theoretical evidence that a different smoothness condition or a heavy-tailed distribution…
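For reference, the standard Adam update the talk compares against SGD (textbook form, not code from the talk):

```python
import numpy as np

# Textbook Adam update (not the talk's code): per-coordinate
# second-moment estimates rescale the step, which is where adaptivity
# can help under heavy-tailed gradient noise.
def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g        # second-moment estimate
    m_hat = m / (1 - b1 ** t)            # bias corrections
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = w^2 for 100 steps from w = 1.
w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 101):
    g = 2.0 * w                           # gradient of f
    w, m, v = adam_step(w, g, m, v, t)
```

Note that the effective step size is roughly lr regardless of the gradient's magnitude, in contrast to SGD, whose step scales with the raw gradient.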


Fast Learning Guarantees for Weakly Supervised Learning

October 7, 2020 @ 4:00 pm - 4:30 pm

Joshua Robinson (CSAIL)


ABSTRACT We study generalization properties of weakly supervised learning. That is, learning where only a few true labels are present for a task of interest but many more “weak” labels are available. In particular, we show that embeddings trained using weak labels only can be fine-tuned on the downstream task of interest at the fast learning rate of O(1/n) where n denotes the number of labeled data points for the downstream task. This acceleration sheds light on the sample efficiency…
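For context, the "fast rate" refers to the standard learning-theory contrast (notation assumed here, not taken from the talk):

```latex
\mathbb{E}\big[L(\hat f_n)\big] - \inf_f L(f) \;=\;
\begin{cases}
O\!\left(1/\sqrt{n}\right) & \text{typical (``slow'') rate,}\\[2pt]
O\!\left(1/n\right) & \text{fast rate,}
\end{cases}
```

where n is the number of labeled examples for the downstream task.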


September 2020

Active Learning for Nonlinear System Identification with Guarantees

September 30, 2020 @ 4:00 pm - 4:30 pm

Horia Mania (LIDS)


ABSTRACT While the identification of nonlinear dynamical systems is a fundamental building block of model-based reinforcement learning and feedback control, its sample complexity is only understood for systems that either have discrete states and actions or for systems that can be identified from data generated by i.i.d. random inputs. Nonetheless, many interesting dynamical systems have continuous states and actions and can only be identified through a judicious choice of inputs. Motivated by practical settings, we study a class of nonlinear…
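For intuition, a much simpler instance of system identification (hypothetical linear, noiseless example, not the talk's nonlinear setting): fitting dynamics x_{t+1} = A x_t + B u_t by least squares from a trajectory driven by i.i.d. random inputs.

```python
import numpy as np

# Hedged sketch (linear and noiseless, far simpler than the talk's
# setting): identify x_{t+1} = A x_t + B u_t by least squares from a
# single trajectory excited by i.i.d. random inputs.
rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

T = 200
x = np.zeros(2)
states, inputs, nexts = [], [], []
for _ in range(T):
    u = rng.standard_normal(1)
    x_next = A_true @ x + B_true @ u
    states.append(x); inputs.append(u); nexts.append(x_next)
    x = x_next

Z = np.hstack([np.array(states), np.array(inputs)])  # regressors [x_t, u_t]
Y = np.array(nexts)
theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)        # solve Z @ theta ~ Y
A_hat, B_hat = theta[:2].T, theta[2:].T
```

With i.i.d. random inputs the regressors are persistently exciting and the noiseless system is recovered exactly; the talk concerns nonlinear systems where the inputs must instead be chosen actively.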


Towards Data Auctions with Externalities

September 23, 2020 @ 4:00 pm - 4:30 pm

Maryann Rui (LIDS)


ABSTRACT The design of data markets has gained in importance as firms increasingly use predictions from machine learning models to make their operations more effective, yet need to externally acquire the necessary training data to fit such models. This is particularly true in the context of the Internet, where an ever-increasing amount of user data is being collected and exchanged. A property of such markets that has been given limited consideration thus far is the externality faced by a firm…


Solving the Phantom Inventory Problem: Near-optimal Entry-wise Anomaly Detection

September 16, 2020 @ 4:00 pm - 4:30 pm

Tianyi Peng (AeroAstro)


ABSTRACT Tianyi will discuss work on how to achieve the optimal detection rate for detecting anomalies in a low-rank matrix. The concrete application we are studying is a crucial inventory management problem ('phantom inventory') that by some measures costs retailers approximately 4% in annual sales. We observe that this problem can be modeled as a problem of identifying anomalies in a (low-rank) Poisson matrix. State-of-the-art approaches to anomaly detection in low-rank matrices apparently fall short. Specifically,…
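A deterministic toy version of the idea (far simpler than the paper's Poisson model): fit a low-rank model by truncated SVD and flag the entry with the largest residual.

```python
import numpy as np

# Toy sketch (deterministic, not the paper's Poisson setting): plant an
# anomalous entry in a rank-1 matrix, fit a rank-1 model by truncated
# SVD, and flag the entry with the largest residual.
u = np.arange(1.0, 31.0)
v = np.arange(1.0, 41.0)
M = np.outer(u, v) / 100.0          # exactly rank-1, 30 x 40
corrupted = M.copy()
corrupted[3, 7] += 5.0              # planted anomaly ("phantom" entry)

U, s, Vt = np.linalg.svd(corrupted, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])   # best rank-1 reconstruction
residual = np.abs(corrupted - low_rank)
flagged = np.unravel_index(residual.argmax(), residual.shape)
```

Because the anomaly is small relative to the top singular value, the rank-1 fit barely moves and the residual concentrates on the corrupted entry; the paper's contribution is making this kind of entry-wise detection near-optimal in the noisy Poisson setting.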


© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764