
Calendar of Events


Active Learning for Nonlinear System Identification with Guarantees

Horia Mania (LIDS)
Online

ABSTRACT While the identification of nonlinear dynamical systems is a fundamental building block of model-based reinforcement learning and feedback control, its sample complexity is understood only for systems that either have discrete states and actions or can be identified from data generated by i.i.d. random inputs. Nonetheless, many interesting dynamical systems have…

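As background for the baseline setting the abstract contrasts with, here is a minimal sketch (not the talk's method) of identifying a *linear* system x_{t+1} = A x_t + B u_t by least squares from i.i.d. random inputs; the matrices A and B and the noise level are invented for illustration.

```python
import numpy as np

# Illustrative system (not from the talk): a stable 2-state linear system.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])

T = 500
x = np.zeros(2)
Z, Y = [], []
for _ in range(T):
    u = rng.standard_normal(1)                       # i.i.d. random input
    x_next = A @ x + B @ u + 0.01 * rng.standard_normal(2)
    Z.append(np.concatenate([x, u]))                 # regressor [x_t; u_t]
    Y.append(x_next)
    x = x_next

# Least-squares fit: Theta.T approximates the stacked matrix [A  B].
Z, Y = np.array(Z), np.array(Y)
Theta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]
print(np.linalg.norm(A_hat - A))  # estimation error shrinks as T grows
```

The point of the talk's setting is that this passive, i.i.d.-input recipe does not carry over to general nonlinear systems, which is where active input design comes in.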

Fast Learning Guarantees for Weakly Supervised Learning

Joshua Robinson (CSAIL)
Online

ABSTRACT We study generalization properties of weakly supervised learning. That is, learning where only a few true labels are present for a task of interest but many more “weak” labels are available. In particular, we show that embeddings trained using only weak labels can be fine-tuned on the downstream task of interest at the fast…


Provably Faster Convergence of Adaptive Gradient Methods

Jingzhao Zhang (LIDS)
Online

ABSTRACT While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD across important tasks, such as NLP models. The settings under which SGD performs poorly in comparison to adaptive methods are not yet well understood. Instead, recent theoretical progress shows that…

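For readers unfamiliar with the contrast the abstract draws, the following is a minimal sketch (not from the talk) of the SGD and Adam update rules on a toy 1-D quadratic; the hyperparameter values are illustrative defaults, not the talk's.

```python
import math

def grad(x):
    return 2.0 * x  # gradient of the toy objective f(x) = x^2

def sgd(x, lr=0.1, steps=100):
    # Plain (stochastic) gradient descent: step against the raw gradient.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def adam(x, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=100):
    # Adam: step sizes adapt per-coordinate via moment estimates.
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

print(abs(sgd(1.0)))   # both drive the iterate toward the minimum at x = 0
print(abs(adam(1.0)))
```

The talk concerns when and why the adaptive update on the right converges faster than the plain one; the snippet only shows what the two updates are.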

Kernel Approximation Over Algebraic Varieties

Jason Altschuler (LIDS)
Online

ABSTRACT Low-rank approximation of the Gaussian kernel is a core component of many data-science algorithms. Often the approximation is desired over an algebraic variety (e.g., in problems involving sparse data or low-rank data). Can better approximations be obtained in this setting? In this talk, I’ll show that the answer is yes: The improvement is exponential…

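As background for the abstract's setting, here is a minimal sketch (not the talk's method) of low-rank approximation of a Gaussian kernel matrix via truncated eigendecomposition, with sparse synthetic data standing in for data on a variety; all sizes and the bandwidth are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10, 20

# Sparse data points: each lies on a 2-coordinate subspace (a toy "variety").
X = np.zeros((n, d))
for i in range(n):
    idx = rng.choice(d, size=2, replace=False)
    X[i, idx] = rng.standard_normal(2)

# Gaussian (RBF) kernel matrix K_ij = exp(-||x_i - x_j||^2 / 2).
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
K = np.exp(-D2 / 2)

# Best rank-k approximation in Frobenius norm: keep the top-k eigenpairs
# (K is symmetric positive semidefinite, so eigh applies).
w, V = np.linalg.eigh(K)
top = np.argsort(w)[-k:]
K_k = (V[:, top] * w[top]) @ V[:, top].T

err = np.linalg.norm(K - K_k) / np.linalg.norm(K)
print(err)  # relative Frobenius error of the rank-k approximation
```

The talk's question is how much smaller this error can be made when the data is known to lie on an algebraic variety, compared with generic data.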

Regulating Algorithmic Filtering on Social Media

Sarah Cen (LIDS)
Online

ABSTRACT Social media platforms moderate content using a process known as algorithmic filtering (AF). While AF has the potential to greatly improve the user experience, it has also drawn intense scrutiny for its roles in, for example, spreading fake news, amplifying hate speech, and facilitating digital red-lining. However, regulating AF can be harmful to the…



MIT Institute for Data, Systems, and Society
Massachusetts Institute of Technology
77 Massachusetts Avenue
Cambridge, MA 02139-4307
617-253-1764