## November 2019

## LIDS Seminar – Rayadurgam Srikant (University of Illinois at Urbana-Champaign)

Rayadurgam Srikant (University of Illinois at Urbana-Champaign)

32-155

Abstract and speaker bio TBD. The LIDS Seminar Series features distinguished speakers who provide an overview of a research area, as well as exciting recent progress in that area. Intended for a broad audience, seminar topics span the areas of communications, computation, control, learning, networks, probability and statistics, optimization, and signal processing.

## LIDS Seminar – Sujay Sanghavi (University of Texas at Austin)

Sujay Sanghavi (University of Texas at Austin)

32-155

Abstract and speaker bio TBD.

## October 2019

## The Age of Information in Networks: Moments, Distributions, and Sampling

Roy Yates (Rutgers University)

32-155

We examine a source providing status updates to monitors through a network with state defined by a continuous-time finite Markov chain. Using an age of information (AoI) metric, we characterize timeliness by the vector of ages tracked by the monitors. Based on a stochastic hybrid systems (SHS) approach, we derive first-order linear differential equations for the temporal evolution of both the age moments and a moment generating function (MGF) of the age vector components. We show that the existence of…
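
As context for the age metric (a standard illustration from the AoI literature, not the speaker's SHS derivation): the age at a monitor grows at unit rate between update deliveries and, on delivery, resets to the age of the delivered update. For an M/M/1 FCFS queue with arrival rate $\lambda$, service rate $\mu$, and load $\rho = \lambda/\mu$, this yields the classic average AoI:

```latex
% Age process: \dot{\Delta}(t) = 1 between deliveries; on delivery at t_i of an
% update generated at time u_i, the age resets to \Delta(t_i^+) = t_i - u_i.
\Delta_{M/M/1} \;=\; \frac{1}{\mu}\left(1 + \frac{1}{\rho} + \frac{\rho^2}{1-\rho}\right),
\qquad \rho = \frac{\lambda}{\mu}.
```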

## LIDS Seminar – George Pappas (University of Pennsylvania)

George Pappas (University of Pennsylvania)

32-155

Abstract and speaker bio TBD.

## Data-driven Coordination of Distributed Energy Resources

Alejandro Dominguez-Garcia (University of Illinois at Urbana-Champaign)

32-155

The integration of distributed energy resources (DERs), e.g., rooftop photovoltaics installations, electric energy storage devices, and flexible loads, is becoming prevalent. This integration poses numerous operational challenges on the lower-voltage systems to which the DERs are connected, but also creates new opportunities for the provision of grid services. In the first part of the talk, we discuss one such operational challenge—ensuring proper voltage regulation in the distribution network to which DERs are connected. To address this problem, we propose a…

## September 2019

## Power of Experimental Design and Active Learning

Aarti Singh (Carnegie Mellon University)

E18-304

Classical supervised machine learning algorithms focus on the setting where the algorithm has access to a fixed labeled dataset obtained prior to any analysis. In most applications, however, we have control over the data collection process such as which image labels to obtain, which drug-gene interactions to record, which network routes to probe, which movies to rate, etc. Furthermore, most applications face budget limitations on the amount of labels that can be collected. Experimental design and active learning are two…
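
As a minimal illustration of the pool-based active learning setting the abstract describes (a generic uncertainty-sampling sketch on a hypothetical 1-D toy problem, not the speaker's method), each round queries the label of the pool point the current model is least certain about, then refits:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy pool: 1-D points with true labels given by a threshold at 0.
pool = rng.uniform(-3, 3, size=200)
true_labels = (pool > 0).astype(float)

w, b = 0.0, 0.0          # logistic model parameters
labeled_x, labeled_y = [], []

budget = 15              # label budget: far fewer queries than pool points
for _ in range(budget):
    # Uncertainty sampling: query the pool point whose predicted probability
    # is closest to 0.5 under the current model.
    probs = sigmoid(w * pool + b)
    i = int(np.argmin(np.abs(probs - 0.5)))
    labeled_x.append(pool[i])
    labeled_y.append(true_labels[i])

    # Refit by a few gradient steps on the labeled set (warm-started).
    x = np.array(labeled_x)
    y = np.array(labeled_y)
    for _ in range(200):
        p = sigmoid(w * x + b)
        w -= 0.5 * np.mean((p - y) * x)
        b -= 0.5 * np.mean(p - y)
```

Because queries concentrate near the current decision boundary, the learned threshold `-b/w` homes in on the true boundary with only a handful of labels, in the spirit of the label-efficiency gains the talk analyzes.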

## Dynamic Monitoring and Decision Systems (DyMonDS) Framework for Data-Enabled Integration in Complex Electric Energy Systems

Marija Ilic (MIT)

32-155

In this talk, we introduce a unifying Dynamic Monitoring and Decision Systems (DyMonDS) framework that is based on multi-layered modeling for aggregation and minimal coordination of interactions between the layers of complex electric energy systems. Using this approach, distributed control and optimization problems are formulated so that: (1) the low-level decision-makers optimize cost of local interactions while accounting for their heterogeneous technologies, as well as for their social and risk preferences; and, (2) the higher layer aggregators and coordinators optimize…

## May 2019

## Learning Engines for Healthcare: Using Machine Learning to Transform Clinical Practice and Discovery

Mihaela van der Schaar (University of California, Los Angeles)

32-155

The overarching goal of my research is to develop cutting-edge machine learning, AI and operations research theory, methods, algorithms, and systems to understand the basis of health and disease; develop methodology to catalyze clinical research; support clinical decisions through individualized medicine; inform clinical pathways, better utilize resources & reduce costs; and inform public health. To do this, Prof. van der Schaar is creating what she calls Learning Engines for Healthcare (LEH’s). An LEH is an integrated ecosystem that uses machine learning, AI…

## April 2019

## On Coupling Methods for Nonlinear Filtering and Smoothing

Youssef Marzouk (MIT)

32-155

Bayesian inference for non-Gaussian state-space models is a ubiquitous problem with applications ranging from geophysical data assimilation to mathematical finance. We will discuss how deterministic couplings between probability distributions enable new solutions to this problem. We first consider filtering in high-dimensional models with nonlinear (potentially chaotic) dynamics and sparse observations in space and time. While the ensemble Kalman filter (EnKF) yields robust ensemble approximations of the filtering distribution in this setting, it is limited by linear forecast-to-analysis transformations. To generalize…
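
To make the "linear forecast-to-analysis transformation" concrete, here is a minimal stochastic-EnKF analysis step on a linear-Gaussian toy (a textbook sketch, not the coupling construction of the talk); the update maps each forecast member through the same affine map built from the ensemble's sample covariance:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(X, y, H, R):
    """One stochastic EnKF analysis step.

    X: (d, N) forecast ensemble; y: (m,) observation;
    H: (m, d) linear observation operator; R: (m, m) observation noise covariance.
    Returns the analysis ensemble produced by a *linear* forecast-to-analysis
    map -- the limitation that nonlinear transport/coupling methods aim to relax.
    """
    d, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                  # ensemble anomalies
    P = A @ A.T / (N - 1)                       # sample forecast covariance
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain from the ensemble
    # Perturbed observations give the analysis ensemble the correct spread.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)
```

On a scalar Gaussian example (prior N(0, 1), unit observation noise, observation y = 2), the analysis ensemble's mean and variance approach the exact posterior values 1 and 0.5 as the ensemble grows.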

## Memory-Efficient Adaptive Optimization for Humungous-Scale Learning

Yoram Singer (Google)

32-G449 (Kiva/Patel)

Adaptive gradient-based optimizers such as AdaGrad and Adam are among the methods of choice in modern machine learning. These methods maintain second-order statistics of each model parameter, thus doubling the memory footprint of the optimizer. In behemoth-size applications, this memory overhead restricts the size of the model being used as well as the number of examples in a mini-batch. We describe a novel, simple, and flexible adaptive optimization method with sublinear memory cost that retains the benefits of per-parameter adaptivity…
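
To illustrate the memory overhead being attacked (a generic sketch in the spirit of published factored methods such as Adafactor and SM3, not necessarily the speaker's exact algorithm): a full AdaGrad accumulator stores one second-moment statistic per parameter, while a factored variant for an n×m matrix parameter keeps only row and column sums and reconstructs a rank-1 approximation on the fly:

```python
import numpy as np

def adagrad_full(G_sq, grad):
    """Standard AdaGrad: one accumulator entry per parameter (O(n*m) memory)."""
    G_sq += grad ** 2
    return grad / (np.sqrt(G_sq) + 1e-8)

def factored_second_moment(row_acc, col_acc, grad):
    """Factored sketch: keep only row and column sums of squared gradients
    (O(n + m) memory) and reconstruct a rank-1 approximation of the full
    accumulator, as in Adafactor-style methods."""
    row_acc += (grad ** 2).sum(axis=1)
    col_acc += (grad ** 2).sum(axis=0)
    # Rank-1 reconstruction: outer(row, col) / total mass.
    approx = np.outer(row_acc, col_acc) / row_acc.sum()
    return grad / (np.sqrt(approx) + 1e-8)
```

When the squared-gradient matrix is (near) rank-1 — e.g., uniform across the matrix — the reconstruction is exact, so the factored update matches full AdaGrad while storing n + m numbers instead of n·m.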
