## October 2019

## LIDS Seminar – George Pappas (University of Pennsylvania)

George Pappas (University of Pennsylvania)

32-155

Abstract: TBD

Bio: TBD

The LIDS Seminar Series features distinguished speakers who provide an overview of a research area, as well as exciting recent progress in that area. Intended for a broad audience, seminar topics span the areas of communications, computation, control, learning, networks, probability and statistics, optimization, and signal processing.

## Data-driven Coordination of Distributed Energy Resources

Alejandro Dominguez-Garcia (University of Illinois at Urbana-Champaign)

32-155

The integration of distributed energy resources (DERs), e.g., rooftop photovoltaics installations, electric energy storage devices, and flexible loads, is becoming prevalent. This integration poses numerous operational challenges on the lower-voltage systems to which the DERs are connected, but also creates new opportunities for the provision of grid services. In the first part of the talk, we discuss one such operational challenge—ensuring proper voltage regulation in the distribution network to which DERs are connected. To address this problem, we propose a…

## September 2019

## Power of Experimental Design and Active Learning

Aarti Singh (Carnegie Mellon University)

E18-304

Classical supervised machine learning algorithms focus on the setting where the algorithm has access to a fixed labeled dataset obtained prior to any analysis. In most applications, however, we have control over the data collection process such as which image labels to obtain, which drug-gene interactions to record, which network routes to probe, which movies to rate, etc. Furthermore, most applications face budget limitations on the amount of labels that can be collected. Experimental design and active learning are two…
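To make the contrast with fixed-dataset learning concrete, a standard active-learning baseline is uncertainty sampling: query the label of the unlabeled point the current model is least sure about. A minimal sketch (a common baseline, not necessarily the speaker's method), using a hypothetical 1-D logistic model:

```python
import math

def predict_proba(w, b, x):
    """Logistic model P(y = 1 | x) for a single scalar feature."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def most_uncertain(w, b, pool):
    """Uncertainty sampling: pick the pool point whose predicted
    probability is closest to 0.5 (maximal predictive entropy)."""
    return min(pool, key=lambda x: abs(predict_proba(w, b, x) - 0.5))
```

In a full loop one would query the label of `most_uncertain(...)`, refit the model, and repeat until the labeling budget is exhausted.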

## Dynamic Monitoring and Decision Systems (DyMonDS) Framework for Data-Enabled Integration in Complex Electric Energy Systems

Marija Ilic (MIT)

32-155

In this talk, we introduce a unifying Dynamic Monitoring and Decision Systems (DyMonDS) framework that is based on multi-layered modeling for aggregation and minimal coordination of interactions between the layers of complex electric energy systems. Using this approach, distributed control and optimization problems are formulated so that: (1) the low-level decision-makers optimize cost of local interactions while accounting for their heterogeneous technologies, as well as for their social and risk preferences; and, (2) the higher layer aggregators and coordinators optimize…

## May 2019

## Learning Engines for Healthcare: Using Machine Learning to Transform Clinical Practice and Discovery

Mihaela van der Schaar (University of California, Los Angeles)

32-155

The overarching goal of my research is to develop cutting-edge machine learning, AI, and operations research theory, methods, algorithms, and systems to understand the basis of health and disease; develop methodology to catalyze clinical research; support clinical decisions through individualized medicine; inform clinical pathways; better utilize resources and reduce costs; and inform public health. To do this, I am creating what I call Learning Engines for Healthcare (LEHs). An LEH is an integrated ecosystem that uses machine learning, AI…

## April 2019

## On Coupling Methods for Nonlinear Filtering and Smoothing

Youssef Marzouk (MIT)

32-155

Bayesian inference for non-Gaussian state-space models is a ubiquitous problem with applications ranging from geophysical data assimilation to mathematical finance. We will discuss how deterministic couplings between probability distributions enable new solutions to this problem. We first consider filtering in high-dimensional models with nonlinear (potentially chaotic) dynamics and sparse observations in space and time. While the ensemble Kalman filter (EnKF) yields robust ensemble approximations of the filtering distribution in this setting, it is limited by linear forecast-to-analysis transformations. To generalize…
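For context, the "linear forecast-to-analysis transformation" of the stochastic EnKF is the affine map applied to every forecast ensemble member; a generic textbook form (standard background, not specific to this talk) is

```latex
% Stochastic EnKF analysis step: each forecast member x_f^{(i)} is moved
% by the same affine transformation built from ensemble statistics.
x_a^{(i)} = x_f^{(i)} + \widehat{K}\left(y + \epsilon^{(i)} - H x_f^{(i)}\right),
\qquad
\widehat{K} = \widehat{P}_f H^\top \left(H \widehat{P}_f H^\top + R\right)^{-1},
```

where $\widehat{P}_f$ is the ensemble forecast covariance, $H$ the observation operator, $R$ the observation-noise covariance, and $\epsilon^{(i)} \sim \mathcal{N}(0, R)$ are observation perturbations. The generalization discussed in the talk replaces this affine map with a nonlinear transport map between distributions.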

## Memory-Efficient Adaptive Optimization for Humungous-Scale Learning

Yoram Singer (Google)

32-G449 (Kiva/Patel)

Adaptive gradient-based optimizers such as AdaGrad and Adam are among the methods of choice in modern machine learning. These methods maintain second-order statistics of each model parameter, thus doubling the memory footprint of the optimizer. In behemoth-size applications, this memory overhead restricts the size of the model being used as well as the number of examples in a mini-batch. We describe a novel, simple, and flexible adaptive optimization method with sublinear memory cost that retains the benefits of per-parameter adaptivity…
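To make the memory arithmetic concrete: AdaGrad-style methods keep one accumulated second-moment statistic per parameter entry, i.e. O(nm) extra memory for an n×m matrix parameter. A toy sketch of the sublinear-memory idea (our illustration in the spirit of covering the parameter by rows and columns, not the speaker's exact algorithm) keeps only O(n + m) accumulators:

```python
def adagrad_accumulate(acc, grad):
    """Full AdaGrad bookkeeping: one accumulator per matrix entry (O(n*m))."""
    for i in range(len(grad)):
        for j in range(len(grad[0])):
            acc[i][j] += grad[i][j] ** 2
    return acc

def sketch_accumulate(row_acc, col_acc, grad):
    """Sublinear-memory variant: cover the matrix by rows and columns
    and keep one running max per row and per column (O(n + m))."""
    for i in range(len(grad)):
        for j in range(len(grad[0])):
            # current estimate of the accumulated second moment at (i, j)
            v = min(row_acc[i], col_acc[j]) + grad[i][j] ** 2
            row_acc[i] = max(row_acc[i], v)
            col_acc[j] = max(col_acc[j], v)
    return row_acc, col_acc

def sketch_estimate(row_acc, col_acc, i, j):
    # the min over covering sets upper-bounds the true accumulated statistic
    return min(row_acc[i], col_acc[j])
```

The estimate never undershoots the exact AdaGrad accumulator, so the resulting per-parameter step sizes remain conservative while the optimizer state shrinks from quadratic to linear in the matrix dimensions.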

## Personalized Dynamic Pricing with Machine Learning: High Dimensional Covariates and Heterogeneous Elasticity

Gah-Yi Ban (London Business School)

32-155

We consider a seller who can dynamically adjust the price of a product at the individual customer level, by utilizing information about customers’ characteristics encoded as a $d$-dimensional feature vector. We assume a personalized demand model, parameters of which depend on $s$ out of the $d$ features. The seller initially does not know the relationship between the customer features and the product demand, but learns this through sales observations over a selling horizon of $T$ periods. We prove that the…
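As a stylized illustration of why learning the feature-to-demand map matters (notation ours, not necessarily the paper's model), take a linear personalized demand curve; the revenue-maximizing price then has a closed form:

```latex
% Linear personalized demand at price p for a customer with features x
D(p, x) = \alpha^\top x - (\beta^\top x)\, p,
\qquad
R(p) = p \, D(p, x).
% Setting R'(p) = \alpha^\top x - 2 (\beta^\top x) p = 0 gives the myopic price
p^*(x) = \frac{\alpha^\top x}{2\, \beta^\top x}.
```

Under the sparsity assumption, only $s$ of the $d$ coordinates of the parameters (here $\alpha, \beta$) are relevant, so the seller must learn a sparse model from sales observations while trading off exploration (price experimentation) against exploitation (charging the estimated optimum).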

## March 2019

## Automatic Computation of Exact Worst-Case Performance for First-Order Methods

Julien Hendrickx (UCLouvain)

32-155

Joint work with Adrien Taylor (INRIA) and Francois Glineur (UCLouvain). We show that the exact worst-case performances of a wide class of first-order convex optimization algorithms can be obtained as solutions to semi-definite programs, which provide both the performance bounds and functions on which these are reached. Our formulation is based on a necessary and sufficient condition for smooth (strongly) convex interpolation, allowing for a finite representation for smooth (strongly) convex functions in this context. These results allow improving the…
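As a concrete instance of the interpolation conditions in the smooth convex case ($\mu = 0$): a finite set of triples $\{(x_i, g_i, f_i)\}$ can be interpolated by some $L$-smooth convex function $f$ with $f(x_i) = f_i$ and $\nabla f(x_i) = g_i$ if and only if, for all pairs $i, j$,

```latex
f_i \;\ge\; f_j + \langle g_j,\, x_i - x_j \rangle + \frac{1}{2L}\, \| g_i - g_j \|^2 .
```

The worst-case performance of, say, $N$ steps of a fixed first-order method then becomes a maximization of $f(x_N) - f(x^*)$ over the scalars $f_i$ and the Gram matrix of the vectors $\{x_i, g_i\}$, subject to these (linear-in-the-Gram-matrix) constraints and the method's update rules — a semidefinite program whose optimal solution also yields an explicit worst-case function.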

## February 2019

## Coded Computing: A Transformative Framework for Resilient, Secure, and Private Distributed Learning

Salman Avestimehr (University of Southern California)

32-155

This talk introduces "Coded Computing", a new framework that brings concepts and tools from information theory and coding into distributed computing to mitigate several performance bottlenecks that arise in large-scale distributed computing and machine learning, such as resiliency to stragglers and the bandwidth bottleneck. Furthermore, coded computing can enable (information-theoretically) secure and private learning over untrusted workers, which is gaining increasing importance in various application domains. In particular, we present CodedPrivateML for distributed learning, which keeps both the data and the…
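A toy sketch of the straggler-resilience idea (the classic coded matrix-multiplication example often used to introduce this line of work, not CodedPrivateML itself): split the matrix into two blocks and give a third worker their sum, so the full product is recoverable from any two of the three workers — the slowest one can simply be ignored.

```python
def matvec(A, x):
    """Plain matrix-vector product on nested lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def encode(A1, A2):
    """Tasks for 3 workers: A1, A2, and the parity block A1 + A2."""
    parity = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
    return [A1, A2, parity]

def decode(results):
    """results: dict worker_id -> partial product; ANY 2 of 3 suffice
    to recover the stacked product [A1 x ; A2 x]."""
    if 0 in results and 1 in results:
        return results[0] + results[1]
    if 0 in results:  # recover A2 x = (A1 + A2) x - A1 x
        return results[0] + [p - a for p, a in zip(results[2], results[0])]
    # recover A1 x = (A1 + A2) x - A2 x
    return [p - b for p, b in zip(results[2], results[1])] + results[1]
```

This is a rate-2/3 code over the computation: one extra worker buys tolerance to one straggler, and the same MDS-style idea scales to (n, k) configurations.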
