
IDS.190 – Topics in Bayesian Modeling and Computation: Past Events


November 2019

Artificial Bayesian Monte Carlo Integration: A Practical Resolution to the Bayesian (Normalizing Constant) Paradox

November 13, 2019 @ 4:00 pm - 5:00 pm

Xiao-Li Meng (Harvard University)


Abstract: Advances in Markov chain Monte Carlo in the past 30 years have made Bayesian analysis a routine practice. However, there is virtually no practice of performing Monte Carlo integration from the Bayesian perspective; indeed, this problem has earned the “paradox” label in the context of computing normalizing constants (Wasserman, 2013). We first use the modeling-what-we-ignore idea of Kong et al. (2003) to explain that the crux of the paradox is not with the likelihood theory, which is essentially the same…
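The normalizing-constant problem the abstract refers to is estimating the total mass Z of an unnormalized density. A minimal illustrative sketch (importance sampling, not the talk's Bayesian resolution) looks like this:

```python
import math
import random

random.seed(0)

# Estimate the normalizing constant Z = ∫ q(x) dx of an unnormalized
# density q via importance sampling: Z ≈ (1/n) Σ q(x_i) / g(x_i), x_i ~ g.
def estimate_Z(q, g_sample, g_pdf, n=100_000):
    total = 0.0
    for _ in range(n):
        x = g_sample()
        total += q(x) / g_pdf(x)
    return total / n

# Example: q(x) = exp(-x^2 / 2), whose true Z is sqrt(2*pi) ≈ 2.5066.
q = lambda x: math.exp(-x * x / 2)
# Proposal g = N(0, 1.5^2), easy to sample and evaluate in closed form.
s = 1.5
g_sample = lambda: random.gauss(0.0, s)
g_pdf = lambda x: math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

print(estimate_Z(q, g_sample, g_pdf))  # close to 2.5066
```

The estimator is unbiased, but its quality hinges on how well the proposal g covers q; the talk's point is that such choices are usually made outside any Bayesian framework.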


Probabilistic Inference and Learning with Stein’s Method

November 6, 2019 @ 4:00 pm - 5:00 pm

Lester Mackey (Microsoft Research)


**PLEASE NOTE ROOM CHANGE TO BUILDING 37-212 FOR THE WEEKS OF 10/30 AND 11/6** Abstract: Stein’s method is a powerful tool from probability theory for bounding the distance between probability distributions. In this talk, I’ll describe how this tool designed to prove central limit theorems can be adapted to assess and improve the quality of practical inference procedures. I’ll highlight applications to Markov chain sampler selection, goodness-of-fit testing, variational…


October 2019

Using Bagged Posteriors for Robust Inference

October 30, 2019 @ 4:00 pm

Jonathan Huggins (Boston University)


Abstract: Standard Bayesian inference is known to be sensitive to misspecification between the model and the data-generating mechanism, leading to unreliable uncertainty quantification and poor predictive performance. However, finding generally applicable and computationally feasible methods for robust Bayesian inference under misspecification has proven to be a difficult challenge. An intriguing approach is…
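A minimal sketch of the bagged-posterior idea as the abstract describes it (my own toy version, not the paper's construction): draw bootstrap resamples of the data, compute the posterior on each, and average. Here the model is a conjugate N(mu, 1) likelihood with a N(0, 10^2) prior, so each bootstrap posterior for mu is Gaussian in closed form:

```python
import random
import statistics

random.seed(2)

# Conjugate posterior for mu under a N(mu, 1) likelihood (known unit
# noise variance) and N(0, prior_var) prior.
def posterior_params(data, prior_var=100.0):
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n)
    post_mean = post_var * sum(data)
    return post_mean, post_var

def bayesbag_draws(data, n_boot=500):
    # One draw per bootstrap dataset: resample the data with replacement,
    # then draw from that dataset's posterior; the draws pooled together
    # approximate the bagged posterior.
    draws = []
    for _ in range(n_boot):
        boot = [random.choice(data) for _ in data]
        m, v = posterior_params(boot)
        draws.append(random.gauss(m, v ** 0.5))
    return draws

data = [random.gauss(3.0, 1.0) for _ in range(200)]
draws = bayesbag_draws(data)
print(statistics.mean(draws))  # near the true mean 3.0
```

The bootstrap layer widens the posterior to reflect sampling variability the (possibly misspecified) model ignores, which is the source of the robustness the abstract mentions.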


Esther Williams in the Harold Holt Memorial Swimming Pool: Some Thoughts on Complexity

October 23, 2019 @ 4:00 pm - 5:00 pm

Daniel Simpson (University of Toronto)


Abstract: As data becomes more complex and computational modelling becomes more powerful, we rapidly find ourselves beyond the scope of traditional statistical theory. As we venture beyond the traditional thunderdome, we need to think about how to cope with this additional complexity in our model building. In this talk, I will talk about a few techniques that are useful when specifying prior distributions and building Bayesian…


Markov Chain Monte Carlo Methods and Some Attempts at Parallelizing Them

October 16, 2019 @ 4:00 pm - 5:00 pm

Pierre E. Jacob (Harvard University)


Abstract: MCMC methods yield approximations that converge to quantities of interest in the limit of the number of iterations. This iterative asymptotic justification is not ideal: it stands at odds with current trends in computing hardware. Namely, it would often be computationally preferable to run many short chains in parallel, but such an approach is flawed because of the so-called “burn-in” bias. This talk will first describe that issue and some known…
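The burn-in bias the abstract describes can be seen in a few lines (a generic demonstration, not Jacob's coupling construction): averaging the endpoints of many short random-walk Metropolis chains targeting N(0, 1), all started far away at x0 = 10, stays biased toward the starting point, while one long chain with its early iterations discarded does not:

```python
import math
import random

random.seed(3)

# Random-walk Metropolis chain targeting the standard normal.
def metropolis_chain(x0, n_steps, step=1.0):
    x = x0
    xs = []
    for _ in range(n_steps):
        prop = x + random.gauss(0, step)
        # Accept with probability min(1, p(prop) / p(x)) for p = N(0, 1).
        if random.random() < min(1.0, math.exp((x * x - prop * prop) / 2)):
            x = prop
        xs.append(x)
    return xs

# 2000 chains of 10 steps each vs. one chain of 20000 steps.
short_avg = sum(metropolis_chain(10.0, 10)[-1] for _ in range(2000)) / 2000
long_chain = metropolis_chain(10.0, 20000)
long_avg = sum(long_chain[5000:]) / len(long_chain[5000:])

print(short_avg)  # still far from the target mean 0: burn-in bias
print(long_avg)   # near 0 after discarding burn-in
```

Throwing more parallel short chains at the problem reduces variance but not this bias, which is exactly why unbiased parallel-friendly constructions are of interest.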


Probabilistic Programming and Artificial Intelligence

October 9, 2019 @ 4:00 pm - 5:00 pm

Vikash Mansinghka (MIT)


Abstract: Probabilistic programming is an emerging field at the intersection of programming languages, probability theory, and artificial intelligence. This talk will show how to use recently developed probabilistic programming languages to build systems for robust 3D computer vision, without requiring any labeled training data; for automatic modeling of complex real-world time series; and for machine-assisted analysis of experimental data that is too small and/or messy for standard approaches from machine learning and…
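The core workflow of probabilistic programming (a generic toy sketch, not the languages or systems in the talk) is: write the generative model as ordinary code, then hand it to a generic inference routine that conditions on data. Here the model is a coin with unknown bias and the inference routine is likelihood weighting:

```python
import random

random.seed(4)

# Generative model: the prior over the latent quantity, written as code.
def model():
    return random.random()  # bias ~ Uniform(0, 1)

# Probability of the observed flips (1 = heads) given the bias.
def likelihood(bias, data):
    p = 1.0
    for flip in data:
        p *= bias if flip else (1.0 - bias)
    return p

# Generic inference: sample latents from the prior, weight by likelihood,
# and return the weighted posterior mean.
def infer_mean(data, n=200_000):
    num = den = 0.0
    for _ in range(n):
        b = model()
        w = likelihood(b, data)
        num += w * b
        den += w
    return num / den

data = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 8 heads in 10 flips
print(infer_mean(data))  # close to the exact posterior mean 9/12 = 0.75
```

The separation of concerns is the point: the model is a plain program, and the same inference code works unchanged for any model written this way.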


Behavior of the Gibbs Sampler in the Imbalanced Case/Bias Correction from Daily Min and Max Temperature Measurements

October 2, 2019 @ 4:00 pm - 5:00 pm

Natesh Pillai (Harvard University)


*Note: The speaker this week will give two shorter talks within the usual session.* Title: Behavior of the Gibbs Sampler in the Imbalanced Case. Abstract: Many modern applications collect highly imbalanced categorical data, with some categories relatively rare. Bayesian hierarchical models combat data sparsity by borrowing information, while also quantifying uncertainty. However, posterior computation presents a fundamental barrier to routine use; a single class of algorithms does not work well in all settings and…
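For reference, the Gibbs sampler under discussion alternates draws from each variable's full conditional. A minimal generic sketch (a bivariate normal toy, not the imbalanced categorical setting of the talk): with correlation rho, the exact conditionals are x | y ~ N(rho*y, 1 - rho^2) and symmetrically for y:

```python
import random
import statistics

random.seed(5)

# Gibbs sampler for a bivariate normal with correlation rho: alternately
# draw each coordinate from its exact full conditional.
def gibbs(rho, n_iter=50_000):
    x = y = 0.0
    xs = []
    s = (1.0 - rho * rho) ** 0.5
    for _ in range(n_iter):
        x = random.gauss(rho * y, s)
        y = random.gauss(rho * x, s)
        xs.append(x)
    return xs

xs = gibbs(rho=0.9)
print(statistics.mean(xs))   # marginal mean, close to 0
print(statistics.stdev(xs))  # marginal sd, close to 1
```

High correlation between the blocks (here rho = 0.9) makes the chain mix slowly; the talk's imbalanced-data setting induces an analogous dependence between latent variables and parameters that degrades mixing.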


September 2019

Probabilistic Modeling meets Deep Learning using TensorFlow Probability

September 18, 2019 @ 4:00 pm - 5:00 pm

Brian Patton (Google AI)


Abstract: TensorFlow Probability provides a toolkit to enable researchers and practitioners to integrate uncertainty with gradient-based deep learning on modern accelerators. In this talk we'll walk through some practical problems addressed using TFP; discuss the high-level interfaces, goals, and principles of the library; and touch on some recent innovations in describing probabilistic graphical models. Time-permitting, we may touch on a couple areas of research interest for the…


Automated Data Summarization for Scalability in Bayesian Inference

September 11, 2019 @ 4:00 pm - 5:00 pm

Tamara Broderick (MIT)


Abstract: Many algorithms take prohibitively long to run on modern, large datasets. But even in complex data sets, many data points may be at least partially redundant for some task of interest. So one might instead construct and use a weighted subset of the data (called a "coreset") that is much smaller than the original dataset. Typically, running algorithms on a much smaller data set will take much less computing time, but…
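The simplest instance of the weighted-subset idea (naive uniform subsampling with upweighting; real coreset constructions choose points and weights far more carefully than this) replaces a large dataset by m points each carrying weight n/m, so weighted sums match full-data sums in expectation:

```python
import random

random.seed(6)

data = [random.gauss(5.0, 2.0) for _ in range(100_000)]

# Naive "coreset": m uniformly chosen points, each with weight n/m.
def weighted_subset(data, m):
    n = len(data)
    return [(random.choice(data), n / m) for _ in range(m)]

full_sum = sum(data)
coreset = weighted_subset(data, m=1_000)
approx_sum = sum(w * x for x, w in coreset)

print(full_sum / len(data))    # full-data mean, close to 5.0
print(approx_sum / len(data))  # 100x-cheaper estimate, also close to 5.0
```

Any downstream quantity built from sums over the data (log-likelihoods in particular) can then be approximated from the small weighted set at a fraction of the cost, which is the scalability gain the abstract describes.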


© MIT Institute for Data, Systems, and Society | 77 Massachusetts Avenue | Cambridge, MA 02139-4307 | 617-253-1764 | Design by Opus