
December 2019
Inferring the Evolutionary History of Tumors
Simon Tavaré (Columbia University)
E18-304
Abstract: Bulk sequencing of tumor DNA is a popular strategy for uncovering information about the spectrum of mutations arising in the tumor, and is often supplemented by multi-region sequencing, which provides a view of tumor heterogeneity. The statistical issues arise because bulk sequencing makes it difficult to determine sub-clonal frequencies and other quantities of interest. In this talk I will discuss this problem, beginning with its setting in population genetics. The data provide an estimate of the…
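As a toy illustration of the kind of quantity at stake (my own sketch, not material from the talk): under a deliberately simple model with a diploid genome, a heterozygous mutation, and known tumor purity, a subclone's cancer cell fraction can be read off from bulk read counts.

```python
# Toy sketch (not from the talk): estimating a subclone's cancer cell
# fraction (CCF) from bulk-sequencing read counts, assuming a diploid
# genome, a heterozygous mutation, and known tumor purity.
def ccf_estimate(alt_reads, total_reads, purity):
    vaf = alt_reads / total_reads          # variant allele frequency
    # Each mutated cell contributes 1 of its 2 alleles, and only a
    # `purity` fraction of the sequenced cells are tumor cells.
    return min(2.0 * vaf / purity, 1.0)

# A clonal mutation in a pure tumor: VAF ~ 0.5 -> CCF ~ 1.0
print(ccf_estimate(50, 100, purity=1.0))   # 1.0
# The same mutation at 60% purity: VAF ~ 0.3 -> CCF ~ 1.0
print(ccf_estimate(30, 100, purity=0.6))   # 1.0
```

Real analyses must also handle copy-number changes, sequencing noise, and unknown purity, which is where the statistical difficulty mentioned above comes from.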
November 2019
Automated Data Summarization for Scalability in Bayesian Inference
Tamara Broderick (MIT)
E18-304
Abstract: Many algorithms take prohibitively long to run on modern, large data sets. But even in complex data sets, many data points may be at least partially redundant for some task of interest. So one might instead construct and use a weighted subset of the data (called a “coreset”) that is much smaller than the original dataset. Typically running algorithms on a much smaller data set will take much less computing time, but it remains to understand whether the output…
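The weighted-subset idea can be sketched with the simplest possible construction, uniform subsampling with weights n/m (real coreset constructions use smarter, task-aware importance weights; everything below is illustrative, not the construction from the talk):

```python
import math
import random

random.seed(0)

# Full data: Bernoulli observations.
n = 10000
data = [1 if random.random() < 0.3 else 0 for _ in range(n)]

def log_lik(points, weights, p):
    # Weighted Bernoulli log-likelihood: sum_i w_i * log p(x_i | p).
    return sum(w * (x * math.log(p) + (1 - x) * math.log(1 - p))
               for x, w in zip(points, weights))

full = log_lik(data, [1.0] * n, p=0.3)

# "Coreset" via uniform subsampling: m points, each weighted n/m, so
# the weighted log-likelihood is an unbiased estimate of the full one.
m = 500
subset = random.sample(data, m)
approx = log_lik(subset, [n / m] * m, p=0.3)

print(full, approx)  # the two sums are close in relative terms
```

Any inference algorithm whose cost scales with the number of data points can then be run on the 500 weighted points instead of the 10,000 originals; the open question raised in the abstract is how well the output approximates the full-data answer.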
Understanding machine learning with statistical physics
Lenka Zdeborová (Institute of Theoretical Physics, CNRS)
E18-304
Abstract: The affinity between statistical physics and machine learning has a long history, reflected even in machine-learning terminology, part of which is adopted from physics. Current theoretical challenges and open questions about deep learning and statistical learning call for a unified account of the following three ingredients: (a) the dynamics of the learning algorithm, (b) the architecture of the neural networks, and (c) the structure of the data. Most existing theories do not take into account all of those…
SDP Relaxation for Learning Discrete Structures: Optimal Rates, Hidden Integrality, and Semirandom Robustness
Yudong Chen (Cornell University)
E18-304
Abstract: We consider the problems of learning discrete structures from network data under statistical settings. Popular examples include various block models, Z2 synchronization and mixture models. Semidefinite programming (SDP) relaxation has emerged as a versatile and robust approach to these problems. We show that despite being a relaxation, SDP achieves the optimal Bayes error rate in terms of distance to the target solution. Moreover, SDP relaxation is provably robust under the so-called semirandom model, which frustrates many existing algorithms. Our…
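The SDP itself needs a numerical solver, but its simpler spectral cousin fits in a few lines and illustrates the Z2 synchronization setup on a noiseless instance (an illustrative sketch, not the talk's algorithm; the size and the initialization are arbitrary choices of mine):

```python
import random

random.seed(1)

# Z2 synchronization: recover hidden signs z in {-1,+1}^n from the
# matrix of pairwise products Y[i][j] = z[i]*z[j] (noiseless here).
n = 8
z = [random.choice([-1, 1]) for _ in range(n)]
Y = [[z[i] * z[j] for j in range(n)] for i in range(n)]

# Power iteration on Y: a spectral relaxation of the combinatorial
# problem (the SDP in the abstract is a tighter relaxation of the same
# problem). Initialize from a row of Y so the start is not orthogonal
# to the signal.
v = [float(x) for x in Y[0]]
for _ in range(20):
    w = [sum(Y[i][j] * v[j] for j in range(n)) for i in range(n)]
    scale = max(abs(x) for x in w)
    v = [x / scale for x in w]

estimate = [1 if x >= 0 else -1 for x in v]
agree = sum(e == t for e, t in zip(estimate, z))
print(max(agree, n - agree))  # 8: exact recovery up to a global sign
```

The sign vector is only identifiable up to a global flip, hence the `max(agree, n - agree)` accounting; with noisy observations this is where the optimal-rate and robustness questions of the talk begin.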
October 2019
Communicating uncertainty about facts, numbers and science
David Spiegelhalter (University of Cambridge)
32-D643
Abstract: The claim of a ‘post-truth’ society, in which emotional responses trump balanced consideration of evidence, presents a strong challenge to those who value quantitative and scientific evidence: how can we communicate risks and unavoidable scientific uncertainty in a transparent and trustworthy way? Communication of quantifiable risks has been well-studied, leading to recommendations for using an expected frequency format. But deeper uncertainty about facts, numbers, or scientific hypotheses needs to be communicated without losing trust and credibility. This is an empirically…
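The "expected frequency format" mentioned above is easy to illustrate (a toy helper of my own, not from the talk): a small probability is rephrased as an expected count out of a round denominator.

```python
def expected_frequency(prob, denom=1000):
    # Rephrase a probability as an expected frequency: 0.002 becomes
    # "about 2 out of 1000", which risk-communication research suggests
    # is easier to grasp than "0.2%" or "a probability of 0.002".
    k = round(prob * denom)
    return f"about {k} out of {denom}"

print(expected_frequency(0.002))      # about 2 out of 1000
print(expected_frequency(0.2, 100))   # about 20 out of 100
```

Keeping the denominator fixed across the risks being compared is part of the standard recommendation, since switching denominators ("1 in 500" vs "2 in 1000") distorts comparisons.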
Accurate Simulation-Based Parametric Inference in High Dimensional Settings
Maria-Pia Victoria-Feser (University of Geneva)
E18-304
Abstract: Accurate estimation and inference in finite samples is important for decision making in many experimental and social fields, especially when the available data are complex: they may include mixed types of measurements, be dependent in several ways, or contain missing data and outliers. Indeed, the more complex the data (and hence the models), the less accurate asymptotic theory results are in finite samples. This is the case, in particular, with logistic regression, possibly also with random effects…
Towards Robust Statistical Learning Theory
Stanislav Minsker (University of Southern California)
E18-304
Abstract: Real-world data typically do not fit statistical models or satisfy the assumptions underlying the theory exactly, so reducing the number and strictness of these assumptions helps to lessen the gap between the “mathematical” world and the “real” world. The concept of robustness, in particular robustness to outliers, plays a central role in understanding this gap. The goal of the talk is to introduce the principles, and robust algorithms based on these principles, that can be applied in the general framework of statistical…
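One of the standard robust estimators in this line of work is median-of-means; a minimal sketch of the idea (illustrative, not code from the talk):

```python
import random
from statistics import mean, median

random.seed(2)

def median_of_means(xs, k):
    # Split the data into k blocks, average each block, and take the
    # median of the block means. A few wild outliers can corrupt only
    # a few blocks, so the median of the block means stays near the
    # truth even when the plain mean is ruined.
    m = len(xs) // k
    blocks = [xs[i * m:(i + 1) * m] for i in range(k)]
    return median(mean(b) for b in blocks)

data = [random.gauss(0.0, 1.0) for _ in range(1000)]
data[0] = 1e6  # one gross outlier

print(mean(data))                 # ruined by the outlier (~1000)
print(median_of_means(data, 20))  # still close to the true mean 0
```

The plain mean has error driven by the single corrupted point, while the median-of-means estimate depends only on the uncorrupted blocks; this kind of heavy-tail and outlier robustness is what the talk's theory aims to formalize.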
The Planted Matching Problem
Cristopher Moore (Santa Fe Institute)
E18-304
Abstract: What happens when an optimization problem has a good solution built into it, but one that is partly obscured by randomness? Here we revisit a classic polynomial-time problem, the minimum perfect matching problem on bipartite graphs. If the edges have i.i.d. random weights in [0,1], Mézard and Parisi, and then Aldous rigorously, showed that the minimum matching has expected weight zeta(2) = pi^2/6. We consider a “planted” version where a particular matching has weights drawn from an exponential distribution…
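A tiny brute-force instance of the planted model can make the setup concrete (the weight distributions and scaling below are my own illustrative choices, not the regime analyzed in the talk):

```python
import itertools
import random

random.seed(3)

# Planted matching on a complete bipartite graph: the planted edges
# (the identity matching here) get light exponential weights, while
# all other edges get heavier uniform weights.
n = 6
w = [[random.uniform(0.5, 1.5) for _ in range(n)] for _ in range(n)]
for i in range(n):
    w[i][i] = random.expovariate(10.0)  # planted edge, mean 0.1

def cost(perm):
    return sum(w[i][perm[i]] for i in range(n))

# Brute force over all n! matchings; fine for n = 6 (a real solver
# would use the Hungarian algorithm).
best = min(itertools.permutations(range(n)), key=cost)

# With planted edges this light, `best` typically coincides with the
# planted identity matching; the interesting regime is when the two
# weight distributions overlap and recovery becomes only partial.
identity = tuple(range(n))
print(best, cost(best) <= cost(identity))
```

The question in the abstract is exactly this trade-off at scale: how light the planted weights must be, relative to the background, for the minimum matching to recover the planted one.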
September 2019
Frontiers of Efficient Neural-Network Learnability
Adam Klivans (University of Texas at Austin)
E18-304
Abstract: What are the most expressive classes of neural networks that can be learned, provably, in polynomial-time in a distribution-free setting? In this talk we give the first efficient algorithm for learning neural networks with two nonlinear layers using tools for solving isotonic regression, a nonconvex (but tractable) optimization problem. If we further assume the distribution is symmetric, we obtain the first efficient algorithm for recovering the parameters of a one-layer convolutional network. These results implicitly make use of a…
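Isotonic regression, the tractable subroutine mentioned in the abstract, can be solved exactly by the classic pool-adjacent-violators algorithm; a minimal self-contained sketch (standard textbook PAV, not code from the talk):

```python
def isotonic_regression(y):
    # Pool Adjacent Violators: find the nondecreasing sequence closest
    # to y in least squares. Maintain blocks of (sum, count); merge a
    # block into its predecessor whenever their means are out of order.
    blocks = []  # each block: [total, count]
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # each block is fitted by its mean
    return out

print(isotonic_regression([1, 3, 2, 4]))  # [1.0, 2.5, 2.5, 4.0]
```

Because the fit is a simple merge-and-average pass, isotonic regression runs in near-linear time despite the underlying problem being nonconvex in the neural-network parameterization, which is what makes it a useful tool in the learning results above.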
Some New Insights On Transfer Learning
Samory Kpotufe (Columbia University)
E18-304
Abstract: The problem of transfer and domain adaptation is ubiquitous in machine learning and concerns situations where predictive technologies, trained on a given source dataset, have to be transferred to a new target domain that is somewhat related. For example, one might transfer a voice-recognition system trained on American English accents to Scottish accents with minimal retraining. A first challenge is to understand how to properly model the ‘distance’ between source and target domains, viewed as probability distributions over a feature…