2021 – 2022 Academic Year

Thursday, 06/02/2022, Time: 11:00am – 12:15pm PST
The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs

Guido Montúfar, Assistant Professor
Departments of Mathematics and Statistics, UCLA

Location: Franz 2258A

Abstract:

We consider the problem of finding the best memoryless stochastic policy for an infinite-horizon partially observable Markov decision process (POMDP) with finite state and action spaces, with respect to either the discounted or the mean reward criterion. We show that the (discounted) state-action frequencies and the expected cumulative reward are rational functions of the policy, whose degree is determined by the degree of partial observability. We then describe the optimization problem as a linear optimization problem in the space of feasible state-action frequencies subject to polynomial constraints that we characterize explicitly. This allows us to address the combinatorial and geometric complexity of the optimization problem using recent tools from polynomial optimization. In particular, we demonstrate how the partial observability constraints can lead to multiple smooth and non-smooth local optimizers, and we estimate the number of critical points. This is joint work with Johannes Müller.
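To make the central objects concrete, here is a minimal Python sketch (not from the talk) that computes the discounted state-action frequencies and the normalized expected reward of a memoryless stochastic policy for a toy, fully observed MDP; all numbers are hypothetical, and partial observability would add the polynomial constraints the talk analyzes.

```python
import numpy as np

gamma = 0.9
# Toy 2-state, 2-action MDP (hypothetical numbers).
# P[a, s, t] = probability of moving from state s to state t under action a.
P = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.1, 0.9], [0.6, 0.4]]])
r = np.array([[1.0, 0.0],           # r[s, a] = reward in state s, action a
              [0.0, 2.0]])
mu = np.array([0.5, 0.5])           # initial state distribution

def state_action_frequencies(pi):
    """Discounted state-action frequencies of a memoryless policy pi[s, a]."""
    P_pi = np.einsum('sa,ast->st', pi, P)      # state kernel induced by pi
    d = np.linalg.solve(np.eye(2) - gamma * P_pi.T, (1 - gamma) * mu)
    return d[:, None] * pi                     # rho(s, a) = d(s) * pi(a|s)

pi = np.array([[0.7, 0.3], [0.4, 0.6]])        # a memoryless stochastic policy
rho = state_action_frequencies(pi)
print(rho)                                     # sums to 1 by construction
print((rho * r).sum() / (1 - gamma))           # expected discounted reward
```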

Bio:

Guido Montúfar is an Assistant Professor at the Department of Mathematics and the Department of Statistics at UCLA. He studied mathematics and theoretical physics at TU Berlin and completed his PhD at the Max Planck Institute for Mathematics in the Sciences. Guido is interested in mathematical machine learning, especially the interplay of capacity, optimization, and generalization in deep learning. Since 2018 he has been the PI of the ERC Starting Grant project Deep Learning Theory. His research interfaces with information geometry, optimal transport, and algebraic statistics.

Thursday, 05/26/2022, Time: 11:00am – 12:15pm PST
Mixed Convex Exponential Families and Locally Associated Graphical Models

Piotr Zwiernik, Associate Professor
Department of Statistical Sciences, University of Toronto
https://www.statistics.utoronto.ca/people/directories/all-faculty/piotr-zwiernik

Location: Franz 2258A

Abstract:

In exponential families the log-likelihood is a concave function of the canonical parameters. Therefore, any model given by convex constraints in these canonical parameters admits a unique maximum likelihood estimator (MLE). Such models are called convex exponential families. For models that are convex in the mean parameters (e.g., Gaussian covariance graph models), maximum likelihood estimation is much more complicated and the likelihood function typically has many local optima. One solution is to replace the MLE with a so-called dual likelihood estimator, which is uniquely defined and asymptotically has the same distribution as the MLE. In this talk I will consider a much more general setting, where the model is given by convex constraints on some canonical parameters and convex constraints on the remaining mean parameters. We call such models mixed convex exponential families. For these models we propose a two-step optimization procedure that relies on solving two convex problems. We show that this new estimator asymptotically has the same distribution as the MLE. Our work was motivated by locally associated Gaussian graphical models, which form a suitable relaxation of Gaussian totally positive distributions.

This is joint work with Steffen Lauritzen.
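As a numerical illustration of the contrast the abstract draws (a sketch under standard centered-Gaussian conventions, not code from the talk): the Gaussian log-likelihood is concave in the canonical parameter, the precision matrix K, but in general not in the mean parameter, the covariance Σ. The check below evaluates midpoint concavity along random segments of positive definite matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
S = np.cov(rng.standard_normal((3, 200)))   # sample covariance of the data

def ll_canonical(K):      # log-likelihood in the precision (canonical param)
    return np.linalg.slogdet(K)[1] - np.trace(S @ K)

def ll_mean(Sigma):       # the same log-likelihood in the covariance (mean param)
    return -np.linalg.slogdet(Sigma)[1] - np.trace(S @ np.linalg.inv(Sigma))

def random_spd():         # eigenvalues bounded away from zero
    A = rng.standard_normal((3, 3))
    return A @ A.T + 2.0 * np.eye(3)

for ll, name in [(ll_canonical, 'canonical'), (ll_mean, 'mean')]:
    gaps = []
    for _ in range(5000):
        A, B = random_spd(), random_spd()
        gaps.append(ll((A + B) / 2) - (ll(A) + ll(B)) / 2)
    # Concavity forces the gap to be >= 0 (up to rounding); in this
    # experiment the mean parametrization typically produces negative gaps.
    print(name, 'minimum midpoint-concavity gap:', min(gaps))
```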

Bio:

Piotr is an Associate Professor in the Department of Statistical Sciences at the University of Toronto. He received his PhD in Statistics in 2011 from the University of Warwick. He has held research positions at the Mittag-Leffler Institute in Stockholm, IPAM in Los Angeles, TU Eindhoven, UC Berkeley, and the University of Genoa. From 2016 to 2021 he was an Assistant Professor at Universitat Pompeu Fabra in Barcelona. His research interests focus on covariance matrix estimation, graphical models, high-dimensional statistics, and elegant mathematical statistics. He is an associate editor of Biometrika, the Scandinavian Journal of Statistics, and Algebraic Statistics.

Thursday, 05/05/2022, Time: 11:00am – 12:15pm PST
Recent Works on Trustworthy AI and Synthetic Data

Guang Cheng, Professor
UCLA Department of Statistics
http://www.stat.ucla.edu/~guangcheng/

Location: Franz 2258A

Abstract:

This talk serves as a high-level introduction to the Trustworthy AI Lab at UCLA. The target audience is graduate students in Statistics, Mathematics, and Machine Learning at UCLA.

Our lab believes that the next generation of AI is driven by trustworthiness (beyond performance) and built upon synthetic data (on top of real data). Hence, this talk covers two related topics: trustworthy AI and synthetic data generation.

In the first topic, we propose to protect privacy via machine unlearning, and we develop theory-inspired and user-friendly fair classification algorithms. In the second topic, we develop (perhaps the first) statistical learning framework for analyzing synthetic data, and further use recommender systems as an example to illustrate how synthetic data can preserve privacy without sacrificing recommendation accuracy (i.e., the utility of downstream tasks).

Bio:

Guang Cheng is a Professor of Statistics at UCLA. He received his BA in Economics from Tsinghua University in 2002 and his PhD in Statistics from the University of Wisconsin-Madison in 2006. His research interests include trustworthy AI, statistical machine learning, and deep learning theory. Cheng is a Fellow of the Institute of Mathematical Statistics and was a member of the Institute for Advanced Study, Princeton. Additionally, he is a recipient of the NSF CAREER Award and an Adobe Faculty Award, and a Simons Fellow in Mathematics. Please visit his Trustworthy AI Lab at http://www.stat.ucla.edu/~guangcheng/.

Thursday, 04/28/2022, Time: 11:00am – 12:15pm PST
Shepp p-product

Lek-Heng Lim, Professor
Department of Statistics, University of Chicago
https://stat.uchicago.edu/people/profile/lek-heng-lim/

Location: online

Abstract:

In 1962, Shepp famously discovered a product of normal random variables that preserves normality. The Shepp product, which takes the form XY/(X^2 + Y^2)^{1/2}, has since been thoroughly studied and has found numerous connections to other areas of statistics. Among other things, it has an extension to n normal variables, gives a multiplicative analogue of the central limit theorem, and applies unexpectedly to genomics as a test statistic for alignment-free sequence analysis. The Shepp product is evidently the p = 2 special case of XY/(X^p + Y^p)^{1/p}, which we call the Shepp p-product. We will show that the Shepp p-product, particularly when p = 1 and p = ∞ (the latter in a limiting sense), is no less fascinating and applicable than the original p = 2 case. Just as the Shepp 2-product preserves normal distributions, the Shepp 1-product preserves Cauchy distributions, while the Shepp ∞-product preserves exponential distributions. In fact, the converse is also true in an appropriate sense, allowing us to characterize the Cauchy, normal, and exponential distributions as the unique distributions preserved by the Shepp p-product for p = 1, 2, ∞, respectively. We will study the multiplicative analogue of infinite divisibility with respect to the Shepp p-product, establish an asymptotic theory for the Shepp p-product of n i.i.d. random variables, and estimate the rates of convergence in Kolmogorov distance. Alongside our study of convergence rates, we define the domain of normal attraction of extremal distributions and establish a new rate of uniform convergence to the Fréchet and reverse Weibull distributions. Some of our results are new even for the p = 2 case. We will also discuss new applications of the Shepp p-product in statistics, computational biology, and statistical physics. This is joint work with Wenxuan Guo.
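A quick Monte Carlo check of the two preservation properties (a sketch assuming the standard parametrizations; the scale 1/2 for p = 2 is the classical constant in Shepp's result, and for p = 1 it follows from the closure of the Cauchy family under reciprocals and sums):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# p = 2: for X, Y i.i.d. N(0,1), XY/sqrt(X^2 + Y^2) is again normal,
# with standard deviation 1/2.
x, y = rng.standard_normal(n), rng.standard_normal(n)
z2 = x * y / np.sqrt(x ** 2 + y ** 2)
print(stats.kstest(z2, stats.norm(scale=0.5).cdf))

# p = 1: for X, Y i.i.d. standard Cauchy, XY/(X + Y) = 1/(1/X + 1/Y) is
# again Cauchy (reciprocals and sums of Cauchy variables stay Cauchy),
# here with scale 1/2.
x, y = rng.standard_cauchy(n), rng.standard_cauchy(n)
z1 = x * y / (x + y)
print(stats.kstest(z1, stats.cauchy(scale=0.5).cdf))
```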

Bio:

L.-H. Lim is a faculty member in the Computational and Applied Mathematics Initiative at the University of Chicago. His current research is about realizing algebraic varieties and differential manifolds as embedded or quotient spaces of matrices and using numerical linear algebra to compute various quantities of practical interest. He serves on the editorial boards of Forum of Mathematics Pi/Sigma, Linear Algebra and its Applications, Linear and Multilinear Algebra, and Numerical Algorithms. His work has been supported by the AFOSR, DARPA, and NSF.


Thursday, 04/21/2022, Time: 11:00am – 12:15pm PST
Towards practical estimation of Brenier maps

Jonathan Niles-Weed, Assistant Professor of Mathematics and Data Science
Courant Institute and NYU Center for Data Science
https://www.jonathannilesweed.com

Location: Franz 2258A

Abstract:

Given two probability distributions in R^d, a transport map is a function that maps samples from one distribution into samples from the other. For absolutely continuous measures, Brenier proved a remarkable theorem identifying a unique canonical transport map, which is "monotone" in a suitable sense. We study the question of whether this map can be efficiently estimated from samples. The minimax rates for this problem were recently established by Hütter and Rigollet (2021), but the estimator they propose is computationally infeasible in dimensions greater than three. We propose two new estimators, one minimax optimal and one not, which are significantly more practical to compute and implement. The analysis of these estimators is based on new stability results for the optimal transport problem and its regularized variants. Based on joint work with Manole, Balakrishnan, and Wasserman, and with Pooladian.
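Estimation in general dimension is the hard case the talk addresses; in one dimension, however, the Brenier map is simply the monotone quantile-to-quantile map, and a plug-in estimate is immediate. A minimal sketch with hypothetical source and target distributions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2000)              # sample from the source measure
y = rng.exponential(2.0, size=2000)    # sample from the target measure

xs = np.sort(x)                        # empirical quantiles of the source

def T_hat(t):
    """Plug-in monotone (Brenier) map in d = 1: empirical CDF of x
    composed with the empirical quantile function of y."""
    u = np.clip(np.searchsorted(xs, t, side='right') / len(xs), 0.0, 1.0)
    return np.quantile(y, u)

print(T_hat(np.array([-1.0, 0.0, 1.0])))   # increasing by construction
```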

Bio:

Jonathan Niles-Weed is an Assistant Professor of Mathematics and Data Science at the Courant Institute of Mathematical Sciences and the Center for Data Science at NYU, where he is a core member of the Math and Data and STAT groups. Jonathan studies statistics, probability, and the mathematics of data science, with a focus on statistical and computational problems arising from data with geometric structure. Much of his recent work is dedicated to developing a statistical theory of optimal transport. He received his Ph.D. in Mathematics and Statistics from MIT, under the supervision of Philippe Rigollet. His research is supported in part by the National Science Foundation, Google Research, and an Alfred P. Sloan Foundation fellowship.


Thursday, 04/14/2022, Time: 11:00am – 12:15pm PST
Understanding Self-supervised Learning: A Graph Decomposition Perspective

Tengyu Ma, Assistant Professor of Computer Science and Statistics
Stanford University
http://ai.stanford.edu/~tengyuma/

Location: Franz 2258A

Abstract:

Self-supervised learning has made empirical breakthroughs in producing representations that can be applied to a wide range of downstream tasks. In this talk, I will primarily present recent work that analyzes contrastive learning algorithms under realistic assumptions on the data distributions for vision applications. We prove that contrastive learning can be viewed as a parametric version of spectral clustering on a so-called population augmentation graph, analyze the linear separability of the learned representations, and provide sample complexity bounds. I will also briefly discuss two follow-up works that study the performance of self-supervised representations under imbalanced training datasets and shifting test distributions.
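The linear-algebra core of that viewpoint is ordinary spectral clustering. A toy sketch (hypothetical graph, not the population augmentation graph itself): two dense blocks play the role of augmentation-connected classes, and the cluster structure can be read off an eigenvector of the normalized Laplacian.

```python
import numpy as np

# Two dense blocks of nodes (0-4 and 5-9) joined by one weak edge,
# mimicking two classes connected internally by data augmentations.
A = np.zeros((10, 10))
A[:5, :5] = 1.0
A[5:, 5:] = 1.0
A[4, 5] = A[5, 4] = 0.2
np.fill_diagonal(A, 0.0)

d = A.sum(axis=1)
L_sym = np.eye(10) - A / np.sqrt(np.outer(d, d))   # normalized Laplacian
eigvals, eigvecs = np.linalg.eigh(L_sym)           # ascending eigenvalues

# The eigenvector of the second-smallest eigenvalue separates the
# two blocks by sign.
print(np.sign(eigvecs[:, 1]))
```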

Bio:

Tengyu Ma is an assistant professor of Computer Science and Statistics at Stanford University. He received his Ph.D. from Princeton University and B.E. from Tsinghua University. His research interests include topics in machine learning and algorithms, such as deep learning and its theory, non-convex optimization, deep reinforcement learning, representation learning, and high-dimensional statistics. He is a recipient of the NIPS’16 best student paper award, COLT’18 best paper award, ACM Doctoral Dissertation Award Honorable Mention, the Sloan Fellowship, and NSF CAREER Award.


Thursday, 04/07/2022, Time: 11:00am – 12:15pm PST
Statistical Thinking in the Machine Learning Age

Ka Wong, AI Researcher at Google
https://research.google/people/KaWong/

Location: Franz 2258A

Abstract:

A lot of recent progress in machine learning (ML) has been made possible by the large amounts of data collected via Crowdsourcing (e.g., MTurk). On the one hand, Crowdsourcing provides a scalable and economical solution to the insatiable data demands of deep learning. On the other hand, data collected from crowd workers often has questionable quality. The current way of assessing the quality of this data is to measure its accuracy: we look at a picture and a label and decide whether it is a good label or not. This makes the critical assumption that each item (picture, video, text, etc.) has an objectively true label. However, data, as we know, is full of subtleties and ambiguity, and does not lend itself to this type of absolute analysis. I propose a new way of characterizing this class of data using statistical language that embraces uncertainty, such as variance and inter-rater reliability. I will discuss the implications of this in the context of ML evaluation.
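A small illustration of the proposed framing (hypothetical ratings, not data from the talk): instead of scoring each label against a single gold answer, summarize the label distribution per item.

```python
import numpy as np

# ratings[i, j] = binary label that rater j gave to item i (hypothetical).
ratings = np.array([[1, 1, 1],     # unambiguous item
                    [1, 1, 0],     # mild disagreement
                    [1, 0, 0],
                    [0, 0, 0]])

k = ratings.sum(axis=1)            # positive votes per item
m = ratings.shape[1]               # raters per item

p = k / m                          # per-item positive rate
print('per-item rate:    ', p)
print('per-item variance:', p * (1 - p))   # 0 only for unambiguous items

# Fraction of rater pairs that agree on each item: C(k,2) + C(m-k,2)
# agreeing pairs out of C(m,2).
agree = (k * (k - 1) + (m - k) * (m - k - 1)) / (m * (m - 1))
print('pairwise agreement:', agree)
```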

Bio:

Ka Wong received his B.A. in Architecture from UC Berkeley. He studied under Prof. Rick Schoenberg at UCLA and received his PhD in Statistics in 2009. He joined Google after graduation and worked on the evaluation of the Google search engine. During that time, he had a front-row seat witnessing how Crowdsourced data powers decision-making in the industry. He joined Google Research in 2016 to focus on Crowdsourcing research. In particular, he is interested in developing a methodology for characterizing this class of data through a statistical lens. He has since applied this methodology to the evaluation of ML systems in several domains, including question answering, hate speech detection, and object recognition. His two most recent publications appeared in NLP venues.

Thursday, 03/31/2022, Time: 11:00am – 12:15pm PST
Recent Developments in Nonparametric Quantile Regression

Oscar Madrid Padilla, Assistant Professor
UCLA Department of Statistics
https://hernanmp.github.io/

Location: Franz 2258A

Abstract:

In this talk I will focus on some recent developments in nonparametric quantile regression. I will start by providing some results for the quantile sequence model under a convex constraint. I will then explain how this machinery can be used to study quantile trend filtering. Finally, the talk will present some ideas on how to tackle problems with non-convex constraints, illustrated with the quantile version of Dyadic CART.
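All of these estimators minimize the pinball (check) loss under different structural constraints. A minimal sketch (hypothetical data, and a simple polynomial fit standing in for trend filtering or Dyadic CART):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.standard_normal(200)

def pinball(res, tau):
    """Check loss: the tau-quantile analogue of absolute error."""
    return np.mean(np.maximum(tau * res, (tau - 1) * res))

X = np.column_stack([x ** d for d in range(4)])   # cubic basis

def fit_quantile(tau):
    obj = lambda b: pinball(y - X @ b, tau)
    return X @ minimize(obj, np.zeros(4), method='Nelder-Mead').x

q10, q90 = fit_quantile(0.1), fit_quantile(0.9)
print((y < q10).mean(), (y < q90).mean())   # close to 0.1 and 0.9
```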

Bio:

Oscar Madrid is a tenure-track Assistant Professor in the Department of Statistics at the University of California, Los Angeles. Previously, from July 2017 to June 2019, he was a Neyman Visiting Assistant Professor in the Department of Statistics at the University of California, Berkeley. Before that, he earned a Ph.D. in Statistics at The University of Texas at Austin in May 2017 under the supervision of Prof. James Scott.


Thursday, 03/10/2022, Time: 11:00am – 12:00pm PST
DeLeeuw Seminar: Robustness in an uncertain world? Challenges for AI and Data Science in Economic and Social Science Applications

Frauke Kreuter, Professor
University of Maryland and LMU Munich

Location: Luskin Conference Center, Optimist Room

Abstract:

Artificial intelligence (AI) and Big Data offer enormous potential to explore and solve complex societal challenges. In the labor market context, for instance, AI is used to optimize bureaucratic procedures and minimize potential errors in human decisions. AI is also applied to identify patterns in digital data trails, which are created when people use smartphones and IoT devices or browse the internet. Unfortunately, the fact that all of this depends on social and economic contexts is often ignored when AI is used, and the importance of high-quality data is frequently overlooked. There is growing concern about the lack of fairness, an essential criterion for making good use of AI. Fairness in this context means the adequate consideration of different social groups in the underlying data and in pattern recognition.

This lecture outlines the latest developments in the use of AI and Big Data in economic and social research. Frauke Kreuter explains the pitfalls around their application, demonstrates how results depend on human decisions in the data science process, and introduces universal adaptability as one method to overcome shifts in target populations. Time permitting, we will discuss ethics and privacy implications.

Bio:

Professor Frauke Kreuter is Chair of Statistics and Data Science in Social Sciences and the Humanities at LMU Munich and Co-Director of the Social Data Science Center at the University of Maryland, where she holds a professorship in Survey Methodology. She is an elected fellow of the American Statistical Association and received the 2020 Warren Mitofsky Innovators Award of the American Association for Public Opinion Research. Kreuter is also the Founder of the International Program for Survey and Data Science and Co-founder of the Coleridge Initiative. Her research interests are in data quality, sampling strategies and selection bias, measurement error, and, most recently, the consequences of such errors in the use of artificial intelligence.


Thursday, 03/03/2022, Time: 11:00am – 12:00pm PST
Data Splitting

Roshan Joseph, Professor
Industrial and Systems Engineering, Georgia Institute of Technology

Location: Young Hall CS50

Abstract:

For developing statistical and machine learning models, it is common to split the dataset into two parts: training and testing. The training part is used for fitting the model and the testing part for evaluating the performance of the fitted model. The most common strategy for splitting is to randomly sample a fraction of the dataset. In this talk, I will discuss an optimal method for doing this. I will also discuss the optimal ratio for splitting. The talk is based on the following three papers (a baseline sketch follows the links):
https://www.tandfonline.com/doi/full/10.1080/00401706.2021.1921037
https://onlinelibrary.wiley.com/doi/full/10.1002/sam.11574
https://arxiv.org/abs/2202.03326
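For reference, the baseline the talk improves on is the plain uniformly random split; a minimal sketch with hypothetical data (the optimal splitting method itself is in the papers above, not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.standard_normal(500)

# Common practice: a uniformly random 80/20 split. The talk asks how to
# choose both the split itself and the ratio in a principled way.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print('test R^2 under a random split:', model.score(X_te, y_te))
```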

Bio:

Dr. Roshan Joseph is the A. Russell Chandler III Chair and Professor in the Stewart School of Industrial and Systems Engineering at Georgia Tech, Atlanta. He holds a Ph.D. in Statistics from the University of Michigan, Ann Arbor. His research focuses on computational and applied statistics with applications to engineering. He is a recipient of the NSF CAREER Award (2005), the Jack Youden Prize from the ASQ (2005), the Best Paper Award from IIE Transactions (2009), the INFORMS Edelman Laureate designation (2017), the SPES Award from the ASA (2019), the SPAIG Award from the ASA (2020), and the Lloyd S. Nelson Award from the ASQ (2021). He is a Fellow of the ASA and the ASQ. He is currently serving as the Editor-in-Chief of Technometrics.

Thursday, 02/24/2022, Time: 11:00am – 12:00pm PST
Functional-Input Gaussian Processes with Applications to Inverse Scattering Problems

Ying Hung, Professor
Department of Statistics, Rutgers University

Location: Young Hall CS50

Abstract:

Surrogate modeling based on Gaussian processes (GP) has received increasing attention in the analysis of complex problems in science and engineering. Despite extensive studies on GP modeling, developments for functional inputs are scarce. Motivated by an inverse scattering problem, a new class of kernel functions is introduced for GPs with functional inputs. The asymptotic convergence properties of the proposed GP models are derived. In the application to the inverse scattering problem, the functional input associated with the support of the scattering region of interest is identified from a measured far-field pattern.
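A minimal sketch of the basic construction (not the kernel class proposed in the talk): discretize each functional input on a grid, plug a functional distance into a standard RBF kernel, and use the usual GP posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)       # common grid where the inputs are observed

def k_func(f, g, length=0.5):
    """RBF kernel on a discretized L2-type distance between two functions,
    one simple way to build a kernel with functional inputs."""
    return np.exp(-np.mean((f - g) ** 2) / (2 * length ** 2))

# Training inputs f_i(t) = sin(2 pi a_i t) with scalar responses y_i ~ a_i.
a = np.linspace(0.5, 2.0, 20)
F = np.sin(2 * np.pi * np.outer(a, grid))
y = a + 0.05 * rng.standard_normal(20)

K = np.array([[k_func(f, g) for g in F] for f in F])
alpha = np.linalg.solve(K + 1e-3 * np.eye(20), y)   # GP weights, small nugget

f_new = np.sin(2 * np.pi * 1.3 * grid)
k_new = np.array([k_func(f_new, f) for f in F])
print('posterior mean at the new input:', k_new @ alpha)   # roughly 1.3
```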

Bio:

Dr. Hung is a Professor in the Department of Statistics at Rutgers. She graduated from the Georgia Institute of Technology in 2008. She received the NSF CAREER Award and the IMS Tweedie Award in 2014. Her research areas include experimental design, modeling for computer experiments, and uncertainty quantification.

Thursday, 02/17/2022, Time: 11:00am – 12:00pm PST
Inference for the Best Sequence in Order-of-Addition

Robert Mee, Professor of Business Analytics
Haslam College of Business, University of Tennessee

Location: Young Hall CS50

Abstract:

Often the primary objective of order-of-addition experiments is to identify the sequence with the best mean response. Thus, we provide a multiple comparison procedure for identifying all sequences that are not significantly inferior to the best. Simulation is used to determine the multiple-comparison-with-the-best critical values. While the methods apply to any parametric model, certain cases require only a single critical value. We tabulate the required critical values for several popular order-of-addition models when estimation is based on an optimal design. We use examples from the literature to illustrate how model choice impacts the set of sequences that are determined to be not significantly inferior to the best. The open question of similar inference for order-of-addition kriging models will be raised.
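A generic simulation sketch of multiple comparisons with the best in the equal-variance normal case (hypothetical numbers; the talk tabulates critical values for specific order-of-addition models and optimal designs):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, alpha, sigma = 4, 30, 0.05, 1.0   # sequences, replicates, level, sd

# Critical value: (1 - alpha) quantile of max_j Ybar_j - Ybar_1 when all
# means are equal, so the truly best sequence is retained with
# probability at least 1 - alpha.
sims = rng.normal(0.0, sigma / np.sqrt(n), size=(20_000, k))
d = np.quantile(sims.max(axis=1) - sims[:, 0], 1 - alpha)

ybar = np.array([1.00, 0.97, 0.80, 0.55])   # observed sequence means
keep = np.flatnonzero(ybar.max() - ybar <= d)
print('critical value:', round(d, 3))
print('not significantly inferior to the best:', keep)
```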

Bio:

Dr. Robert Mee is the William and Sara Clark Professor of Business Analytics in the Haslam College of Business at the University of Tennessee. Dr. Mee received his B.S. in Management Science from the Georgia Institute of Technology, and his M.S. and Ph.D. in Statistics from Iowa State University. Dr. Mee is an elected fellow of the American Statistical Association and has authored 60 refereed journal articles. He served on Technometrics’ Management Committee for 12 years and as an Associate Editor for 7 years. Currently he serves on the Journal of Quality Technology’s Editorial Board. Dr. Mee’s research interests include design and analysis of experiments and choice-based conjoint analysis. He is the author of A Comprehensive Guide to Factorial Two-Level Experimentation, a monograph published by Springer.

Thursday, 02/10/2022, Time: 11:00am – 12:00pm PST
Clipper: A general statistical framework for p-value-free FDR control in large-scale feature screening

Xinzhou Ge, Postdoctoral Fellow
Department of Statistics, UCLA

Location: Young Hall CS50

Abstract:

Large-scale feature screening is ubiquitous in high-throughput biological data analysis: identifying the features (e.g., genes, mRNA transcripts, and proteins) that differ between conditions from among numerous features measured simultaneously. The false discovery rate (FDR) is the most widely used criterion for ensuring the reliability of screened features. The well-known Benjamini-Hochberg procedure for FDR control requires valid, high-resolution p-values, which are often hard to obtain because they rely on reasonable distributional assumptions or large sample sizes. Motivated by the Barber-Candès procedure, Clipper is a general statistical framework for large-scale feature screening with theoretical FDR control and no p-value requirement. Extensive numerical studies have verified that Clipper is a versatile and effective tool for correcting FDR inflation in multiple bioinformatics applications.
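For context, the p-value-based procedure that Clipper sidesteps is easy to state; here is a standard Benjamini-Hochberg sketch on simulated p-values (Clipper itself works from feature-level contrast scores rather than p-values):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.1):
    """Reject the k smallest p-values, where k is the largest index with
    p_(k) <= k * q / m."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(0)
# 900 null features (uniform p-values) mixed with 100 signals.
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 10.0, size=100)])
print('discoveries:', benjamini_hochberg(p, q=0.1).sum())
```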

Bio:

Xinzhou obtained his Ph.D. in 2021 from the Department of Statistics at UCLA, where he worked with Prof. Jingyi Jessica Li. He received his Bachelor's degree in Statistics from the School of Mathematical Sciences, Peking University, in 2016. After graduating from UCLA, Xinzhou continued working with Prof. Jingyi Jessica Li as a postdoc.

Thursday, 01/27/2022, Time: 11:00am – 12:00pm PST
Statistical Learning and Matching Markets

Xiaowu Dai, Postdoctoral Fellow
EECS and Economics, UC Berkeley

Location: Public Affairs Building 1234

Abstract:

We study the problem of decision-making in the setting of a scarcity of shared resources when the preferences of agents are unknown a priori and must be learned from data. Taking the two-sided matching market as a running example, we focus on the decentralized setting, where agents do not share their learned preferences with a central authority. Our approach is based on the representation of preferences in a reproducing kernel Hilbert space, and a learning algorithm for preferences that accounts for uncertainty due to the competition among the agents in the market. Under regularity conditions, we show that our estimator of preferences converges at a minimax optimal rate. Given this result, we derive optimal strategies that maximize agents’ expected payoffs and we calibrate the uncertain state by taking opportunity costs into account. We also derive an incentive-compatibility property and show that the outcome from the learned strategies has a stability property. Finally, we prove a fairness property that asserts that there exists no justified envy according to the learned strategies. This is a joint work with Michael I. Jordan.
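The stability notion referenced at the end is the classical one for two-sided markets, which deferred acceptance computes when preferences are fully known; the talk's setting is harder precisely because preferences must be learned. A sketch of the known-preferences baseline (Gale-Shapley, not the authors' learning algorithm):

```python
def deferred_acceptance(prop_prefs, rec_prefs):
    """Proposer-optimal stable matching (Gale-Shapley).
    prop_prefs[i]: proposer i's ranked list of receivers;
    rec_prefs[j]:  receiver j's ranked list of proposers."""
    n = len(prop_prefs)
    rank = [{p: r for r, p in enumerate(prefs)} for prefs in rec_prefs]
    match = {}                        # receiver -> current proposer
    next_choice = [0] * n             # next receiver each proposer will try
    free = list(range(n))
    while free:
        i = free.pop()
        j = prop_prefs[i][next_choice[i]]
        next_choice[i] += 1
        if j not in match:
            match[j] = i              # receiver j was unmatched
        elif rank[j][i] < rank[j][match[j]]:
            free.append(match[j])     # j trades up; old partner is free again
            match[j] = i
        else:
            free.append(i)            # j rejects i
    return {v: k for k, v in match.items()}   # proposer -> receiver

props = [[0, 1, 2], [1, 0, 2], [0, 1, 2]]
recs = [[1, 0, 2], [0, 1, 2], [0, 1, 2]]
print(deferred_acceptance(props, recs))
```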

Bio:

Xiaowu Dai is a postdoc in EECS and Economics at UC Berkeley, working with Michael I. Jordan and Lexin Li. He obtained his Ph.D. in Statistics from the University of Wisconsin-Madison, advised by Grace Wahba. He is interested in developing statistical theory and methodology for real-world problems that blend computational, inferential, and economic considerations.

Tuesday, 01/25/2022, Time: 11:00am – 12:00pm PST
Confidence Intervals for Nonparametric Empirical Bayes Analysis and an Application to Regression Discontinuity Designs

Nikolaos Ignatiadis, Final Year Ph.D. Candidate
Stanford University

Location: Public Affairs Building 1234

Abstract:

In an empirical Bayes analysis, we use data from repeated sampling to imitate inferences made by an oracle Bayesian with extensive knowledge of the data-generating distribution. Existing results provide a comprehensive characterization of when and why empirical Bayes point estimates accurately recover oracle Bayes behavior. In this work, we construct flexible and practical nonparametric confidence intervals that provide asymptotic frequentist coverage of empirical Bayes estimands, such as the posterior mean and the local false sign rate. From a methodological perspective we build upon results on affine minimax estimation, and our coverage statements hold even when estimands are only partially identified or when empirical Bayes point estimates converge very slowly. We then demonstrate how the empirical Bayes model, along with a natural exogeneity assumption, also enables estimation and inference of causal effects in the regression discontinuity design, in which treatment is determined by whether an observed running variable crosses a pre-specified threshold. Our inference is driven solely by noise-induced randomization in the running variable of the regression discontinuity design.
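A minimal sketch of the point-estimation side in the normal-normal model (the talk is about putting honest confidence intervals around exactly this kind of estimand; all numbers below are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
theta = rng.normal(0.0, 2.0, size=n)    # latent effects, never observed
x = theta + rng.standard_normal(n)      # x_i ~ N(theta_i, 1)

# Oracle Bayes shrinks x toward the prior mean; empirical Bayes estimates
# the prior variance from the data via Var(x) = tau^2 + 1.
tau2 = max(x.var() - 1.0, 0.0)
eb = x.mean() + tau2 / (tau2 + 1.0) * (x - x.mean())

print('MSE of x itself:   ', np.mean((x - theta) ** 2))    # about 1
print('MSE of EB estimate:', np.mean((eb - theta) ** 2))   # noticeably smaller
```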

Bio:

Nikolaos Ignatiadis is a final-year Ph.D. student in the Department of Statistics at Stanford University, advised by Prof. Stefan Wager. His research interests include empirical Bayes methods, causal inference, multiple testing, and statistical analysis in the presence of contextual side information. Before coming to Stanford, Nikolaos received degrees in Mathematics (B.Sc.), Molecular Biotechnology (B.Sc.), and Scientific Computing (M.Sc.) at the University of Heidelberg in Germany, where he worked with Dr. Wolfgang Huber at the European Molecular Biology Laboratory.

Thursday, 01/20/2022, Time: 11:00am – 12:00pm PST
Learning and Using Causality: A Further Step Towards Machine Intelligence

Biwei Huang, Final Year Ph.D. Candidate
Carnegie Mellon University

Location: Public Affairs Building 1234

Abstract:

Understanding causal relationships is a fundamental problem in scientific research. Recently, causal analysis has also attracted much interest in statistics and computer science. One focus of this talk is causal discovery: it aims to identify causal structure and quantitative models of a large set of variables from observational (non-experimental) data, serving as a practical alternative to interventions and randomized experiments. Specifically, I will introduce recent methodological developments in causal discovery for complex environments with distribution shifts and unobserved confounders, together with successful applications. Besides learning causality, another problem of interest is how causality can help understand and advance machine learning and artificial intelligence. I will show what and how we can benefit from causal understanding to facilitate efficient, effective, and interpretable generalization in transfer-learning tasks.
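A sketch of the elementary building block behind constraint-based causal discovery: a Gaussian conditional-independence test via partial correlation (the talk's methods go well beyond this, handling distribution shifts and hidden confounders).

```python
import numpy as np
from scipy import stats

def ci_test(x, y, Z):
    """Fisher-z test of x independent of y given the columns of Z."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # residualize x
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]   # residualize y
    r = np.corrcoef(rx, ry)[0, 1]                         # partial correlation
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(len(x) - Z.shape[1] - 3)
    return 2 * stats.norm.sf(abs(z))                      # two-sided p-value

rng = np.random.default_rng(0)
n = 1000
z = rng.standard_normal(n)
x = z + 0.5 * rng.standard_normal(n)    # x <- z -> y (common cause)
y = z + 0.5 * rng.standard_normal(n)
print('x vs y, no conditioning:', ci_test(x, y, np.empty((n, 0))))  # tiny p
print('x vs y given z:         ', ci_test(x, y, z[:, None]))        # large p
```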

Bio:

Biwei Huang is a final-year Ph.D. candidate at Carnegie Mellon University. Her research interests are mainly in three areas: (1) automated causal discovery in complex environments with theoretical guarantees, (2) advancing machine learning from the causal perspective, and (3) using or adapting causal discovery approaches to solve scientific problems. Her research contributions have been published in JMLR, ICML, NeurIPS, KDD, AAAI, IJCAI, and UAI. She led a NeurIPS'20 workshop on causal discovery and causality-inspired machine learning and co-organized the first Conference on Causal Learning and Reasoning (CLeaR 2022). She was named a Rising Star of the Trustworthy ML Initiative and is a recipient of the CMU Presidential Fellowship (2017) and the Apple Scholars in AI/ML Ph.D. Fellowship (2021).

Tuesday, 01/18/2022, Time: 11:00am – 12:00pm PST
Safe Policy Learning through Extrapolation: Application to Pre-trial Risk Assessment

Eli Ben-Michael, Postdoctoral Fellow
Harvard University

Location: Public Affairs Building 1234

Abstract:

Algorithmic recommendations and decisions have become ubiquitous in today’s society. Many of these and other data-driven policies, especially in the realm of public policy, are based on known, deterministic rules to ensure their transparency and interpretability. For example, algorithmic pre-trial risk assessments, which serve as our motivating application, provide relatively simple, deterministic classification scores and recommendations to help judges make release decisions. How can we use the data based on existing deterministic policies to learn new and better policies? Unfortunately, prior methods for policy learning are not applicable because they require existing policies to be stochastic rather than deterministic. We develop a robust optimization approach that partially identifies the expected utility of a policy, and then finds an optimal policy by minimizing the worst-case regret. The resulting policy is conservative but has a statistical safety guarantee, allowing the policy-maker to limit the probability of producing a worse outcome than the existing policy. We extend this approach to common and important settings where humans make decisions with the aid of algorithmic recommendations. Lastly, we apply the proposed methodology to a unique field experiment on pre-trial risk assessment instruments. We derive new classification and recommendation rules that retain the transparency and interpretability of the existing instrument while potentially leading to better overall outcomes at a lower cost.
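A toy version of the minimax-regret step (hypothetical utility bounds, and intervals treated as independent for simplicity; in the paper the partial identification is derived from the data and the existing deterministic rule):

```python
import numpy as np

policies = ['status quo', 'policy A', 'policy B']
lo = np.array([0.00, -0.05, -0.20])   # identified lower utility bounds
hi = np.array([0.00, 0.30, 0.10])     # identified upper utility bounds

# Worst-case regret of policy i: the most any rival could outperform it
# by, over all utilities consistent with the bounds.
wc_regret = np.array([
    max(0.0, max(hi[j] for j in range(len(lo)) if j != i) - lo[i])
    for i in range(len(lo))
])
print(dict(zip(policies, wc_regret.round(2))))
print('minimax-regret choice:', policies[int(wc_regret.argmin())])
```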

Bio:

Eli Ben-Michael is a postdoctoral fellow in the Department of Statistics and the Institute for Quantitative Social Science at Harvard University. Previously, he received his PhD in Statistics from U.C. Berkeley. Driven by collaborations with social scientists and public health and policy researchers, Eli brings together ideas from statistics, optimization, and machine learning to create methods for credible and robust causal inference and data-driven decision making.

Thursday, 01/13/2022, Time: 11:00am – 12:00pm PST
A Simple Measure of Conditional Dependence

Mona Azadkia, Postdoctoral Fellow
ETH Zürich

Location: Zoom (links are sent to those who are subscribed to our seminars@stat.ucla.edu mailing list)

Abstract:

We propose a coefficient of conditional dependence between two random variables Y and Z given a set of other variables X1,…,Xp, based on an i.i.d. sample. The coefficient has a long list of desirable properties, the most important of which is that under absolutely no distributional assumptions, it converges to a limit in [0,1], where the limit is 0 if and only if Y and Z are conditionally independent given X1,…,Xp, and is 1 if and only if Y is equal to a measurable function of Z given X1,…,Xp. Moreover, it has a natural interpretation as a nonlinear generalization of the familiar partial R^2 statistic for measuring conditional dependence by regression. Using this statistic, we devise a new variable selection algorithm, called Feature Ordering by Conditional Independence (FOCI), which is model-free, has no tuning parameters, and is provably consistent under sparsity assumptions. A number of applications to synthetic and real datasets are worked out.
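For intuition, here is the unconditional ancestor of the proposed coefficient, Chatterjee's rank correlation, which already has the hallmark property of tending to 0 under independence and to 1 when one variable is a measurable function of the other (sketch assumes no ties):

```python
import numpy as np

def xi(x, y):
    """Chatterjee's coefficient: sort by x, then measure how wildly the
    ranks of y fluctuate between consecutive points."""
    n = len(x)
    r = np.argsort(np.argsort(y[np.argsort(x)]))    # ranks of y, x-sorted
    return 1 - 3 * np.abs(np.diff(r)).sum() / (n ** 2 - 1)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=5000)
print(xi(x, x ** 2))                      # near 1: y is a function of x
print(xi(x, rng.uniform(size=5000)))      # near 0: independent
```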

Bio:

Mona Azadkia is a postdoctoral fellow at ETH Zürich where she is supervised by Peter Bühlmann. Prior to that, she did her Ph.D. at the Department of Statistics at Stanford University, advised by Sourav Chatterjee. She holds a BSc and MSc in Mathematics from the Sharif University of Technology in Iran. Her research interests are conditional independence testing, causal inference, and non-parametric statistics.

Thursday, 01/06/2022, Time: 11:00am – 12:00pm PST
Space-Filling Designs for Computer Experiments and Their Application to Big Data Research

Chenlu Shi, Assistant Adjunct Professor
Department of Statistics, UCLA

Location: Zoom (links are sent to those who are subscribed to our seminars@stat.ucla.edu mailing list)

Abstract:

Computer models are powerful tools used to study complex systems in almost every field of the natural and social sciences. However, the complexity of a computer model results in high computational costs for such investigations. This issue calls for computer experiments, which aim at building a statistical surrogate model from data generated by running the computer model. Space-filling designs are the most widely accepted designs for computer experiments. The first part of this talk will give a broad introduction to space-filling designs. Thanks to their guaranteed space-filling properties in low-dimensional projections of the input space, a family of space-filling designs called general strong orthogonal arrays is particularly appealing. We will present some theoretical results on general strong orthogonal arrays in the second part of the talk. In the third part, the application of space-filling designs to big data research will be discussed. With limited computing resources, numerous challenges arise in carrying out standard statistical analysis on large-scale datasets. One way to overcome these challenges is to conduct the analysis on a subset of the full data. Alternatively, learning techniques have become valuable tools in big data analysis, but their performance relies heavily on user-set hyperparameters. In the talk, we will present a subdata selection method for big data that makes use of the idea of space-filling designs and is thus robust to model misspecification, as well as some preliminary results on hyperparameter optimization for learning techniques.
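A minimal example of the space-filling idea, using a Latin hypercube design via scipy (this illustrates only one-dimensional projection stratification, not the stronger low-dimensional guarantees of strong orthogonal arrays):

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)
design = sampler.random(n=8)               # 8 runs in [0, 1)^3

# Latin hypercube property: projected onto any single factor, the 8 runs
# occupy the 8 equal-width bins exactly once.
print(np.sort(np.floor(design * 8).astype(int), axis=0))
```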

Bio:

Chenlu Shi is an Assistant Adjunct Professor at the Department of Statistics at UCLA. She received her Ph.D. in Statistics from Simon Fraser University, Canada in 2019. Her research interests focus on experimental designs, computer experiments (design and analysis), big data reduction, and hyperparameter optimization for learning techniques.

Thursday, 12/02/2021, Time: 11:00am – 12:00pm PST
Exponential-family embedding for cell developmental trajectories

Kevin Lin, Postdoctoral Researcher
Department of Statistics and Data Science, University of Pennsylvania

Abstract:

Scientists often embed cells into a lower-dimensional space when studying single-cell RNA-seq data for improved downstream analyses such as developmental trajectory analyses, but the statistical properties of such nonlinear embedding methods are often not well understood. In this article, we develop the exponential-family SVD (eSVD), a nonlinear embedding method for both cells and genes jointly with respect to a random dot product model using exponential-family distributions. Our estimator uses alternating minimization, which enables us to have a computationally efficient method, prove the identifiability conditions and consistency of our method, and provide statistically principled procedures to tune our method. All these qualities help advance the single-cell embedding literature, and we provide extensive simulations to demonstrate that the eSVD is competitive compared to other embedding methods. We apply the eSVD via Gaussian distributions where the standard deviations are proportional to the means to analyze a single-cell dataset of oligodendrocytes in mouse brains. Using the eSVD estimated embedding, we then investigate the cell developmental trajectories of the oligodendrocytes. While previous results are not able to distinguish the trajectories among the mature oligodendrocyte cell types, our diagnostics and results demonstrate there are two major developmental trajectories that diverge at mature oligodendrocytes.
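A sketch of the alternating-minimization idea in its simplest, Gaussian special case, where each half-step is ordinary least squares (in the eSVD the Gaussian likelihood is replaced by other exponential-family likelihoods, making each half-step a generalized linear model fit):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 40, 3
A = (rng.standard_normal((n, k)) @ rng.standard_normal((k, p))
     + 0.1 * rng.standard_normal((n, p)))    # noisy rank-k data matrix

U = rng.standard_normal((n, k))
V = rng.standard_normal((p, k))
for _ in range(50):
    # With V fixed, each row of U solves a least-squares problem,
    # and symmetrically for V.
    U = np.linalg.lstsq(V, A.T, rcond=None)[0].T
    V = np.linalg.lstsq(U, A, rcond=None)[0].T

print('relative error:', np.linalg.norm(A - U @ V.T) / np.linalg.norm(A))
```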

Background Reading:

Kevin Z. Lin, Jing Lei and Kathryn Roeder
Exponential-family embedding with application to cell developmental trajectories for single-cell RNA-seq data
Journal of the American Statistical Association (JASA) 116.534 (2021): 457-470

Location:

The speaker will present via Zoom. Participants are encouraged to meet in the seminar room (Boelter Hall 2760) at 11:00am Thursday. There will also be a Zoom session for those who cannot attend in person for COVID-related reasons.

Meeting with the speaker:

There are opportunities to meet with the speaker on Thursday afternoon after the seminar. These meetings will be via zoom. If you wish to meet with the speaker, please contact the seminar organizer with suggested time(s).

Please address any questions to the seminar organizer, Mark S. Handcock (handcock@stat.ucla.edu).

Thursday, 11/18/2021, Time: 1:00pm – 2:00pm PST
Random Subspace Ensemble

Yang Feng
Department of Biostatistics, New York University

Abstract:

We propose a flexible ensemble framework, the Random Subspace Ensemble (RaSE). In the RaSE algorithm, we aggregate many weak learners, where each weak learner is trained, using a base method, in a subspace optimally selected from a collection of random subspaces. In addition, we show that in a high-dimensional framework, the number of random subspaces needs to be very large to guarantee that a subspace covering the signals is selected. We therefore propose an iterative version of the RaSE algorithm and prove that, under certain conditions, a smaller number of generated random subspaces suffices to find a desirable subspace through iteration. We study the RaSE framework for classification, deriving a general upper bound on the misclassification rate, and for screening, establishing the sure screening property. An extension called Super RaSE allows the algorithm to select the optimal pair of base method and subspace during the ensemble process. The RaSE framework is implemented in the R package RaSEn on CRAN.
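A stripped-down sketch of the random-subspace idea with a logistic base learner (training accuracy is used here as a crude stand-in selection criterion; the actual RaSE algorithm uses principled criteria and an iteratively updated subspace distribution, and is implemented in the RaSEn package):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=50, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

votes = np.zeros(len(y_te))
for _ in range(100):                          # 100 weak learners
    best_clf, best_S, best_score = None, None, -np.inf
    for _ in range(20):                       # candidate random subspaces
        S = rng.choice(X.shape[1], size=5, replace=False)
        clf = LogisticRegression(max_iter=500).fit(X_tr[:, S], y_tr)
        score = clf.score(X_tr[:, S], y_tr)   # crude selection criterion
        if score > best_score:
            best_clf, best_S, best_score = clf, S, score
    votes += best_clf.predict(X_te[:, best_S])

print('majority-vote accuracy:', ((votes > 50) == y_te).mean())
```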

Background Reading

  1. RaSE: Random Subspace Ensemble Classification, Journal of Machine Learning Research, 22, 45-1, by Ye Tian and Yang Feng (2021)
  2. RaSE: A variable screening framework via random subspace ensembles, Journal of the American Statistical Association, (just-accepted), 1-30, by Ye Tian and Yang Feng (2021)
  3. Super RaSE: Super Random Subspace Ensemble Classification, manuscript, by Jianan Zhu and Yang Feng (2021)

Location:

The speaker will present via Zoom. Participants are encouraged to meet in the seminar room (MS 5200) at 1:00pm Thursday. There will also be a Zoom session for those who cannot attend in person for COVID-related reasons.

Please address any questions to the seminar organizer, Mark S. Handcock (handcock@stat.ucla.edu).

Thursday, 10/14/2021, Time: 11:00am – 12:00pm PST
Small Area Estimation in Low- and Middle-Income Countries

Jon Wakefield
Department of Statistics and Biostatistics, University of Washington, Seattle

Abstract:

The under-five mortality rate (U5MR) is a key barometer of the health of a nation. Unfortunately, many people living in low- and middle-income countries are not covered by civil registration systems. This makes estimation of the U5MR, particularly at the subnational level, difficult. In this talk, I will describe models that have been developed to produce the official United Nations (UN) subnational U5MR estimates in 22 countries. Estimation is based on household surveys, which use stratified, two-stage cluster sampling. I will describe a range of area- and unit-level models and describe the rationale for the modeling we carry out. Data sparsity in time and space is a key challenge, and smoothing models are vital. I will discuss the advantages and disadvantages of discrete and continuous spatial models, in the context of estimation at the scale at which health interventions are made. Other issues that will be touched upon include: design-based versus model-based inference; the inclusion of so-called indirect (summary birth history) data; reproducibility through software availability; benchmarking; how to deal with incomplete geographical data; and working with the UN to produce estimates.
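The simplest instance of the area-level smoothing the talk builds on is precision-weighted shrinkage of noisy direct estimates toward an overall level (hypothetical numbers, variance components treated as known; the production models are far richer, with space-time smoothing and survey-design adjustments):

```python
import numpy as np

# Direct survey estimates of U5MR (deaths per 1000 live births) for four
# areas, with their sampling variances; data-sparse areas are very noisy.
direct = np.array([45.0, 80.0, 60.0, 120.0])
v = np.array([400.0, 900.0, 150.0, 2500.0])
tau2 = 300.0                         # between-area variance (assumed known)

overall = np.average(direct, weights=1.0 / (tau2 + v))
w = tau2 / (tau2 + v)                # how much to trust the direct estimate
smoothed = w * direct + (1 - w) * overall
print('overall level:', round(overall, 1))
print('smoothed estimates:', smoothed.round(1))
```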

Background Reading

  1. Small Area Estimation for Disease Prevalence Mapping by Jon Wakefield, Taylor Okonek and Jon Pedersen
  2. Estimating under five mortality in space and time in a developing world context by Jon Wakefield, Geir-Arne Fuglstad, Andrea Riebler, Jessica Godwin, Katie Wilson and Samuel J Clark

Location:

The speaker will present via Zoom. Participants are encouraged to meet in the seminar room (Boelter Hall 2760) at 11:00am Thursday. There will also be a Zoom session for those who cannot attend in person for COVID-related reasons.

Please address any questions to the seminar organizer, Mark S. Handcock (handcock@stat.ucla.edu).

Thursday, 10/7/2021, Time: 11:00am – 12:00pm PST
Statistical Inference with Non-probability Survey Samples

Changbao Wu, Professor
Statistics and Actuarial Science, University of Waterloo

Abstract:

We provide an overview of recent developments in statistical methodologies for analyzing non-probability survey samples. Inferential frameworks, critical assumptions, and the required supplementary population information are discussed. Three general approaches to inference, namely inverse probability weighting, mass imputation, and doubly robust estimation, are presented. Practical issues in assessing the feasibility of the assumptions and in dealing with incomplete sampling frames are also discussed.
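A compact simulation of the first and third approaches in a missing-data formulation (a sketch assuming covariates are available for the full population, which is a simplification of the supplementary-information setups discussed in the talk):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
N = 20_000
x = rng.standard_normal(N)
y = 2.0 + x + 0.5 * rng.standard_normal(N)     # population mean is 2
pi = 1.0 / (1.0 + np.exp(-(x - 1.0)))          # selection favors large x
R = rng.uniform(size=N) < pi                   # non-probability sample

print('naive sample mean (biased):', y[R].mean())

# Inverse probability weighting with an estimated participation model.
ps = LogisticRegression().fit(x[:, None], R).predict_proba(x[:, None])[:, 1]
ipw = np.sum(y[R] / ps[R]) / np.sum(1.0 / ps[R])

# Doubly robust: outcome regression plus a weighted residual correction.
m = LinearRegression().fit(x[R][:, None], y[R]).predict(x[:, None])
dr = m.mean() + np.sum((y[R] - m[R]) / ps[R]) / N
print('IPW:', ipw, 'DR:', dr, '(truth: 2.0)')
```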

Background Reading

  1. Doubly Robust Inference With Nonprobability Survey Samples by Yilin Chen, Pengfei Li and Changbao Wu
  2. Combining non-probability and probability survey samples through mass imputation by Jae Kwang Kim, Seho Park, Yilin Chen and Changbao Wu