Upcoming Weekly Seminar Series

How to Subscribe to the UCLA Statistics Seminars Mailing List

Join the UCLA Statistics seminars mailing list by sending an email to sympa@sympa.it.ucla.edu with “subscribe stat_seminars” (without quotation marks) in the subject field and the message body left blank. The email must be sent from the address you wish to subscribe. Afterwards, reply to the confirmation request you receive; an automated email will then confirm that you have been added.

How to Unsubscribe from the UCLA Statistics Seminars Mailing List

You may be receiving our seminar emails because you are directly subscribed to our seminars mailing list, or because you are one of our graduate students, undergraduate students, faculty, etc. and are subscribed to a different mailing list that also receives the seminar emails. If you are directly subscribed, you may unsubscribe from the seminar mailing list by sending an email to sympa@sympa.it.ucla.edu with “unsubscribe stat_seminars” (without quotation marks) in the subject field and the message body left blank. The email must be sent from the subscribed address. After sending it, follow the directions in the reply you receive.
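For those who prefer to script it, the subscribe and unsubscribe steps above can be composed programmatically. This is a minimal sketch, not an official tool: the sender address and the SMTP relay are placeholders you must replace with your own, and the actual send is left commented out.

```python
# Sketch: build the Sympa command email described above -- the command goes
# in the subject line and the body is left blank.
from email.message import EmailMessage
import smtplib


def sympa_command(action, your_address):
    """Build the blank-body command email for the stat_seminars list."""
    assert action in ("subscribe", "unsubscribe")
    msg = EmailMessage()
    msg["From"] = your_address        # must be the address being (un)subscribed
    msg["To"] = "sympa@sympa.it.ucla.edu"
    msg["Subject"] = f"{action} stat_seminars"
    msg.set_content("")               # message body left blank, per the instructions
    return msg


msg = sympa_command("subscribe", "you@example.edu")
# with smtplib.SMTP("smtp.example.edu") as s:  # uncomment to actually send
#     s.send_message(msg)
```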

Viewing our Seminars Remotely

When viewing one of our live seminars remotely, it is optimal to have your Zoom settings such that you are using “Side-by-side: Speaker View”. You can see details of how to do this here.

Tuesday 01/28/25, Time: 11:00am – 12:15pm, Policy Evaluation in Dynamic Experiments

Location: Mathematical Sciences 8359

Yuchen Hu, Ph.D. Student
Management Science and Engineering, Stanford University

Abstract:

Experiments where treatment assignment varies over time, such as micro-randomized trials and switchback experiments, are essential for guiding dynamic decisions. These experiments often exhibit nonstationarity due to factors like hidden states or unstable environments, posing substantial challenges for accurate policy evaluation. In this talk, I will discuss how Partially Observed Markov Decision Processes (POMDPs) with explicit mixing assumptions provide a natural framework for modeling dynamic experiments and can guide both the design and analysis of these experiments. In the first part of the talk, I will discuss properties of switchback experiments in finite-population, nonstationary dynamic systems. We find that, in this setting, standard switchback designs suffer considerably from carryover bias, but judicious use of burn-in periods can substantially improve the situation and enable errors that decay nearly at the parametric rate. In the second part of the talk, I will discuss policy evaluation in micro-randomized experiments and provide further theoretical grounding on mixing-based policy evaluation methodologies. Under a sequential ignorability assumption, we provide rate-matching upper and lower bounds that sharply characterize the hardness of off-policy evaluation in POMDPs. These findings demonstrate the promise of using stochastic modeling techniques to enhance tools for causal inference. Our formal results are mirrored in empirical evaluations using ride-sharing and mobile health simulators.
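To make the switchback-with-burn-in design concrete: treatment alternates in fixed-length blocks, and the first few periods after each switch are discarded to limit carryover bias. The sketch below is an illustration of that general idea only, not the speaker's method; the block and burn-in lengths are made-up values.

```python
# Illustrative switchback schedule with burn-in periods.
def switchback_schedule(horizon, block, burn_in):
    """Return (assignment, keep): the 0/1 treatment per period, and whether
    that period is kept for estimation (False during each block's burn-in)."""
    assignment, keep = [], []
    for t in range(horizon):
        block_idx, offset = divmod(t, block)
        assignment.append(block_idx % 2)   # alternate blocks: 0, 1, 0, 1, ...
        keep.append(offset >= burn_in)     # drop early periods after each switch
    return assignment, keep


assignment, keep = switchback_schedule(horizon=12, block=4, burn_in=2)
# assignment: [0,0,0,0, 1,1,1,1, 0,0,0,0]
# keep:       [F,F,T,T, F,F,T,T, F,F,T,T]
```

Only the `keep`-flagged periods would enter the downstream estimator; longer burn-ins trade sample size against carryover bias.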

Bio:

Yuchen Hu is a Ph.D. candidate in Management Science and Engineering at Stanford University, under the supervision of Professor Stefan Wager. Her research focuses on causal inference, data-driven decision making, and stochastic processes. She is particularly interested in developing interdisciplinary statistical methodologies that enhance the applicability, robustness, and efficiency of data-driven decisions in complex environments. Hu holds an M.S. in Biostatistics from Harvard University and a B.Sc. in Applied Mathematics from Hong Kong Polytechnic University.

Thursday 01/30/25, Time: 11:00am – 12:15pm, Modern Sampling Paradigms: from Posterior Sampling to Generative AI

Location: Franz Hall 2258A

Yuchen Wu, Postdoctoral Researcher
Department of Statistics and Data Science at the Wharton School, University of Pennsylvania

Abstract:

Sampling from a target distribution is a recurring theme in statistics and generative artificial intelligence (AI). In statistics, posterior sampling offers a flexible inferential framework, enabling uncertainty quantification, probabilistic prediction, as well as the estimation of intractable quantities. In generative AI, sampling aims to generate unseen instances that emulate a target population, such as the natural distributions of texts, images, and molecules. In this talk, I will present my work on designing provably efficient sampling algorithms, addressing challenges in both statistics and generative AI. In the first part, I will focus on posterior sampling for Bayes sparse regression. In general, such posteriors are high-dimensional and contain many modes, making them challenging to sample from. To address this, we develop a novel sampling algorithm based on decomposing the target posterior into a log-concave mixture of simple distributions, reducing sampling from a complex distribution to sampling from a tractable log-concave one. We establish provable guarantees for our method in a challenging regime that was previously intractable. In the second part, I will describe a training-free acceleration method for diffusion models, which are deep generative models that underpin cutting-edge applications such as AlphaFold, DALL-E and Sora. Our approach is simple to implement, wraps around any pre-trained diffusion model, and comes with a provable convergence rate that strengthens prior theoretical results. We demonstrate the effectiveness of our method on several real-world image generation tasks. Lastly, I will outline my vision for bridging the fields of statistics and generative AI, exploring how insights from one domain can drive progress in the other.

Bio:

Yuchen Wu is a departmental postdoctoral researcher in the Department of Statistics and Data Science at the Wharton School, University of Pennsylvania. She earned her Ph.D. in 2023 from Stanford University, where she was advised by Professor Andrea Montanari. Her research lies broadly at the intersection of statistics and machine learning, featuring generative AI, high-dimensional statistics, Bayesian inference, algorithm design, and data-driven decision making.

Thursday 02/06/25, Time: 11:00am – 12:15pm, A Unified Framework for Efficient Learning at Scale

Location: Franz Hall 2258A

Soufiane Hayou, Postdoctoral Scholar
Simons Institute, UC Berkeley

Abstract:

State-of-the-art performance is usually achieved via a series of modifications to existing neural architectures and their training procedures. A common feature of these networks is their large-scale nature: modern neural networks usually have billions – if not hundreds of billions – of trainable parameters. While empirical evaluations generally support the claim that increasing the scale of neural networks (width, depth, etc.) boosts model performance if done correctly, optimizing the training process across different scales remains a significant challenge, and practitioners tend to follow empirical scaling laws from the literature. In this talk, I will present a unified framework for efficient learning at large scale. The framework allows us to derive efficient learning rules that automatically adjust to model scale, ensuring stability and optimal performance. By analyzing the interplay between network architecture, optimization dynamics, and scale, we demonstrate how these theoretically grounded learning rules can be applied to both pretraining and finetuning. The results offer new insights into the fundamental principles governing neural network scaling and provide practical guidelines for training large-scale models efficiently.
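For a flavor of what a scale-adjusted learning rule looks like, here is one well-known instance: the muP-style heuristic of shrinking hidden-layer learning rates as width grows. This is a generic illustration of the idea, not the speaker's framework, and the base values are arbitrary.

```python
# Illustrative width-aware learning rule: scale the hidden-layer learning
# rate inversely with width, relative to a tuned base configuration.
def scaled_lr(base_lr, base_width, width):
    """Learning rate transferred from a base width to a target width."""
    return base_lr * base_width / width


# Doubling the width halves the per-layer learning rate.
halved = scaled_lr(1e-3, base_width=256, width=512)
```

Rules of this form are what make hyperparameters tuned on a small proxy model transfer to a much wider one without retuning.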

Bio:

Soufiane Hayou is currently a postdoctoral researcher at the Simons Institute, UC Berkeley. Before that, he was a visiting assistant professor of mathematics at the National University of Singapore for three years. He obtained his PhD in statistics and machine learning in 2021 from the University of Oxford, and graduated from Ecole Polytechnique in Paris before joining Oxford. His research focuses on the theory and practice of learning at scale: theoretical analysis of large-scale neural networks with the goal of obtaining principled methods for training and finetuning. Topics include depth scaling (Stable ResNet), hyperparameter transfer (Depth-muP parametrization), and efficient finetuning (LoRA+, a method that improves upon LoRA by setting optimal learning rates for the matrices A and B).

Tuesday 02/11/25, Time: 11:00am – 12:15pm, Causal Fairness Analysis

Location: Mathematical Sciences 8359

Drago Plecko, Postdoctoral Scholar
Department of Computer Science, Columbia University

Abstract:

In this talk, we discuss the foundations of fairness analysis through the lens of causal inference, also paying attention to how questions of fairness compound with the use of artificial intelligence (AI). In particular, the framework of Causal Fairness Analysis is introduced, which distinguishes three fairness tasks: (i) bias detection, (ii) fair prediction, and (iii) fair decision-making. In bias detection, we demonstrate how commonly used statistical measures of disparity cannot distinguish between causally different explanations of the disparity, and we discuss causal tools that bridge this gap. In fair prediction, we discuss how an automated predictor may inherit bias from human-generated labels, and how this can be formally tested and subsequently mitigated. For the task of fair decision-making, we discuss how human or AI decision-makers design policies for treatment allocation, focusing on how much a specific individual would benefit from treatment, counterfactually speaking, when contrasted with an alternative, no-treatment scenario. We discuss how historically disadvantaged groups may differ in their distribution of covariates and, therefore, their benefit from treatment may differ, possibly leading to disparities in resource allocation. The discussion of each task is accompanied by real-world examples, in an attempt to build a catalog of different fairness settings.

We also take a deeper look into applying Causal Fairness Analysis to explain racial and ethnic disparities following admission to an intensive care unit (ICU). Our analysis reveals that minority patients are much more likely to be admitted to the ICU, and that this increase in admission is linked with lack of access to primary care. This leads us to construct the Indigenous Intensive Care Equity (IICE) Radar, a monitoring system for tracking the over-utilization of ICU resources by the Indigenous population of Australia across geographical areas, opening the door for targeted public health interventions aimed at improving health equity.

Related papers:
[1] Plečko, Drago, and Elias Bareinboim. “Causal fairness analysis: a causal toolkit for fair machine learning.” Foundations and Trends® in Machine Learning 17.3 (2024): 304-589.
[2] Plečko, Drago, et al. “An Algorithmic Approach for Causal Health Equity: A Look at Race Differentials in Intensive Care Unit (ICU) Outcomes.” arXiv preprint arXiv:2501.05197 (2025).

Bio:

Drago Plecko is a postdoctoral scholar in the Department of Computer Science at Columbia University, having joined after completing his PhD in Statistics at ETH Zürich. His research focuses on causal inference, and spans several topics in trustworthy data science, including fairness, recourse, and explainability. Drago also has a strong interest in applied problems, particularly in medicine, where he investigated epidemiological questions in intensive care medicine.