Upcoming Weekly Seminar Series

How to Subscribe to the UCLA Statistics Seminars Mailing List

Join the UCLA Statistics seminars mailing list by sending an email to sympa@sympa.it.ucla.edu with “subscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. Send the request from the address you want subscribed. After sending it, reply to the confirmation request you receive; an automated email will then confirm that you have been added.

How to Unsubscribe from the UCLA Statistics Seminars Mailing List

You may be receiving our seminar emails because you are directly subscribed to our seminars mailing list, or because you are one of our graduate students, undergraduate students, faculty, etc. and are subscribed to a different mailing list that also receives the seminar emails. If you are directly subscribed, you may unsubscribe by sending an email to sympa@sympa.it.ucla.edu with “unsubscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. Send the request from the address that is subscribed, then follow the directions in the email response you receive.

Viewing our Seminars Remotely

When viewing one of our live seminars remotely, we recommend setting your Zoom view to “Side-by-side: Speaker View”. You can see details of how to do this here.

 

Thursday 02/05/26, Time: 2-3:15pm, Denoising Differentially Private Optimizers

Location: Public Affairs Building 2270

Meisam Razaviyayn, Associate Professor
Departments of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California

Abstract:

Differentially private optimization provides a robust framework for safeguarding individual data during the training process of machine learning models. However, the substantial noise injection required (typically added after gradient clipping) often disrupts optimizer dynamics and severely degrades performance in large-scale training. To address this challenge, we introduce a general, optimizer-agnostic framework for denoising privatized gradients. Operating as a modular wrapper, our approach uses noisy gradient observations and provides refined estimates to the optimizer, requiring no internal modifications to standard algorithms such as SGD or Adam, and without losing any privacy.

We ground our method in the Kalman filtering mechanism and optimal denoising based on a Taylor expansion of the objective function. We translate these theoretical insights into practical, memory-efficient filtering strategies (such as low-pass and Kalman filtering) that generate progressively refined gradient estimates. We establish rigorous privacy-utility trade-off guarantees for these mechanisms, ensuring they remain practical for large-scale applications. Extensive experiments across diverse domains, including vision tasks (CIFAR-100, ImageNet-1k) and language fine-tuning (GLUE, E2E, DART), demonstrate that this framework significantly outperforms state-of-the-art DP baselines, effectively mitigating the utility loss caused by privacy-preserving noise.
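To make the idea concrete, the sketch below shows the general pattern the abstract describes: per-example gradients are clipped and averaged, Gaussian noise is added for privacy, and a wrapper filters the noisy stream before it reaches the optimizer. This is a minimal, hypothetical illustration using a simple low-pass (exponential moving average) filter; the names `privatize` and `LowPassDenoiser` are our own, not from the speaker's work, and the actual method uses more sophisticated Kalman-style filters with formal privacy-utility guarantees. Because the filter only post-processes already-privatized gradients, it consumes no additional privacy budget.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(per_example_grads, clip_norm=1.0, noise_mult=1.0):
    """Standard DP-SGD step: clip each per-example gradient to clip_norm,
    average, then add Gaussian noise calibrated to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(per_example_grads)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

class LowPassDenoiser:
    """Hypothetical optimizer-agnostic wrapper: smooths the noisy gradient
    stream with an exponential moving average before handing it to SGD/Adam.
    Post-processing of privatized gradients preserves the privacy guarantee."""
    def __init__(self, beta=0.9):
        self.beta = beta
        self.state = None

    def denoise(self, noisy_grad):
        if self.state is None:
            self.state = noisy_grad.copy()
        else:
            self.state = self.beta * self.state + (1 - self.beta) * noisy_grad
        return self.state  # refined estimate fed to the unmodified optimizer
```

In use, the training loop calls `privatize` on each minibatch and passes the result through `denoise` before the optimizer's usual update; the optimizer itself (SGD, Adam, etc.) is untouched, matching the "modular wrapper" framing in the abstract.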

Bio:

Meisam Razaviyayn (https://sites.usc.edu/razaviyayn) is an associate professor in the departments of Industrial and Systems Engineering, Computer Science, Quantitative and Computational Biology, and Electrical Engineering at the University of Southern California. He also serves as the associate director of the USC-Meta Center for Research and Education in AI and Learning (https://realai.usc.edu) and is a Faculty Visitor at Google Research. Before joining USC, Meisam was a postdoctoral research fellow in the Department of Electrical Engineering at Stanford University. He earned his PhD in Electrical Engineering with a minor in Computer Science from the University of Minnesota, where he also received his M.Sc. in Mathematics. His research and academic efforts have been recognized with numerous awards, including the 2022 NSF CAREER Award, the 2022 Northrop Grumman Excellence in Teaching Award, the 2021 AFOSR Young Investigator Award, and the 2021 3M Nontenured Faculty Award. He received the 2020 ICCM Best Paper Award in Mathematics and the IEEE-DSW Best Paper Award in 2019, along with the Signal Processing Society Young Author Best Paper Award in 2014. Meisam was selected by the National Academy of Engineering to participate in the 2023 Frontiers of Engineering Symposium. Additionally, he was a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization in 2013 and 2016, and a silver medalist in Iran’s National Mathematics Olympiad. His research focuses on the design and analysis of fundamental optimization algorithms relevant to the modern AI era.

Thursday 02/19/26, Time: 2-3:15pm, Two Disciplines, One Mission — A Comparative View on Making Sense of Imperfect Data from Statistical Science to Machine Learning

Location: Public Affairs Building 2270

Grace Y. Yi, Professor
Department of Statistical and Actuarial Sciences & Department of Computer Science, University of Western Ontario

Abstract:

In the data-driven era, data quality plays a pivotal role in ensuring valid statistical inference and robust machine learning performance. Yet, imperfections such as measurement error in predictors and label noise in supervised learning are pervasive across a wide range of domains, including health sciences, epidemiology, economics, and beyond. These imperfections can obscure true patterns, introduce bias, and compromise the reliability of analyses. Such issues have attracted extensive attention from both the statistical and machine learning communities. In this talk, I will offer a brief comparative review of approaches in statistical science and machine learning, highlighting the importance of addressing data quality issues and developing strategies to mitigate their adverse effects on inference and prediction.

Bio:

Grace Y. Yi is a Professor and Tier I Canada Research Chair in Data Science at the University of Western Ontario. She is the author of the monograph “Statistical Analysis with Measurement Error or Misclassification: Strategy, Method, and Application” (2017), co-editor of the “Handbook of Measurement Error Models” (with Aurore Delaigle and Paul Gustafson, 2021), and a coauthor of the monograph “Likelihood and its Extensions” (with Nancy Reid and Cristiano Varin, 2026).

Professor Yi is the 2025 Gold Medalist of the Statistical Society of Canada (SSC). She is a Fellow of the Institute of Mathematical Statistics and the American Statistical Association, and an Elected Member of the International Statistical Institute. She received the Award for Excellence in Graduate Student Mentoring from the University of Western Ontario (2023), and delivered the Myra Samuels Memorial Lecture at Purdue University (2025).

Professor Yi served as co-editor-in-chief of the Electronic Journal of Statistics (2022–2024), editor-in-chief of the Canadian Journal of Statistics (2016–2018), and is currently serving as editor of the methodology section of the New England Journal of Statistics in Data Science. She has served as president of the Statistical Society of Canada (2021–2022) and as chair of the Lifetime Data Science Section of the American Statistical Association (2023). In 2012 she founded the first chapter of the International Chinese Statistical Association (ICSA) – the Canada Chapter.