Upcoming Seminars

How to Subscribe to the UCLA Statistics Seminars Mailing List

Join the UCLA Statistics seminars mailing list by sending an email to sympa@sympa.it.ucla.edu with “subscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. The request must be sent from the address you want subscribed. You will then receive a confirmation request; reply to it, and an automated message will confirm that you have been added.

How to Unsubscribe from the UCLA Statistics Seminars Mailing List

You may be receiving our seminar emails because you are directly subscribed to the seminars mailing list, or because you are one of our graduate students, undergraduate students, faculty, etc. and belong to a different mailing list that also receives the seminar emails. If you are directly subscribed, you may unsubscribe by sending an email to sympa@sympa.it.ucla.edu with “unsubscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. The request must be sent from the subscribed address. After sending it, follow the directions in the email response that you receive.
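Both requests follow the same pattern: a message to sympa@sympa.it.ucla.edu with the command in the subject line and an empty body, sent from the affected address. For those who prefer to script it, here is a minimal Python sketch using the standard library's smtplib; the SMTP host and sender address are placeholders you would replace with your own, and sending the message from your regular mail client works just as well.

import smtplib
from email.message import EmailMessage

def send_sympa_command(command: str, from_addr: str, smtp_host: str) -> None:
    """Send a Sympa list-management command, e.g. 'subscribe stat_seminars'.

    Per the instructions above, the command goes in the subject line
    and the message body is left blank.
    """
    msg = EmailMessage()
    msg["From"] = from_addr            # must be the address being (un)subscribed
    msg["To"] = "sympa@sympa.it.ucla.edu"
    msg["Subject"] = command           # e.g. "subscribe stat_seminars"
    # Body intentionally left empty.
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Example usage (hypothetical host and address):
# send_sympa_command("subscribe stat_seminars", "you@ucla.edu", "smtp.ucla.edu")
# send_sympa_command("unsubscribe stat_seminars", "you@ucla.edu", "smtp.ucla.edu")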

Viewing our Seminars Remotely

When viewing one of our live seminars remotely, it is best to set Zoom to “Side-by-side: Speaker View”. Details on enabling this view are available in Zoom's support documentation.

Thursday, 02/22/2024, Time: 3:30pm-4:45pm, Towards Fast Mixing MCMC Methods for Structure Learning

Quan Zhou, Assistant Professor
Department of Statistics, Texas A&M University

Abstract:

This talk focuses on Markov chain Monte Carlo (MCMC) methods for structure learning of high-dimensional directed acyclic graph (DAG) models, a problem known to be very challenging because of the enormous search space and the existence of Markov equivalent DAGs. In the first part of the talk, we consider a random walk Metropolis-Hastings algorithm on the space of Markov equivalence classes and show that it satisfies a rapid mixing guarantee under certain high-dimensional assumptions; in other words, the complexity of Bayesian learning of sparse equivalence classes grows only polynomially in the sample size n and the dimension p. In the second part of the talk, we propose an empirical Bayes formulation of the structure learning problem, where the prior assumes that all node variables have the same error variance, an assumption known to ensure the identifiability of the underlying DAG. Strong selection consistency for our model is proved under an assumption weaker than equal error variance. To evaluate the posterior distribution, we devise an order-based MCMC sampler and investigate its mixing behavior theoretically. Numerical studies reveal that, interestingly, imposing the equal variance assumption tends to facilitate the mixing of MCMC samplers and improve the posterior inference even when the model is misspecified.
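For intuition about the kind of sampler the abstract describes, below is a toy single-edge random-walk Metropolis-Hastings sketch over DAGs using a simplified Gaussian BIC-style score. This is a generic textbook construction for illustration only; it is not the speaker's equivalence-class or order-based sampler, and the score, penalty, and proposal are assumptions made for the sake of a runnable example.

import numpy as np

rng = np.random.default_rng(0)

def is_acyclic(adj):
    """Kahn-style check: repeatedly peel off nodes with no incoming edges."""
    remaining = list(range(adj.shape[0]))
    while remaining:
        sources = [i for i in remaining if not adj[remaining, i].any()]
        if not sources:
            return False                    # a cycle remains
        remaining = [i for i in remaining if i not in sources]
    return True

def log_score(X, adj, penalty):
    """Sum of per-node Gaussian log-likelihoods minus a per-edge penalty (BIC-like)."""
    n, p = X.shape
    total = 0.0
    for j in range(p):
        parents = np.flatnonzero(adj[:, j])
        resid = X[:, j]
        if parents.size:
            beta, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            resid = X[:, j] - X[:, parents] @ beta
        sigma2 = max(resid @ resid / n, 1e-12)
        total += -0.5 * n * np.log(sigma2) - penalty * parents.size
    return total

def mh_dag(X, n_iters=5000, penalty=None):
    """Random-walk MH over DAGs: propose flipping one directed edge per step."""
    n, p = X.shape
    penalty = 0.5 * np.log(n) if penalty is None else penalty
    adj = np.zeros((p, p), dtype=bool)
    current = log_score(X, adj, penalty)
    for _ in range(n_iters):
        i, j = rng.choice(p, size=2, replace=False)  # symmetric proposal: flip edge i -> j
        adj[i, j] = ~adj[i, j]
        if adj[i, j] and not is_acyclic(adj):
            adj[i, j] = False                        # proposal left DAG space: reject
            continue
        proposed = log_score(X, adj, penalty)
        if np.log(rng.uniform()) < proposed - current:
            current = proposed                       # accept the flip
        else:
            adj[i, j] = ~adj[i, j]                   # revert the flip
    return adj

# Example usage on simulated data from a small chain DAG (hypothetical):
# X = rng.standard_normal((200, 4)); X[:, 1] += X[:, 0]; X[:, 2] += X[:, 1]
# print(mh_dag(X).astype(int))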

Biography:

Dr. Quan Zhou is an Assistant Professor of Statistics at Texas A&M University. He received his Ph.D. in statistical genetics from Baylor College of Medicine in 2017 and then spent two years as a postdoctoral research fellow in the Department of Statistics at Rice University. He has worked on statistical methodology for variable selection, graphical models, and randomized controlled trials, and his current research centers on Markov chain Monte Carlo sampling methods and stochastic control problems in data science.