Seminars

Other UCLA departments frequently hold seminars related to Statistics that are likely of interest to our members. Here are links to the UCLA Biostatistics seminars and the UCLA Biomath seminars:
https://www.biostat.ucla.edu/2018-seminars
http://www.biomath.ucla.edu/seminars/

Join the UCLA Statistics seminars mailing list by sending an email to sympa@sympa.cts.ucla.edu with “subscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. This must be done from the address that is to be subscribed. After doing so, please respond to the email that you receive; an automated reply will then confirm that you have been added.

You may be receiving our seminar emails because you are subscribed to our seminars mailing list or to one of our other mailing lists; the subject line of a seminar email indicates which. To unsubscribe from the seminars mailing list, send an email to sympa@sympa.cts.ucla.edu with “unsubscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. This must be done from the address that is subscribed. After sending that email, please follow the directions in the response that you receive. If you receive our seminar emails through a subscription to one of our other mailing lists, replace “stat_seminars” in the subject line with the name of that list.
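For those who prefer to script the subscribe and unsubscribe commands rather than send them from a mail client, the short sketch below sends the same blank-body messages described above. It is only an illustration: the list server address and subject lines come from these instructions, while the outgoing SMTP host and sender address are placeholders that you would replace with your own.

# Minimal sketch of the Sympa subscribe/unsubscribe commands described above.
# The SMTP host and sender address below are placeholders (assumptions); the
# sender address must be the address being subscribed or unsubscribed.
import smtplib
from email.message import EmailMessage

def send_sympa_command(command, from_addr="you@ucla.edu", smtp_host="smtp.example.edu"):
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = "sympa@sympa.cts.ucla.edu"
    msg["Subject"] = command        # e.g. "subscribe stat_seminars" or "unsubscribe stat_seminars"
    msg.set_content("")             # the message body is left blank
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)

# Example usage:
# send_sympa_command("subscribe stat_seminars")
# send_sympa_command("unsubscribe stat_seminars")

Remember to reply to the confirmation email that the list server sends back, as described above.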

Tuesday, 04/23/2019, Time: 2:00 PM
Statistics Weekly Seminar
Towards Understanding Overparameterized Deep Neural Networks: From Optimization To Generalization

Royce 156
Quanquan Gu, Assistant Professor
UCLA Department of Computer Science

Deep learning has achieved tremendous successes in many applications. However, why deep learning is so powerful is still not well understood. One of the mysteries is that deep neural networks used in practice are often heavily over-parameterized, so much so that they can fit random labels assigned to the input data, yet they still achieve very small test error when trained with real labels. To understand this phenomenon, in this talk I will first show that, with over-parameterization and a proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs with the ReLU activation function. I will then show that, under certain assumptions on the data distribution, gradient descent with a proper random initialization can train a sufficiently over-parameterized DNN to achieve arbitrarily small test error. This leads to an algorithm-dependent generalization error bound for deep learning. I will conclude by discussing implications, challenges, and future work along this line of research.

Quanquan Gu is an Assistant Professor of Computer Science at UCLA. His current research is in the area of artificial intelligence and machine learning, with a focus on developing and analyzing nonconvex optimization algorithms for machine learning to understand large-scale, dynamic, complex, and heterogeneous data, and on building the theoretical foundations of deep learning. He received his Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2014. He is a recipient of the Yahoo! Academic Career Enhancement Award (2015), the NSF CAREER Award (2017), the Adobe Data Science Research Award and the Salesforce Deep Learning Research Award (2018), and a Simons-Berkeley Research Fellowship (2019).

Tuesday, 04/23/2019, Time: 4:00 PM – 5:30 PM
2019 De Leeuw Seminar

California NanoSystems Institute (CNSI) Auditorium
Roger Peng
Department of Biostatistics, Johns Hopkins University

Please RSVP here. A flyer for the seminar is available here.

The data revolution has led to an increased interest in the practice of data analysis, which most would agree is a fundamental aspect of a broader definition of data science. But how well can we characterize data analysis and communicate its fundamental principles? Previous work has largely focused on the “forward mechanism” of data analysis by trying to model and understand the cognitive processes that govern data analyses. While developing such an understanding has value, it largely focuses on unobserved phenomena. An alternate approach characterizes data analyses based on their observed outputs and develops principles or criteria for comparing one to another. Furthermore, these principles can be used to formalize a definition of a successful analysis. In general, the theoretical basis for data analysis leaves much to be desired, and in this talk I will attempt to sketch a foundation upon which we can hopefully make progress.

Roger D. Peng is a Professor of Biostatistics at the Johns Hopkins Bloomberg School of Public Health where his research focuses on the development of statistical methods for addressing environmental health problems. He is also a co-founder of the Johns Hopkins Data Science Specialization, the Simply Statistics blog, the Not So Standard Deviations podcast, and The Effort Report podcast. He is a Fellow of the American Statistical Association and is the recipient of the 2016 Mortimer Spiegelman Award from the American Public Health Association.