Other UCLA departments frequently hold seminars related to statistics that are likely to be of interest to our members. Here are links to the UCLA Biostatistics seminars and the UCLA Biomath seminars.

Join the UCLA Statistics seminars mailing list by clicking here.

You may be receiving our seminar emails because you are subscribed to our seminars mailing list (or one of our other mailing lists). You can determine which is the case by looking at the subject line of a seminar email. To unsubscribe from the seminar mailing list, send an email to with “unsubscribe stat_seminars” (without quotation marks) in the subject field and the message body blank. This must be done from the address that is subscribed. After sending that email, please follow the directions in the email response that you receive.

Tuesday, 05/02/2017, 2:00 PM – 3:00 PM
Topic: Robust inference in high-dimensional models – going beyond sparsity principles

Location: Kinsey Pavilion 1240B
Jelena Bradic, Assistant Professor of Statistics
Department of Mathematics, University of California, San Diego

In high-dimensional linear models, the sparsity assumption is typically made: most of the parameters are assumed to be equal to zero. Under this assumption, estimation and, more recently, inference have been well studied. In practice, however, the sparsity assumption is not checkable and, more importantly, is often violated, with a large number of covariates expected to be associated with the response, indicating that possibly all, rather than just a few, parameters are non-zero. A natural example is genome-wide gene expression profiling, where all genes are believed to affect a common disease marker. We show that existing inferential methods are sensitive to the sparsity assumption and may, in turn, result in a severe lack of control of Type I error. In this article, we propose a new inferential method, named CorrT, which is robust and adaptive to the sparsity assumption. CorrT is shown to have Type I error approaching the nominal level and Type II error approaching zero, regardless of how sparse or dense the model is. In fact, CorrT is also shown to be optimal whenever sparsity holds. Numerical and real data experiments show favorable performance of the CorrT test compared to state-of-the-art methods.
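A small illustration of why the sparsity assumption cannot be verified from data alone (this sketch is my own, not part of the talk): when the number of covariates p exceeds the sample size n, the design matrix has a nontrivial null space, so a sparse coefficient vector and a fully dense one can produce identical fitted values.

```python
import numpy as np

# Hedged illustration: with p > n, a sparse beta and a dense beta can be
# observationally equivalent, since adding any null-space vector of X
# leaves X @ beta unchanged.
rng = np.random.default_rng(0)
n, p = 5, 10
X = rng.standard_normal((n, p))

# Sparse truth: only the first coordinate is nonzero.
beta_sparse = np.zeros(p)
beta_sparse[0] = 1.0

# Rows n..p-1 of Vt span the null space of X (X has rank n almost surely).
_, _, Vt = np.linalg.svd(X)
null_basis = Vt[n:]
beta_dense = beta_sparse + null_basis.sum(axis=0)

# Identical fitted values, yet beta_dense is (generically) fully dense.
assert np.allclose(X @ beta_sparse, X @ beta_dense)
print(np.count_nonzero(beta_sparse), np.count_nonzero(beta_dense))
```

The same y is therefore consistent with both a sparse and a dense model, which is one concrete sense in which sparsity is an untestable modeling assumption rather than a property the data can confirm.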

Thursday, 05/04/2017, 2:00 PM – 3:30 PM
de Leeuw Seminar: Festschrift Reloaded – The Lion Strikes Back

Location: 314 Royce Hall

Patrick Mair, Senior Lecturer in Statistics
Department of Psychology, Harvard

Katharine Mullen, Adjunct Assistant Professor
UCLA Department of Statistics

In September 2016 the Journal of Statistical Software (JSS) published a Festschrift for Jan de Leeuw, founding chair of the Department of Statistics at UCLA and founding editor of JSS. The Festschrift commemorated Jan’s retirement as well as the 20-year anniversary of JSS.  Six contributions surveyed Jan’s methodological contributions on topics such as multiway analysis, Gifi, multidimensional scaling, and other, somewhat more exotic scaling approaches. One contribution traced the development of R and other statistical software in the pages of JSS. The final paper by Don Ylvisaker looked back at the early days of the Department of Statistics at UCLA.  In this talk, the editors of the Festschrift reflect on some of the highlights presented in these contributions, discuss Jan’s role in these developments, and outline some newer research topics Jan has been working on over the last few months.

More information is available here.

Tuesday, 05/09/2017, 2:00 PM – 3:00 PM
Topic: Neyman-Pearson (NP) classification algorithms and NP receiver operating characteristic (NP-ROC)

Location: Kinsey Pavilion 1240B
Xin Tong, Assistant Professor
Department of Data Sciences and Operations, Marshall School of Business, University of Southern California

In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (i.e., the conditional probability of misclassifying a ‘normal’, or class 0, observation as ‘abnormal’, or class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (i.e., the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Although the NP paradigm has a century-long history in hypothesis testing, it has not been well recognized and implemented in statistical classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective, because the resulting classifiers are still likely to have type I errors much larger than α. As a result, the NP paradigm has not been properly implemented for many classification scenarios in practice. In this work, we develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, including popular methods such as logistic regression, support vector machines, and random forests. Powered by this umbrella algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands, motivated by the popular receiver operating characteristic (ROC) curves. NP-ROC bands help choose α in a data-adaptive way, compare different NP classifiers, and detect possible overfitting. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data case studies.
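The key idea above, choosing a classification threshold so that the true (not just empirical) type I error stays below α with high probability, can be sketched with an order-statistic rule and a binomial tail bound. This is a hedged sketch in the spirit of the umbrella algorithm, not the authors' implementation (the authoritative code is the R package nproc); the function name and the δ parameter are my own.

```python
import numpy as np
from math import comb

def np_threshold(class0_scores, alpha=0.05, delta=0.05):
    """Pick a score threshold (predict class 1 when score > threshold) as an
    order statistic of held-out class-0 scores, so that the probability the
    classifier's true type I error exceeds alpha is at most delta."""
    s = np.sort(np.asarray(class0_scores, dtype=float))
    n = len(s)
    for k in range(1, n + 1):  # k-th smallest order statistic
        # P(type I error of threshold s_(k) > alpha)
        #   <= P(Binomial(n, 1 - alpha) >= k)
        violation = sum(
            comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
            for j in range(k, n + 1)
        )
        if violation <= delta:
            return s[k - 1]
    return None  # n is too small to guarantee control at level delta

# The naive rule (the empirical 95th percentile, here the 95th of 100 sorted
# scores) exceeds its target type I error roughly half the time; the bound
# above forces a more conservative, higher order statistic instead.
scores = np.arange(1, 101)
print(np_threshold(scores, alpha=0.05, delta=0.05))  # -> 99.0
```

With 100 held-out class-0 scores, the rule returns the 99th order statistic rather than the 95th, which is exactly the gap between controlling the empirical and the true type I error that the abstract describes.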