Tuesday, 03/03/2020, Time: 11:00am – 12:15pm
High-Dimensional Principal Component Analysis with Heterogeneous Missingness
Rolfe Hall 3126
Prof. Ziwei Zhu, Assistant Professor of Statistics
University of Michigan
A UCLA Statistics and ECE Joint Seminar
Abstract:
In this talk, I will focus on the effect of missing data in Principal Component Analysis (PCA). In simple, homogeneous missingness settings with a noise level of constant order, we show that an existing inverse-probability weighted (IPW) estimator of the leading principal components can (nearly) attain the minimax optimal rate of convergence, and discover a new phase transition phenomenon along the way. However, deeper investigation reveals both that, particularly in more realistic settings where the missingness mechanism is heterogeneous, the empirical performance of the IPW estimator can be unsatisfactory, and moreover that, in the noiseless case, it fails to provide exact recovery of the principal components. Our main contribution, then, is to introduce a new method for high-dimensional PCA, called “primePCA”, that is designed to cope with situations where observations may be missing in a heterogeneous manner. Starting from the IPW estimator, “primePCA” iteratively projects the observed entries of the data matrix onto the column space of our current estimate to impute the missing entries, and then updates our estimate by computing the leading right singular space of the imputed data matrix. It turns out that the interaction between the heterogeneity of missingness and the low-dimensional structure is crucial in determining the feasibility of the problem. We therefore introduce an incoherence condition on the principal components and prove that in the noiseless case, the error of “primePCA” converges to zero at a geometric rate when the signal strength is not too small. An important feature of our theoretical guarantees is that they depend on average, as opposed to worst-case, properties of the missingness mechanism. Our numerical studies on both simulated and real data reveal that “primePCA” exhibits very encouraging performance across a wide range of scenarios.
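To make the iterative refinement concrete, here is a minimal numerical sketch of that impute-then-SVD loop, assuming a noiseless low-rank matrix with entries missing completely at random; the function name, the random initializer standing in for the IPW estimate, and the plain least-squares projection are illustrative choices, not the authors' implementation.

```python
import numpy as np

def prime_pca_iterate(Y, mask, V, n_iter=50):
    """One possible reading of the refinement described above: impute missing
    entries by projecting each observed row onto the current subspace estimate,
    then re-estimate the subspace from the leading right singular vectors."""
    n, d = Y.shape
    k = V.shape[1]
    for _ in range(n_iter):
        Y_hat = np.zeros_like(Y)
        for i in range(n):
            obs = mask[i]                                   # observed coordinates of row i
            coef, *_ = np.linalg.lstsq(V[obs], Y[i, obs], rcond=None)
            Y_hat[i] = V @ coef                             # impute the full row
            Y_hat[i, obs] = Y[i, obs]                       # keep observed entries as-is
        _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
        V = Vt[:k].T                                        # updated subspace estimate
    return V

# toy example: rank-2 signal, roughly 30% of entries missing uniformly at random
rng = np.random.default_rng(0)
n, d, k = 200, 50, 2
U = rng.normal(size=(n, k))
V_true, _ = np.linalg.qr(rng.normal(size=(d, k)))
Y = U @ V_true.T
mask = rng.random((n, d)) > 0.3
V0, _ = np.linalg.qr(rng.normal(size=(d, k)))               # stand-in for the IPW initializer
V_hat = prime_pca_iterate(np.where(mask, Y, 0.0), mask, V0)
```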
Bio:
Ziwei Zhu is currently an assistant professor at the Department of Statistics at the University of Michigan, Ann Arbor. Prior to this, he was a post-doc researcher at the University of Cambridge, hosted by Professor Richard Samworth. He received his Ph.D. in Operations Research and Financial Engineering from Princeton University, advised by Professor Jianqing Fan. His research focuses on distributed statistical inference, robust statistics and low-rank matrix estimation.
Wednesday, 02/25/2020, Time: 11:00am
Space-filling Designs for Computer Experiments and Their Application to Big Data Research
Rolfe 3126
Chenlu Shi, Assistant Adjunct Professor
UCLA Department of Statistics
Abstract:
Computer experiments provide useful tools for investigating complex systems, and they call for space-filling designs, which are a class of designs that allow the use of various modeling methods. He and Tang (2013) introduced and studied a class of space-filling designs, strong orthogonal arrays. To date, an important problem that has not been addressed in the literature is that of design selection for such arrays. In this talk, I will first give a broad introduction to space-filling designs, and then present some results on the selection of strong orthogonal arrays. The second part of my talk will present some preliminary work on the application of space-filling designs to big data research. Nowadays, it is challenging to use current computing resources to analyze super-large datasets. Subsampling-based methods are the common approaches to reducing data sizes, with the leveraging method (Ma and Sun, 2014) being the most popular. Recently, a new approach, the information-based optimal subdata selection (IBOSS) method, was proposed (Wang, Yang and Stufken, 2018), which applies design methodology to the big data problem. However, both the leveraging method and the IBOSS method are model-dependent. Space-filling designs do not suffer from this drawback, as shown in our simulation studies.
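For readers unfamiliar with the leveraging method mentioned above, here is a rough sketch of leverage-score subsampling for linear regression; the sampling scheme and reweighting are a generic textbook version, not the IBOSS or space-filling alternatives discussed in the talk.

```python
import numpy as np

# Subsample rows with probability proportional to their statistical leverage
# and fit a reweighted least squares on the subsample; data and sizes are toy choices.
rng = np.random.default_rng(11)
n, p, r = 100_000, 10, 1_000                     # full size, dimension, subsample size
X = rng.standard_t(df=3, size=(n, p))            # heavy tails make leverage informative
y = X @ rng.normal(size=p) + rng.normal(size=n)

Q, _ = np.linalg.qr(X)
lev = np.sum(Q ** 2, axis=1)                     # leverage scores h_ii
prob = lev / lev.sum()
idx = rng.choice(n, size=r, replace=True, p=prob)
w = 1.0 / np.sqrt(r * prob[idx])                 # reweighting keeps the estimator unbiased
beta_sub, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)
print("max abs difference from the full-data fit:", np.max(np.abs(beta_sub - beta_full)))
```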
Wednesday, 02/12/2020, Time: 4:00pm
The Blessings of Multiple Causes
Mathematical Sciences 6229
Yixin Wang, Ph.D. Student
Columbia University
Abstract:
Causal inference from observational data is a vital problem, but it comes with strong assumptions. Most methods assume that we observe all confounders, variables that affect both the causal variables and the outcome variables. But whether we have observed all confounders is a famously untestable assumption. We describe the deconfounder, a way to do causal inference from observational data allowing for unobserved confounding.
How does the deconfounder work? The deconfounder is designed for problems of multiple causal inferences: scientific studies that involve many causes whose effects are simultaneously of interest. The deconfounder uses the correlation among causes as evidence for unobserved confounders, combining unsupervised machine learning and predictive model checking to perform causal inference. We study the theoretical requirements for the deconfounder to provide unbiased causal estimates, along with its limitations and tradeoffs. We demonstrate the deconfounder on real-world data and simulation studies.
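A stylized sketch of the two-stage idea, with a truncated SVD (probabilistic-PCA-style factor model) of the causes standing in for the unsupervised step and a single synthetic confounder; the actual deconfounder also requires a predictive check of the factor model, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 10                       # n units, m causes
Z = rng.normal(size=(n, 1))          # unobserved confounder
A = Z @ rng.normal(size=(1, m)) + 0.5 * rng.normal(size=(n, m))   # correlated causes
y = A @ np.full(m, 0.3) + 2.0 * Z[:, 0] + rng.normal(size=n)      # outcome

# Stage 1: fit a factor model to the causes and form the substitute confounder
A_c = A - A.mean(axis=0)
U, s, Vt = np.linalg.svd(A_c, full_matrices=False)
Z_hat = U[:, :1] * s[:1]             # one-factor substitute confounder

# Stage 2: estimate causal effects adjusting for the substitute confounder
X_adj = np.column_stack([A, Z_hat, np.ones(n)])
X_naive = np.column_stack([A, np.ones(n)])
beta_adj, *_ = np.linalg.lstsq(X_adj, y, rcond=None)
beta_naive, *_ = np.linalg.lstsq(X_naive, y, rcond=None)
print("naive effect estimates:       ", np.round(beta_naive[:m], 2))
print("deconfounded effect estimates:", np.round(beta_adj[:m], 2))
```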
Bio:
Yixin Wang completed undergraduate studies in mathematics and computer science at the Hong Kong University of Science and Technology. She works in the fields of Bayesian statistics, machine learning, and causal inference. Her research interests lie in the intersection of theory and applications. In 2016, she was also a coach of Theoretical Statistics for the Ph.D. Qualifying Exam at Columbia Statistics.
Tuesday, 02/11/2020, Time: 11:00am
Multi-Resolution Functional ANOVA for Large-Scale, Many-Input Computer Experiments
Rolfe Hall 3126
Chih‐Li Sung, Assistant Professor
Michigan State University
Abstract:
The Gaussian process is a standard tool for building emulators for both deterministic and stochastic computer experiments. However, application of Gaussian process models is greatly limited in practice, particularly for large-scale and many-input computer experiments that have become typical. In this talk, a multi-resolution functional ANOVA model will be introduced as a computationally feasible emulation alternative. More generally, this model can be used for large-scale and many-input non-linear regression problems.
Bio:
Chih-Li Sung received his Ph.D. from the Stewart School of Industrial & Systems Engineering at Georgia Tech in 2018, where he was jointly advised by Profs. C. F. Jeff Wu and Benjamin Haaland. He received a B.S. in applied mathematics and an M.S. in statistics from National Tsing Hua University in 2008 and 2010, respectively. His research interests include computer experiments, uncertainty quantification, machine learning, big data, and applications of statistics in engineering. His awards include the Alice and John Jarvis Ph.D. Student Research Award (Honorable Mention) at ISyE, Georgia Tech, and best student poster awards at Georgia Statistics Day 2017 and the ISBIS 2017 Meeting.
Friday, 02/07/2020, Time: 1:15pm
Statistical and Computational Perspectives on Latent Variable Models
Young Hall 4242
Nhat Ho, Postdoctoral Fellow
UC Berkeley
Abstract:
The growth in scope and complexity of modern data sets presents the field of statistics and data science with numerous inferential and computational challenges, among them how to deal with various forms of heterogeneity. Latent variable models provide a principled approach to modeling heterogeneous collections of data. However, due to the over-parameterization, it has been observed that parameter estimation and latent structures of these models have non-standard statistical and computational behaviors. In this talk, we provide new insights into these behaviors under mixture models, a building block of latent variable models. From the statistical viewpoint, we propose a general framework for studying the convergence rates of parameter estimation in mixture models based on Wasserstein distance. Our study makes explicit the links between model singularities, parameter estimation convergence rates, and the algebraic geometry of the parameter space for mixtures of continuous distributions. From the computational side, we study the non-asymptotic behavior of the EM algorithm under the over-specified settings of mixture models in which the likelihood need not be strongly concave, or, equivalently, the Fisher information matrix might be singular. Focusing on the simple setting of a two-component mixture fit with equal mixture weights to a multivariate Gaussian distribution, we demonstrate that EM updates converge to a fixed point at Euclidean distance O((d/n)^{1/4}) from the true parameter after O((n/d)^{1/2}) steps, where d is the dimension. From the methodological standpoint, we develop computationally efficient optimization-based methods for the multilevel clustering problem based on Wasserstein distance. Experimental results with large-scale real-world datasets demonstrate the flexibility and scalability of our approaches. If time allows, we further discuss a novel post-processing procedure, named the Merge-Truncate-Merge algorithm, to determine the true number of components in a wide class of latent variable models.
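The over-specified EM setting in the abstract can be illustrated with a short simulation: fit an equal-weight symmetric two-component Gaussian location mixture to data that actually come from a single standard Gaussian. The update below is the standard EM iteration for that model; the comparison with (d/n)^{1/4} is only a rough empirical check, not a restatement of the speaker's theorem.

```python
import numpy as np

def em_step(theta, X):
    """One EM update for the equal-weight location mixture
    0.5*N(theta, I) + 0.5*N(-theta, I) with identity covariance."""
    w = np.tanh(X @ theta)            # 2 * responsibility of component "+theta" - 1
    return (w[:, None] * X).mean(axis=0)

# over-specified setting: the data actually come from a single Gaussian N(0, I)
rng = np.random.default_rng(2)
d, n = 5, 10_000
X = rng.normal(size=(n, d))
theta = rng.normal(size=d)
for t in range(200):
    theta = em_step(theta, X)
print("||theta_EM||:", np.linalg.norm(theta))    # empirically comparable to (d/n)^(1/4)
print("(d/n)^(1/4): ", (d / n) ** 0.25)
```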
Bio:
Nhat Ho is currently a postdoctoral fellow in the Electrical Engineering and Computer Science (EECS) Department at UC Berkeley, where he is supervised by Professor Michael I. Jordan and Professor Martin J. Wainwright. Before going to Berkeley, he finished his Ph.D. in 2017 in the Department of Statistics, University of Michigan, Ann Arbor, where he was advised by Professor Long Nguyen and Professor Ya’acov Ritov. His current research focuses on the interplay of four principles of statistics and data science: heterogeneity of data, interpretability of models, stability, and scalability of optimization and sampling algorithms.
Tuesday, 02/04/2020, Time: 11:00am – 12:00pm
Generalization Error of Linearized Neural Networks: Staircase and Double-descent
Rolfe Hall 3126
Song Mei, Ph.D. Student
Stanford University
Abstract:
Deep learning methods operate in regimes that defy the traditional statistical mindset. Despite the non-convexity of empirical risks and the huge complexity of neural network architectures, stochastic gradient algorithms can often find the global minimizer of the training loss and achieve small generalization error on test data. As one possible explanation for the training efficiency of neural networks, tangent kernel theory shows that a multi-layer neural network – in a proper large-width limit – can be well approximated by its linearization. As a consequence, the gradient flow of the empirical risk turns into a linear dynamical system and converges to a global minimizer. Since last year, linearization has become a popular approach to analyzing the training dynamics of neural networks. However, this naturally raises the question of whether the linearization perspective can also explain the observed generalization efficacy. In this talk, I will discuss the generalization error of linearized neural networks, which reveals two interesting phenomena: the staircase decay and the double-descent curve. Through the lens of these phenomena, I will also address the benefits and limitations of the linearization approach.
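As a toy illustration of the linearization idea, one can replace a small two-layer ReLU network by its first-order Taylor expansion around a random initialization and fit the resulting linear model by ridge regression in the tangent features; the width, regularization, and regression target below are arbitrary choices made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
d, m, n = 10, 256, 400                          # input dim, width, sample size

# random two-layer network f(x) = a^T relu(W x) / sqrt(m) at initialization
W0 = rng.normal(size=(m, d))
a0 = rng.choice([-1.0, 1.0], size=m)

def tangent_features(X):
    """Gradient of f(x; W, a) with respect to (W, a) at initialization;
    the linearized network is a linear model in these features."""
    pre = X @ W0.T                              # pre-activations, shape (n, m)
    act = np.maximum(pre, 0.0)                  # relu
    d_a = act / np.sqrt(m)                      # derivative w.r.t. a
    d_W = (a0 * (pre > 0))[:, :, None] * X[:, None, :] / np.sqrt(m)   # w.r.t. W
    return np.concatenate([d_a, d_W.reshape(len(X), -1)], axis=1)

# toy regression target and a ridge fit of the linearized model
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)
Phi = tangent_features(X)
lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
X_test = rng.normal(size=(200, d))
pred = tangent_features(X_test) @ theta
print("test MSE of linearized network:", np.mean((pred - np.sin(X_test[:, 0])) ** 2))
```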
Bio:
Song Mei’s research is motivated by data science and lies at the intersection of statistics, machine learning, information theory, and computer science. He often builds on insights that originated within the statistical physics literature. Currently, he is interested in the theory of deep learning, high-dimensional geometry, approximate Bayesian inference, and applied random matrix theory. His research is partly supported by a Stanford Graduate Fellowship.
Friday, 01/31/2020, Time: 1:00pm – 2:00pm
Fréchet Change Point Detection
Young Hall 4242
Paromita Dubey, Postdoctoral Scholar
University of California, Davis
Abstract:
Change point detection is a popular tool for identifying locations in a data sequence where an abrupt change occurs in the data distribution, and it has been widely studied for Euclidean data. Modern data are very often non-Euclidean, for example distribution-valued data or network data. Change point detection is a challenging problem when the underlying data space is a metric space where one does not have basic algebraic operations like addition of the data points and scalar multiplication. In this talk, I propose a method to infer the presence and location of change points in the distribution of a sequence of independent data taking values in a general metric space. Change points are viewed as locations at which the distribution of the data sequence changes abruptly in terms of either its Fréchet mean or Fréchet variance or both. The proposed method is based on comparisons of Fréchet variances before and after putative change point locations. First, I will establish that under the null hypothesis of no change point, the limit distribution of the proposed scan function is the square of a standardized Brownian bridge. It is well known that such convergence is rather slow in moderate to high dimensions. For more accurate results in finite sample applications, I will provide a theoretically justified bootstrap-based scheme for testing the presence of change points. Next, I will show that when a change point exists, (1) the proposed test is consistent under contiguous alternatives and (2) the estimated location of the change point is consistent. All of the above results hold for a broad class of metric spaces under mild entropy conditions. Examples include the space of univariate probability distributions and the space of graph Laplacians for networks. I will illustrate the efficacy of the proposed approach in empirical studies and in real data applications with sequences of maternal fertility distributions. Finally, I will talk about some future extensions and other related research directions, for instance, when one has samples of dynamic metric space data. This talk is based on joint work with Prof. Hans-Georg Müller.
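Below is a much-simplified sketch of the variance-comparison idea for scalar data with the Euclidean metric; the Fréchet mean and variance are computed by brute force over a candidate grid, and the scan statistic here is only a caricature of the actual test statistic and its bootstrap calibration.

```python
import numpy as np

def frechet_mean_var(sample, metric, candidates):
    """Brute-force Fréchet mean and variance: the candidate minimizing the
    average squared distance to the sample, and that minimal average."""
    sq = np.array([[metric(x, c) ** 2 for c in candidates] for x in sample])
    avg = sq.mean(axis=0)
    j = int(np.argmin(avg))
    return candidates[j], avg[j]

def variance_scan(data, metric, candidates, trim=0.1):
    """Simplified scan: at each split, compare the Fréchet variances of the
    two segments (the actual statistic in the talk is more refined)."""
    n = len(data)
    lo, hi = int(trim * n), int((1 - trim) * n)
    stats = {}
    for t in range(lo, hi):
        _, v1 = frechet_mean_var(data[:t], metric, candidates)
        _, v2 = frechet_mean_var(data[t:], metric, candidates)
        stats[t] = (t / n) * (1 - t / n) * (v1 - v2) ** 2
    return max(stats, key=stats.get), stats

# toy example: scalar data with the Euclidean metric; the variance shifts at t = 60
rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0, 1, 60), rng.normal(0, 3, 60)])
candidates = np.linspace(-4, 4, 81)
metric = lambda a, b: abs(a - b)
t_hat, _ = variance_scan(data, metric, candidates)
print("estimated change point location:", t_hat)
```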
Bio:
Paromita Dubey’s research centers on developing statistical methods for non-Euclidean data, such as distribution- and network-valued data. Dr. Dubey is also interested in functional and longitudinal data analysis, with a focus on studying samples of dynamic object data. Dr. Dubey recently presented work at the Research Section Discussion Meeting of the Royal Statistical Society.
Tuesday, 01/28/2020, Time: 11:00am – 12:00pm
Some Recent Advances in Modeling, Estimation and Inference for Vector Autoregressive Models
Rolfe 3126
George Michailidis, Professor
University of Florida
Abstract:
Vector autoregressive models capture temporal interconnections among evolving entities (variables). They have been extensively used in macroeconomic and financial modeling, and more recently they have found novel applications in functional genomics and neuroscience. In this presentation, I provide a brief overview of recent advances in their modeling and estimation in the high-dimensional setting. Subsequently, I discuss some recent results on statistical inference for the model parameters and briefly touch upon issues of robustness. The results are illustrated on both synthetic and real data.
Bio:
George Michailidis is a Professor of Statistics and Director of the Informatics Institute at the University of Florida. His research interests include Multivariate Analysis and Machine Learning, Computational Statistics, Change-point Estimation, Stochastic Processing Networks, Bioinformatics, Network Tomography, Visual Analytics, Statistical Methodology with Applications to Computer, Communications and Sensor Networks.
Tuesday, 01/21/2020, Time: 11:00am – 12:15pm
Some Statistical Results on Deep Learning: Interpolation, Optimality and Sparsity
Rolfe 3126
Guang Cheng, Professor
Purdue University
Abstract:
This talk addresses three theoretical aspects of deep learning from a statistical perspective: interpolation, optimality and sparsity. The first one attempts to interpret the recent double descent phenomenon by precisely characterizing a U-shaped risk curve within the “over-fitting regime,” while the second one develops a new type of statistical optimality for explaining the empirical successes of neural network classification in a teacher-student framework. In the end, this talk is concluded by proposing a generic training algorithm of sparse neural networks with statistical oracle property. All three pieces aim to blaze a statistical trail through the deep learning jungle.
Bio:
Guang Cheng is Professor of Statistics in the Department of Statistics at Purdue University. His research interests include semi-nonparametric inference, big data, machine learning, deep/reinforcement learning, and high-dimensional statistical inference. He is also affiliated with the Purdue Institute of Drug Discovery.
Tuesday, 01/14/2020, Time: 11:00am – 12:00pm
Veridical Data Science
Mong Learning Center (on the first floor) – Engineering VI
Bin Yu, Professor
UC Berkeley Departments of Statistics and EECS
Abstract:
Veridical data science extracts reliable and reproducible information from data, with an enriched technical language to communicate and evaluate empirical evidence in the context of human decisions and domain knowledge. Building and expanding on principles of statistics, machine learning, and the sciences, we propose the predictability, computability, and stability (PCS) framework for veridical data science. Our framework comprises both a workflow and documentation, and aims to provide responsible, reliable, reproducible, and transparent results across the entire data science life cycle. Moreover, we propose the PDR desiderata for interpretable machine learning as part of veridical data science (with PDR standing for predictive accuracy, descriptive accuracy, and relevancy to a human audience and a particular domain problem).
Bio:
Bin Yu is Chancellor’s Professor in the Departments of Statistics and of Electrical Engineering & Computer Sciences at the University of California at Berkeley and a former chair of Statistics at UC Berkeley. Her research focuses on practice, algorithm, and theory of statistical machine learning and causal inference. Her group is engaged in interdisciplinary research with scientists from genomics, neuroscience, and precision medicine.
In order to augment empirical evidence for decision-making, she and her group are investigating methods/algorithms (and associated statistical inference problems) such as dictionary learning, non-negative matrix factorization (NMF), EM and deep learning (CNNs and LSTMs), and heterogeneous effect estimation in randomized experiments (X-learner). Their recent algorithms include staNMF for unsupervised learning, iterative Random Forests (iRF) and signed iRF (s-iRF) for discovering predictive and stable high-order interactions in supervised learning, contextual decomposition (CD) and aggregated contextual decomposition (ACD) for phrase or patch importance extraction from an LSTM or a CNN.
She is a member of the U.S. National Academy of Sciences and a Fellow of the American Academy of Arts and Sciences. She was a Guggenheim Fellow in 2006 and the Tukey Memorial Lecturer of the Bernoulli Society in 2012. She was President of the IMS (Institute of Mathematical Statistics) in 2013-2014 and the Rietz Lecturer of the IMS in 2016. She received the E. L. Scott Award from COPSS (Committee of Presidents of Statistical Societies) in 2018. She was a founding co-director of the Microsoft Research Asia (MSR) Lab at Peking University and is a member of the scientific advisory board at the Alan Turing Institute in the UK.
Friday, 01/10/2020, Time: 2:30pm
Robust Variable Selection Method in Linear Models
Boyer 130
Gabriela Cohen Freue, Associate Professor of Statistics
University of British Columbia
Abstract:
In many current applications, scientists can easily measure a very large number of variables (for example, hundreds of protein levels), some of which are expected to be useful to explain or predict a specific response variable of interest using linear models. These potential explanatory variables are most likely to contain redundant or irrelevant information, and in many cases, their quality and reliability may be suspect. We developed two penalized robust regression estimators using an elastic net penalty that can be used to identify a useful subset of explanatory variables to predict the response, while protecting the resulting estimates against possible aberrant observations in the data set. In this talk, I will present the new estimators and an algorithm to compute them. I will also illustrate their performance in a simulation study and a proteomics biomarker study of cardiac allograft vasculopathy. Joint work with Professor Matias Salibian-Barrera, Ezequiel Smucler (former PDF), and David Kepplinger (PhD candidate) from UBC.
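As a generic illustration of combining a robust loss with an elastic net penalty (not the specific estimators presented in the talk), the sketch below runs proximal gradient on a Huber-loss regression with an l1 + l2 penalty and checks which variables survive in the presence of a few gross outliers.

```python
import numpy as np

def huber_grad(r, delta=1.345):
    """Gradient of the Huber loss: linear in the residual for small residuals,
    capped at +/- delta for large (possibly aberrant) ones."""
    return np.clip(r, -delta, delta)

def robust_elastic_net(X, y, lam=0.1, alpha=0.5, n_iter=2000):
    """Proximal-gradient sketch for Huber loss plus the elastic net penalty
    lam * (alpha * ||b||_1 + (1 - alpha)/2 * ||b||_2^2)."""
    n, p = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n + lam)     # safe step size
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta) / n + lam * (1 - alpha) * beta
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * lam * alpha, 0.0)  # soft threshold
    return beta

# toy data: 5 relevant variables out of 50, with a few gross outliers in y
rng = np.random.default_rng(5)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:5] = 2.0
y = X @ beta_true + rng.normal(size=n)
y[:10] += 20.0                                             # aberrant observations
beta_hat = robust_elastic_net(X, y)
print("selected variables:", np.flatnonzero(np.abs(beta_hat) > 0.1))
```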
Bio:
Gabriela Cohen Freue completed her Ph.D. in Statistics at the University of Maryland, College Park, and postdoctoral studies in Biostatistics through her participation in the Biomarkers in Transplantation (BiT) initiative, hosted by the University of British Columbia in Vancouver. She then joined the PROOF Centre of Excellence, where she led the statistical analysis of proteomics data. She is now an Associate Professor in the Department of Statistics at the University of British Columbia and a Canada Research Chair (Tier II) in Statistical Proteomics. Her research interests are in robust estimation and regularization of linear models with applications to Statistical Genomics and Proteomics.
Tuesday, 11/26/2019, Time: 11:00am
Methodological Advances in Non-parametric Spatio-temporal Point Process Models
Public Affairs Building 1234
Junhyung Park, Graduate Student
UCLA Department of Statistics
Abstract:
With point process data, we are interested in modeling and predicting the occurrence of events, especially those that are clustered and self-exciting. New methodological challenges are posed by the rapid growth in the potential areas of application for statistical point process models. We present recent work on applying point processes to model disease outbreaks and retaliations in violent gang crimes. Some applied methodological novelties for each area are discussed.
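For context on self-exciting point processes, here is a minimal Hawkes-process sketch with an exponential triggering kernel, simulated by Ogata-style thinning; the parameter values are arbitrary, and this is standard textbook material rather than the models developed in the talk.

```python
import numpy as np

def hawkes_intensity(t, history, mu=0.5, alpha=0.6, beta=1.5):
    """Conditional intensity of a self-exciting (Hawkes) process:
    mu + sum over past events of alpha * beta * exp(-beta * (t - t_i))."""
    past = history[history < t]
    return mu + np.sum(alpha * beta * np.exp(-beta * (t - past)))

def simulate_hawkes(T=50.0, mu=0.5, alpha=0.6, beta=1.5, seed=10):
    """Ogata-style thinning: propose from an upper bound on the intensity and
    accept each candidate with probability intensity / bound."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha * beta
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.random() * lam_bar <= hawkes_intensity(t, np.array(events), mu, alpha, beta):
            events.append(t)
    return np.array(events)

events = simulate_hawkes()
print(f"{len(events)} events; a Poisson process with rate mu alone would average {0.5 * 50:.0f}")
```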
Bio:
Junhyung Park is a fifth-year graduate student working under Rick Schoenberg. He has been working on developing applied methodology for point process models. Prior to coming to Los Angeles, he studied economics and mathematics at the University of Virginia and worked as a research associate at the International Monetary Fund. Prior to joining the Statistics Department, Junhyung earned master’s degrees in economics and biostatistics at UCLA.
Wednesday, 11/20/2019, Time: 3:30pm
Communication-Efficient Accurate Statistical Estimation
CHS 33-105
Jianqing Fan, Professor of Statistics and Finance
Princeton University
Webpage: https://orfe.princeton.edu/~jqfan/
A UCLA Statistics and Biostatistics Joint Seminar
Abstract:
When the data are stored in a distributed manner, direct application of traditional statistical inference procedures is often prohibitive due to communication cost and privacy concerns. This paper develops and investigates two Communication-Efficient Accurate Statistical Estimators (CEASE), implemented through iterative algorithms for distributed optimization. In each iteration, node machines carry out computation in parallel and communicate with the central processor, which then broadcasts the aggregated gradient vector to the node machines for new updates. The algorithms adapt to the similarity among loss functions on node machines, and converge rapidly when each node machine has a large enough sample size. Moreover, they do not require good initialization and enjoy linear convergence guarantees under general conditions. The contraction rate of optimization errors is derived explicitly, with dependence on the local sample size unveiled. In addition, the improved statistical accuracy per iteration is derived. By regarding the proposed method as a multi-step statistical estimator, we show that statistical efficiency can be achieved in finitely many steps in typical statistical applications. In addition, we give the conditions under which the one-step CEASE estimator is statistically efficient. Extensive numerical experiments on both synthetic and real data validate the theoretical results and demonstrate the superior performance of our algorithms. (Joint work with Yongyi Guo and Kaizheng Wang)
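Below is a stylized sketch of the communicate-then-update pattern described above, for distributed least squares: in each round, nodes send local gradients, the center broadcasts the average, and each node solves its local problem shifted by the difference between the global and local gradients. The closed-form local solve and the final averaging are simplifications made for the sketch, not the exact CEASE estimators.

```python
import numpy as np

def local_gradient(theta, X, y):
    """Least-squares gradient on one node machine."""
    return X.T @ (X @ theta - y) / len(y)

def communication_round(theta, shards):
    """One round: aggregate local gradients, broadcast the average, and let
    each node update using its own local curvature."""
    grads = [local_gradient(theta, X, y) for X, y in shards]
    g_global = np.mean(grads, axis=0)               # aggregated gradient from the center
    updates = []
    for X, y in shards:
        # for least squares, the shifted local problem has the closed form
        # theta - H_j^{-1} g_global; a general loss would need a local solver here
        H = X.T @ X / len(y)
        updates.append(theta - np.linalg.solve(H, g_global))
    return np.mean(updates, axis=0)

# toy distributed least squares: 10 machines, 200 samples each, 20 features
rng = np.random.default_rng(6)
p, machines, n_local = 20, 10, 200
beta = rng.normal(size=p)
shards = []
for _ in range(machines):
    X = rng.normal(size=(n_local, p))
    shards.append((X, X @ beta + rng.normal(size=n_local)))
theta = np.zeros(p)
for t in range(5):
    theta = communication_round(theta, shards)
    print(f"round {t}: error {np.linalg.norm(theta - beta):.4f}")
```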
Tuesday, 11/12/2019, Time: 11:00am
Newton Methods for Convolutional Neural Networks
Public Affairs Building 1234
Chih-Jen Lin, Professor
National Taiwan University
Abstract:
Deep learning involves a difficult non-convex optimization problem, which is often solved by stochastic gradient (SG) methods. While SG is usually effective, it is sometimes not very robust. Recently, Newton methods have been considered as an alternative optimization technique, but many issues must be addressed to make them practical. In this research project, we consider Convolutional Neural Networks (CNN) and conduct a detailed investigation of Newton methods. We broadly discuss issues including main operations, computational cost, and memory usage. Preliminary experiments indicate that Newton methods are less sensitive to parameters in comparison with stochastic gradient approaches. A software project for this research work is available at https://github.com/cjlin1/simpleNN.
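To give a flavor of matrix-free Newton methods, the sketch below takes damped Newton steps for regularized logistic regression, solving each Newton system by conjugate gradient with Hessian-vector products only; the talk's method for CNNs involves further ingredients (e.g., Gauss-Newton approximations and subsampling) that are not shown here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_cg_step(w, X, y, lam=1e-2, cg_iters=20):
    """One Newton step for l2-regularized logistic regression, with the linear
    system solved by conjugate gradient using only Hessian-vector products."""
    n = len(y)
    p_hat = sigmoid(X @ w)
    grad = X.T @ (p_hat - y) / n + lam * w
    D = p_hat * (1 - p_hat)
    hvp = lambda v: X.T @ (D * (X @ v)) / n + lam * v      # Hessian-vector product
    # conjugate gradient for H d = -grad
    d = np.zeros_like(w); r = -grad.copy(); s = r.copy()
    for _ in range(cg_iters):
        Hs = hvp(s)
        a = (r @ r) / (s @ Hs)
        d += a * s
        r_new = r - a * Hs
        if np.linalg.norm(r_new) < 1e-10:
            break
        s = r_new + (r_new @ r_new) / (r @ r) * s
        r = r_new
    return w + d

# toy logistic regression: a few Newton-CG steps reach high training accuracy
rng = np.random.default_rng(12)
n, p = 2_000, 50
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = (rng.random(n) < sigmoid(X @ w_true)).astype(float)
w = np.zeros(p)
for t in range(8):
    w = newton_cg_step(w, X, y)
print("training accuracy:", np.mean((sigmoid(X @ w) > 0.5) == y))
```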
Bio:
Chih-Jen Lin is currently a distinguished professor in the Department of Computer Science, National Taiwan University. He obtained his B.S. degree from National Taiwan University in 1993 and his Ph.D. degree from the University of Michigan in 1998. His major research areas include machine learning, data mining, and numerical optimization. He is best known for his work on support vector machines (SVM) for data classification. His software LIBSVM is one of the most widely used and cited SVM packages. For his research work he has received many awards, including the ACM KDD 2010 and ACM RecSys 2013 best paper awards. He is an IEEE fellow, an AAAI fellow, and an ACM fellow for his contributions to machine learning algorithms and software design. More information about him can be found at http://www.csie.ntu.edu.tw/~cjlin.
Tuesday, 11/05/2019, Time: 11:00am – 12:15pm
Data Science @ Google
Public Affairs Building 1234
David Diez, Data Scientist at Google
Abstract:
Interested to learn more about what it’s like to be a Data Scientist at Google? Join us for this talk to hear from a Googler about their experience. Feel free to come with questions!
Bio:
David Diez received his PhD in Statistics from UCLA in 2010. He was a postdoc at Harvard University from 2010 to 2012, where he did public health research on smoking bans and air pollution, while continuing to teach. In 2012, he joined the YouTube Data Science team at Google and has worked on a variety of projects, including modeling user presence based on website interactions, experiment design and analysis across a variety of contexts, and most recently working on projects related to Trust and Safety. Beyond Google, David’s interests extend to increasing access to education through OpenIntro (openintro.org), a nonprofit that he co-founded and started while at UCLA, which features open-source intro statistics textbooks used by about 15,000 US college students every year and thousands of students in other countries.
Wednesday, 10/30/2019, Time: 3:30pm – 4:30pm
The Role of Preferential Sampling in Spatial and Spatio-temporal Geostatistical Modeling
CHS 33-105A
Alan E. Gelfand, Emeritus Professor
Department of Statistical Science, Duke University
A UCLA Statistics and Biostatistics Joint Seminar
Abstract: TBA
Refreshments served at 3:00 PM in room 51-254 CHS
Friday, 10/18/2019, Time: 11:00am
Knockoffs or Perturbations, That Is a Question
Physics and Astronomy Building 1434A
Jun Liu, Professor
Department of Statistics, Harvard University
Abstract:
Simultaneously finding multiple influential variables and controlling the false discovery rate (FDR) for linear regression models is a fundamental problem with a long history. Researchers have recently proposed and examined a few innovative approaches surrounding the idea of creating “knockoff” variables (like spike-ins in biological experiments) to control the FDR. As opposed to creating knockoffs, a classical statistical idea is to introduce perturbations and examine the impacts. We introduce here a perturbation-based Gaussian Mirror (GM) method, which creates for each predictor variable a pair of perturbed “mirror variables” by adding and subtracting a randomly generated Gaussian random variable, and proceeds with a certain regression method, such as ordinary least squares or the Lasso. The mirror variables naturally lead to a test statistic that is highly effective for controlling the FDR. The proposed GM method does not require strong conditions on the covariates, nor any knowledge of the noise level or the relative magnitudes of the dimension p and sample size n. We observe that the GM method is more powerful than many existing methods in selecting important variables, subject to the control of the FDR, especially when high correlations among the covariates exist. Additionally, we provide a method to reliably estimate a confidence interval and upper bound for the number of false discoveries. If time permits, I will also discuss a simpler bootstrap-type perturbation method for estimating FDRs, which is also more powerful than knockoff methods when the predictors are reasonably correlated. The presentation is based on joint work with Xing Xin and Chenguang Dai.
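Here is a bare-bones sketch of the mirror construction in the low-dimensional (ordinary least squares) case: each variable is replaced by a pair obtained by adding and subtracting a Gaussian perturbation, a mirror statistic contrasts the two fitted coefficients, and a data-driven threshold targets the FDR level. The perturbation scale and threshold rule below are simplified stand-ins for the choices analyzed in the paper.

```python
import numpy as np

def gaussian_mirror_ols(X, y, q=0.1, seed=7):
    """Simplified Gaussian-mirror variable selection with OLS fits."""
    n, p = X.shape
    rng = np.random.default_rng(seed)
    M = np.zeros(p)
    for j in range(p):
        z = rng.normal(size=n)
        c = np.std(X[:, j])                           # crude choice of perturbation scale
        Xj = np.column_stack([X[:, j] + c * z, X[:, j] - c * z,
                              np.delete(X, j, axis=1)])
        b, *_ = np.linalg.lstsq(Xj, y, rcond=None)
        M[j] = abs(b[0] + b[1]) - abs(b[0] - b[1])    # mirror statistic
    # data-driven threshold aiming at estimated FDR <= q
    for t in np.sort(np.abs(M)):
        fdp = (np.sum(M <= -t) + 1) / max(np.sum(M >= t), 1)
        if fdp <= q:
            return np.flatnonzero(M >= t), M
    return np.array([], dtype=int), M

# toy example: 10 true signals among 100 variables, n = 300
rng = np.random.default_rng(8)
n, p = 300, 100
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:10] = 1.0
y = X @ beta + rng.normal(size=n)
selected, _ = gaussian_mirror_ols(X, y)
print("selected variables:", selected)
```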
To find out more about our speaker, please visit: http://sites.fas.harvard.edu/~junliu/
Tuesday, 10/08/2019, Time: 11:00am
From Machine Learning to Machine Programming
Public Affairs 1234
Armando Solar-Lezama, Associate Professor
Computer Science & Artificial Intelligence Lab, MIT
Abstract:
Program synthesis, the ability to generate programs from very high-level descriptions of their intended behavior, has become an important area of research in programming systems over the past decade. Thanks to advances in formal methods and the incorporation of machine learning, our ability to generate programs has improved to the point where it is becoming feasible to automate certain classes of programming tasks. But the applications of this technology go beyond software engineering. In some cases, program synthesis could serve as a direct replacement to more established machine learning techniques, especially in settings where data efficiency, interpretability or verifiability are important goals. Even in cases where full replacement of established machine learning techniques is not feasible, insights from program synthesis can help address problems such as explaining decisions made by neural network models.
Tuesday, 10/01/2019, Time: 11:00am
Data Blitz from Graduate Students
Public Affairs Building (PAB) 1234
Graduate Students from UCLA Statistics
We will kick off the statistics seminar series by inviting three junior graduate students as our speakers. Each will present ongoing research for 15 minutes, followed by a 3-minute Q&A session. Listening to graduate students is the best way to find out about cutting-edge research in the statistics department. We look forward to seeing you all.
Speaker: Tianyi Sun
Advisor: Jessica Li
Title: Modeling the Multivariate Structure of Single-Cell RNA-seq Data
Speaker: Stephanie Stacy
Advisor: Tao Gao
Title: Intuitive Communication Through an Imagined ‘We’
Speaker: Michael Schatz
Advisor: Rick Schoenberg
Title: The ARMA Point Process
Wednesday, 09/25/2019, Time: 2:30pm
Structure v. Scale: A Case for Enhanced Inference
Boelter Hall 5249
Hamid Krim, Professor
ECE Dept., VISSTA Laboratory, North Carolina State University
Abstract:
Relative to its low-dimensional counterpart, high-dimensional data exhibit diverse distinct properties, and capturing these in the presence of the formidable computational cost typical of classical approaches can be a challenge. Given the typically limited number of degrees of freedom of any data, we propose a lower-rank structure for the information space relative to its embedding space. We further argue that the self-representative nature of the data strongly suggests the flexible structure of the union-of-subspaces (UoS) model, as a generalization of a linear subspace model. This proposed structure preserves the simplicity of linear subspace models, with an additional capacity for piece-wise linear approximation of nonlinear data. We show a sufficient condition to use l1 minimization to reveal the underlying UoS structure, and further propose a bi-sparsity model (RoSure) as an effective strategy to recover the given data characterization by the UoS model from non-conforming errors/corruptions. This structural characterization, albeit powerful for many applications, can be shown to be limited for large-scale data (images) with commonly shared features. We make a case for further refinement by invoking a joint and principled scale-structure atomic characterization, which is demonstrated to improve performance. The resulting Deep Dictionary Learning approach is based on symbiotically formulating a classification problem regularized by a reconstruction problem. A theoretical rationale is also provided to contrast this work with Convolutional Neural Networks, with a demonstrably competitive performance. Substantiating examples are provided, and the application and performance of these approaches are shown for a wide range of problems such as video segmentation and object classification.
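The self-representative idea behind the UoS model can be illustrated with a small l1 self-representation experiment: each data point is written as a sparse combination of the other points, and points from the same subspace tend to select each other. This sketch uses a plain ISTA solver and is not the RoSure algorithm itself.

```python
import numpy as np

def ista_l1(A, b, lam=0.05, n_iter=500):
    """Plain ISTA for min_c 0.5*||A c - b||^2 + lam*||c||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = c - step * A.T @ (A @ c - b)
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return c

# two 3-dimensional subspaces in R^30, 20 points each (columns of X)
rng = np.random.default_rng(9)
d, n_per = 30, 20
bases = [np.linalg.qr(rng.normal(size=(d, 3)))[0] for _ in range(2)]
X = np.column_stack([B @ rng.normal(size=(3, n_per)) for B in bases])

# sparse self-representation: express each point using the other points
C = np.zeros((X.shape[1], X.shape[1]))
for i in range(X.shape[1]):
    others = np.delete(np.arange(X.shape[1]), i)
    C[others, i] = ista_l1(X[:, others], X[:, i])
within = np.abs(C[:n_per, :n_per]).sum()
across = np.abs(C[n_per:, :n_per]).sum()
print(f"within-subspace weight {within:.2f} vs across-subspace weight {across:.2f}")
```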
Bio:
Hamid Krim (ahk@ncsu.edu) received his B.Sc., M.Sc., and Ph.D. in Electrical Engineering. He was a Member of Technical Staff at AT&T Bell Labs, where he conducted R&D in the areas of telephony and digital communication systems/subsystems. Following an NSF postdoctoral fellowship at Foreign Centers of Excellence, LSS/University of Orsay, Paris, France, he joined the Laboratory for Information and Decision Systems, MIT, Cambridge, MA as a Research Scientist, performing and supervising research. He is presently Professor of Electrical Engineering in the ECE Department, North Carolina State University, Raleigh, leading the Vision, Information and Statistical Signal Theories and Applications Laboratory. His research interests are in statistical signal and image analysis and mathematical modeling with a keen emphasis on applied problems in classification and recognition using geometric and topological tools. His research work has been funded by many federal and industrial agencies, including an NSF CAREER award. He has served on the IEEE editorial board of SP and the TCs of SPTM and the Big Data Initiative, as well as serving as an AE of the new IEEE Transactions on SP on Information Processing over Networks and of the IEEE SP Magazine. He is also one of the 2015-2016 Distinguished Lecturers of the IEEE SP Society.