The singular value decomposition (SVD) is a crucial tool in machine learning and statistical data analysis. However, it is highly susceptible to outliers in the data matrix. Existing robust SVD algorithms often sacrifice speed for robustness or fail in the presence of only a few outliers. This study introduces an efficient algorithm, called Spherically Normalized SVD, for robust SVD approximation that is highly insensitive to outliers, computationally scalable, and accurate in approximating singular vectors. The proposed algorithm achieves remarkable speed by applying a standard reduced-rank SVD algorithm only twice, to appropriately scaled data, and it significantly outperforms competing algorithms in computation time. To assess the robustness of the approximated singular vectors and their subspaces against data contamination, we introduce new notions of breakdown points for matrix-valued input, including row-wise, column-wise, and block-wise breakdown points. Theoretical and empirical analyses demonstrate that our algorithm exhibits higher breakdown points than standard SVD and its modifications. We empirically validate the effectiveness of our approach in applications such as robust low-rank approximation and robust principal component analysis of high-dimensional microarray datasets. Overall, our study presents a highly efficient and robust solution for SVD approximation that overcomes the limitations of existing algorithms in the presence of outliers.
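The abstract does not spell out the exact two-pass procedure, but the core idea of spherical normalization can be sketched as follows: scale each observation onto the unit sphere so that outlying rows cannot dominate the fit, and then run an ordinary reduced-rank SVD on the scaled data. The snippet below is a minimal numpy illustration of that idea only; it is not the authors' algorithm, and the scaling choice is an assumption.

```python
import numpy as np

def spherically_normalized_svd(X, rank, eps=1e-12):
    """Illustrative sketch: reduced-rank SVD of row-wise spherically normalized data.

    This shows only the general idea of spherical normalization before an SVD;
    it is NOT the two-pass algorithm described in the paper.
    """
    # Project every row onto the unit sphere (outlying rows lose their leverage).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xs = X / np.maximum(norms, eps)

    # Standard reduced-rank SVD of the scaled data.
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Example: a few grossly corrupted rows barely perturb the leading subspace.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30)) @ rng.normal(size=(30, 30))
X[:5] += 1e3 * rng.normal(size=(5, 30))          # row-wise outliers
U, s, Vt = spherically_normalized_svd(X, rank=5)
```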
Gary Lorden provided a number of fundamental and novel insights into sequential hypothesis testing and changepoint detection. In this article we provide an overview of Lorden's contributions in the context of existing results in those areas, together with some extensions made possible by Lorden's work, and we mention areas of application including threat detection in physical-computer systems, near-Earth space informatics, epidemiology, clinical trials, and finance.
Feature models have become a de facto standard for representing variability in software product lines. UVL (Universal Variability Language) is a language that expresses features, their dependencies, and the constraints between them. UVL models are written in plain text and follow a syntactic structure that must be processed by a parser; the parser defines the specific syntactic rules that a model must comply with to be processed correctly. Researchers maintain datasets containing numerous feature models, and the textual form of these models is tied to a particular version of the language and its parser. When the parser is updated to support new constructs or to correct previous ones, these feature models often become incompatible, introducing inconsistencies within the dataset. In this paper, we present UVL Sentinel, a tool that analyzes a dataset of feature models in UVL format, generates error analysis reports describing the detected errors, and, when possible, applies a syntactic processing step that implements the most common fixes. The tool detects the incompatibilities that arise in a dataset's feature models when the parser is updated and attempts to correct the most common syntactic errors, facilitating the management of the dataset and the adaptation of its models to the new parser version. Our tool was evaluated on a dataset of 1,479 UVL models from different sources and helped semi-automatically fix 185 warnings and syntax errors.
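To make the sweep-and-fix workflow described above concrete, here is a rough Python sketch of scanning a folder of UVL files, reporting errors, and retrying after applying common textual fixes. The validator and the fix rules are toy placeholders of our own, not UVL Sentinel's actual checks or API.

```python
from pathlib import Path

def toy_uvl_check(text):
    """Toy stand-in for a real UVL parser: flags two frequent textual issues.

    Real UVL parsing is far richer; this only serves to drive the sweep loop.
    """
    if not any(line.strip() == "features" for line in text.splitlines()):
        raise SyntaxError("no 'features' block found")
    if "\t" in text:
        raise SyntaxError("tab characters used for indentation")

# Hypothetical examples of "most common" fixes; not UVL Sentinel's actual rules.
COMMON_FIXES = [
    lambda text: text.replace("\t", "    "),      # tabs -> spaces
    lambda text: text.rstrip() + "\n",            # ensure trailing newline
]

def sweep_dataset(folder):
    """Report errors across a UVL dataset and retry after applying common fixes."""
    report = []
    for path in sorted(Path(folder).glob("**/*.uvl")):
        text = path.read_text(encoding="utf-8")
        try:
            toy_uvl_check(text)
            continue                              # model parses: nothing to do
        except SyntaxError as err:
            fixed = text
            for fix in COMMON_FIXES:
                fixed = fix(fixed)
            try:
                toy_uvl_check(fixed)
                status = "fixed automatically"
            except SyntaxError:
                status = f"needs manual attention: {err}"
            report.append((str(path), status))
    return report
```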
Modern, large-scale monitoring systems have to process and store vast amounts of log data in near real-time. At query time, these systems have to find relevant logs based on the content of the log messages, using support structures that can scale to these amounts of data while remaining efficient to use. We present our novel Compressed Probabilistic Retrieval algorithm (COPR), capable of answering Multi-Set Multi-Membership-Queries, which can be used as an alternative to existing indexing structures for streamed log data. In our experiments, COPR required up to 93% less storage space than the tested state-of-the-art inverted index and produced up to four orders of magnitude fewer false positives than the tested state-of-the-art membership sketch. Additionally, COPR achieved up to 250 times higher query throughput than the tested inverted index and up to 240 times higher query throughput than the tested membership sketch.
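To clarify the query type being targeted (under our reading of the abstract), a multi-set multi-membership query asks, for each of several query tokens, which of many sets (e.g., log streams) contain it. The sketch below is a naive exact baseline of our own for this query interface, kept only to illustrate the problem; it is neither COPR nor any of the compared structures.

```python
from collections import defaultdict

class ExactMultiSetMembership:
    """Naive exact baseline: token -> ids of the sets (log streams) containing it."""

    def __init__(self):
        self.postings = defaultdict(set)

    def insert(self, set_id, tokens):
        for tok in tokens:
            self.postings[tok].add(set_id)

    def query(self, tokens):
        # For every queried token, return the ids of all sets containing it.
        return {tok: self.postings.get(tok, set()) for tok in tokens}

idx = ExactMultiSetMembership()
idx.insert("stream-1", ["error", "disk", "full"])
idx.insert("stream-2", ["error", "timeout"])
print(idx.query(["error", "timeout"]))
# -> {'error': {'stream-1', 'stream-2'}, 'timeout': {'stream-2'}} (set order may vary)
```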
Random Forest (RF) is well known as an efficient ensemble learning method in terms of predictive performance. It is also considered a black box because of its hundreds of deep decision trees. This lack of interpretability can be a real drawback for the acceptance of RF models in several real-world applications, especially those affecting people's lives, such as healthcare, security, and law. In this work, we present Forest-ORE, a method that makes RF interpretable via an optimized rule ensemble (ORE) for local and global interpretation. Unlike other rule-based approaches aiming at interpreting the RF model, this method simultaneously considers several parameters that influence the choice of an interpretable rule ensemble. Existing methods often prioritize predictive performance over interpretability coverage and do not provide information about existing overlaps or interactions between rules. Forest-ORE uses a mixed-integer optimization program to build an ORE that considers the trade-off between predictive performance, interpretability coverage, and model size (size of the rule ensemble, rule lengths, and rule overlaps). In addition to providing an ORE competitive in predictive performance with RF, this method enriches the ORE with other rules that afford complementary information. It also enables monitoring of the rule selection process and delivers various metrics that can be used to generate a graphical representation of the final model. This framework is illustrated through an example, and its robustness is assessed through 36 benchmark datasets. A comparative analysis with well-known methods shows that Forest-ORE provides an excellent trade-off between predictive performance, interpretability coverage, and model size.
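The kind of trade-off the mixed-integer program balances (coverage gained per rule versus redundancy and ensemble size) can be illustrated with a drastically simplified greedy stand-in. The toy selection rule, the weights `alpha`/`beta`, and the budget below are our own illustrative assumptions, not the Forest-ORE formulation.

```python
def greedy_rule_selection(rules, covers, n_samples, alpha=1.0, beta=0.1, budget=10):
    """Toy greedy stand-in for Forest-ORE's mixed-integer program (illustrative only).

    rules:  list of rule identifiers
    covers: dict mapping rule id -> set of sample indices the rule covers correctly
    Greedily adds the rule with the best marginal gain of
        alpha * (newly covered samples) - beta * (overlap with the current cover)
    until the budget on ensemble size is reached.
    """
    selected, covered = [], set()
    while len(selected) < budget:
        best, best_gain = None, 0.0
        for r in rules:
            if r in selected:
                continue
            gain = alpha * len(covers[r] - covered) - beta * len(covers[r] & covered)
            if gain > best_gain:
                best, best_gain = r, gain
        if best is None:
            break
        selected.append(best)
        covered |= covers[best]
    return selected, len(covered) / n_samples   # rules kept, interpretability coverage

covers = {"r1": {0, 1, 2}, "r2": {2, 3}, "r3": {4}}
print(greedy_rule_selection(["r1", "r2", "r3"], covers, n_samples=5, budget=2))
```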
Transfer learning (TL) has emerged as a powerful tool to supplement data collected for a target task with data collected for a related source task. The Bayesian framework is natural for TL because information from the source data can be incorporated in the prior distribution for the target data analysis. In this paper, we propose and study Bayesian TL methods for the normal-means problem and multiple linear regression. We propose two classes of prior distributions. The first class assumes that the difference between the parameters for the source and target tasks is sparse, i.e., many parameters are shared across tasks. The second assumes that none of the parameters are shared across tasks, but that the differences are bounded in $\ell_2$-norm. For the sparse case, we propose a Bayes shrinkage estimator with theoretical guarantees under mild assumptions. The proposed methodology is tested on synthetic data and outperforms state-of-the-art TL methods. We then use this method to fine-tune the last layer of a neural network model to predict the molecular gap property in a materials science application. We report improved performance compared to classical fine-tuning and to methods using only the target data.
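In the normal-means setting, the two prior classes described above can be written schematically as follows (our notation and the spike-and-slab example are assumptions, not necessarily the paper's exact construction):

$$
y_s \sim \mathcal{N}(\theta_s, \sigma_s^2 I), \qquad y_t \sim \mathcal{N}(\theta_t, \sigma_t^2 I), \qquad \delta = \theta_t - \theta_s,
$$

with (i) a sparse-transfer prior under which most coordinates of $\delta$ are exactly zero (e.g., a spike-and-slab prior on $\delta$), or (ii) a dense-transfer prior under which no coordinates are shared but the difference is bounded, $\|\delta\|_2 \le r$.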
Quantifying heterogeneity is an important issue in meta-analysis, and among the existing measures, the $I^2$ statistic is the most commonly used in the literature. In this paper, we show that the $I^2$ statistic was, in fact, problematically or even incorrectly defined from the very beginning. To support this claim, we first present a motivating example showing that the $I^2$ statistic depends heavily on the study sample sizes and may consequently yield contradictory conclusions about the amount of heterogeneity. Moreover, by drawing a connection between ANOVA and meta-analysis, we show that the $I^2$ statistic mistakenly applies the sampling errors of the estimators rather than the variances of the study populations. Inspired by this, we introduce an Intrinsic measure for Quantifying the heterogeneity in meta-analysis and study its statistical properties to clarify why it is superior to the existing measures. We further propose an optimal estimator, referred to as the IQ statistic, for the new measure of heterogeneity that can be readily applied in meta-analysis. Simulations and real data analysis demonstrate that the IQ statistic provides a nearly unbiased estimate of the true heterogeneity and is also independent of the study sample sizes.
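For reference, the standard definition of the statistic under discussion (Higgins and Thompson's formulation) is

$$
I^2 = \max\left\{0,\ \frac{Q - (k-1)}{Q}\right\}, \qquad Q = \sum_{i=1}^{k} w_i\,(\hat\theta_i - \bar\theta)^2, \qquad w_i = \frac{1}{\hat v_i}, \qquad \bar\theta = \frac{\sum_i w_i \hat\theta_i}{\sum_i w_i},
$$

where $k$ is the number of studies, $\hat\theta_i$ are the study-level estimates, and $\hat v_i$ their estimated sampling variances. Since the weights $w_i$ grow with the study sample sizes, $I^2$ inherits exactly the sample-size dependence criticized above.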
Deep generative models aim to learn the underlying distribution of the data and to generate new samples from it. Despite the diversity of generative models and their high-quality generation performance in practice, most of them lack rigorous theoretical convergence proofs. In this work, we establish several convergence results for OT-Flow, one such deep generative model. First, by reformulating the OT-Flow framework, we establish the $\Gamma$-convergence of the OT-Flow formulation to the corresponding optimal transport (OT) problem as the regularization parameter $\alpha$ goes to infinity. Second, since the loss function is approximated by the Monte Carlo method in training, we establish the convergence of the discrete loss function to the continuous one as the sample size $N$ goes to infinity. Meanwhile, the approximation capability of the neural network provides an upper bound for the discrete loss function at its minimizers. Together, these results provide convincing theoretical assurances for OT-Flow.
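Schematically, and in our notation rather than the paper's, the second result concerns the empirical (Monte Carlo) loss

$$
\mathcal{L}_N(\theta) = \frac{1}{N}\sum_{i=1}^{N} \ell(x_i;\theta), \qquad x_i \overset{\text{i.i.d.}}{\sim} \rho,
$$

and its convergence to the population loss $\mathcal{L}(\theta) = \mathbb{E}_{x\sim\rho}\,[\ell(x;\theta)]$ as $N \to \infty$, while the first result is a $\Gamma$-convergence statement for the $\alpha$-regularized OT-Flow objective as $\alpha \to \infty$.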
PECR is a formal system designed to explore the computability properties of programs on a real-world computer. As such, PECR incorporates the finite resources of the machine on which a program is to be executed. The main features of the formal system are presented and its practical applications discussed. Of particular interest is the application of the formal system to the exploration of the laws of nature that lead to rigorous constructions of computer models of real-world phenomena.
Multiple Instance Learning (MIL) is a weakly supervised paradigm that has been successfully applied to many different scientific areas and is particularly well suited to medical imaging. Probabilistic MIL methods, and more specifically Gaussian Processes (GPs), have achieved excellent results due to their high expressiveness and uncertainty quantification capabilities. One of the most successful GP-based MIL methods, VGPMIL, resorts to a variational bound to handle the intractability of the logistic function. Here, we formulate VGPMIL using P\'olya-Gamma random variables. This approach yields the same variational posterior approximations as the original VGPMIL, which is a consequence of the two representations that the Hyperbolic Secant distribution admits. This leads us to propose a general GP-based MIL method that takes different forms by simply leveraging distributions other than the Hyperbolic Secant one. Using the Gamma distribution we arrive at a new approach that obtains competitive or superior predictive performance and efficiency. This is validated in a comprehensive experimental study including one synthetic MIL dataset, two well-known MIL benchmarks, and a real-world medical problem. We expect that this work provides useful ideas beyond MIL that can foster further research in the field.
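The key identity behind Pólya-Gamma augmentation of the logistic likelihood (due to Polson, Scott, and Windle, 2013) is

$$
\frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}} \;=\; 2^{-b}\, e^{\kappa\psi} \int_0^\infty e^{-\omega \psi^2/2}\; p_{\mathrm{PG}(b,0)}(\omega)\, d\omega, \qquad \kappa = a - \tfrac{b}{2},
$$

which renders the logistic likelihood conditionally Gaussian in $\psi$ once the auxiliary variable $\omega$ is introduced, and it is this conditional Gaussianity that the variational treatment of the GP-based MIL model exploits.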
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
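A canonical instance of the implicit regularization mentioned above: for an underdetermined, consistent least-squares problem, gradient descent (or gradient flow) initialized at zero and run with a suitably small step size converges to the minimum-$\ell_2$-norm interpolant,

$$
\hat{w} = \arg\min_{w} \|w\|_2 \quad \text{subject to } Xw = y, \qquad \hat{w} = X^\top (X X^\top)^{-1} y \ \ (\text{when } XX^\top \text{ is invertible}),
$$

even though the objective $\|Xw - y\|_2^2$ itself contains no explicit norm penalty.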