We consider the extreme eigenvalues of the sample covariance matrix $Q=YY^*$ under the generalized elliptical model $Y=\Sigma^{1/2}XD.$ Here $\Sigma$ is a bounded $p \times p$ positive definite deterministic matrix representing the population covariance structure, $X$ is a $p \times n$ random matrix whose columns are either independently sampled from the unit sphere in $\mathbb{R}^p$ or consist of i.i.d. centered entries with variance $n^{-1},$ and $D$ is a diagonal random matrix with i.i.d. entries, independent of $X.$ Such a model finds important applications in statistics and machine learning. In this paper, assuming that $p$ and $n$ are comparably large, we prove that the extreme edge eigenvalues of $Q$ can asymptotically follow several types of distributions depending on $\Sigma$ and $D.$ These distributions include: Gumbel, Fr\'echet, Weibull, Tracy-Widom, Gaussian, and their mixtures. On the one hand, when the random variables in $D$ have unbounded support, the edge eigenvalues of $Q$ can follow either a Gumbel or a Fr\'echet distribution depending on the tail decay of $D.$ On the other hand, when the random variables in $D$ have bounded support, under some mild regularity assumptions on $\Sigma,$ the edge eigenvalues of $Q$ can exhibit Weibull, Tracy-Widom, Gaussian distributions, or their mixtures. Based on our theoretical results, we consider two important applications. First, we propose statistics and a procedure to detect and estimate possible spikes for elliptically distributed data. Second, in the context of a factor model, using a multiplier bootstrap procedure with suitably selected weights in $D,$ we propose a new algorithm to estimate the number of factors. Numerical simulations confirm the accuracy and power of our proposed methods and illustrate their better performance compared to some existing methods in the literature.
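As a concrete illustration of the model, here is a minimal simulation sketch; the Toeplitz choice of $\Sigma$ and the Pareto weights in $D$ are our own hypothetical choices, not the paper's assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 400

# Population covariance Sigma: a simple Toeplitz choice (illustrative only)
Sigma = np.array([[0.5 ** abs(i - j) for j in range(p)] for i in range(p)])
Sigma_half = np.linalg.cholesky(Sigma)

# X: i.i.d. centered entries with variance 1/n
X = rng.standard_normal((p, n)) / np.sqrt(n)

# D: diagonal with i.i.d. weights; a heavy-tailed (unbounded-support) choice,
# the regime in which Gumbel/Frechet edge behavior can arise
D = np.diag(rng.pareto(3.0, size=n) + 1.0)

Y = Sigma_half @ X @ D
Q = Y @ Y.T
print("largest eigenvalues of Q:", np.linalg.eigvalsh(Q)[-3:])
```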
Assessing causal effects in the presence of unmeasured confounding is a challenging problem. Although auxiliary variables, such as instrumental variables, are commonly used to identify causal effects, they are often unavailable in practice due to stringent and untestable conditions. To address this issue, previous studies have utilized linear structural equation models to show that the causal effect can be identifiable when the noise variables of the treatment and outcome are both non-Gaussian. In this paper, we investigate the problem of identifying the causal effect using auxiliary covariates and non-Gaussianity from the treatment. Our key idea is to characterize the impact of unmeasured confounders using an observed covariate, assuming the confounders are all Gaussian. The auxiliary covariate can be an invalid instrument or an invalid proxy variable. We demonstrate that the causal effect can be identified using this measured covariate, even when the only source of non-Gaussianity comes from the treatment. We then extend the identification results to the multi-treatment setting and provide sufficient conditions for identification. Based on our identification results, we propose a simple and efficient procedure for estimating causal effects and show the $\sqrt{n}$-consistency of the proposed estimator. Finally, we evaluate the performance of our estimator through simulation studies and an application.
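Schematically, the setting can be written as a linear structural equation model of the following form (the notation is ours, not necessarily the paper's):
\[
W = \lambda U + \epsilon_W, \qquad X = \alpha U + \epsilon_X, \qquad Y = \beta X + \gamma U + \epsilon_Y,
\]
where $U$ is the unmeasured confounder, $W$ the auxiliary covariate, $X$ the treatment, and $Y$ the outcome; $U$, $\epsilon_W$, and $\epsilon_Y$ are Gaussian while $\epsilon_X$ is non-Gaussian, and identification of the causal effect $\beta$ rests on that non-Gaussianity.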
Functional magnetic resonance imaging (fMRI) data contain high levels of noise and artifacts. To avoid contamination of downstream analyses, fMRI-based studies must identify and remove these noise sources prior to statistical analysis. One common approach is the "scrubbing" of fMRI volumes that are thought to contain high levels of noise. However, existing scrubbing techniques are based on ad hoc measures of signal change. We consider scrubbing via outlier detection, where volumes containing artifacts are treated as multidimensional outliers. Robust multivariate outlier detection methods based on robust distances (RDs), which are related to the Mahalanobis distance, have been proposed. These RDs have a known distribution when the data are i.i.d. normal, and that distribution can be used to determine an outlier threshold; fMRI data, however, violate these assumptions. Here, we develop a robust multivariate outlier detection method that is applicable to non-normal data. The objective is to obtain threshold values for flagging outlying volumes based on their RDs. We propose two threshold candidates that share the same initial steps; the choice between them depends on the researcher's purpose. Our main steps are dimension reduction and selection; robust univariate outlier imputation, to remove the effect of outliers on the distribution; and estimation of an outlier threshold based on the upper quantile of the RD distribution without outliers. The first threshold candidate is an upper quantile of the empirical distribution of RDs obtained from the imputed data. The second is the corresponding upper quantile computed via a nonparametric bootstrap, which accounts for uncertainty in the empirical quantile. We compare our proposed fMRI scrubbing method to motion scrubbing, data-driven scrubbing, and restrictive parametric multivariate outlier detection methods.
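A minimal sketch of the robust-distance-plus-quantile idea, using scikit-learn's MCD estimator; this omits the paper's dimension-reduction, imputation, and bootstrap steps:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def flag_outliers(X, q=0.99, seed=0):
    """Flag rows of X (volumes x features) whose squared robust distance
    exceeds the empirical q-quantile of the robust distances."""
    mcd = MinCovDet(random_state=seed).fit(X)
    rd2 = mcd.mahalanobis(X)        # squared robust distances
    thresh = np.quantile(rd2, q)    # empirical-quantile threshold
    return rd2 > thresh, rd2, thresh
```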
The Naive Bayesian classifier is a popular classification method employing the Bayesian paradigm. Its assumption of conditional independence among input variables is appealing in theory but can lead to majority-vote-style behaviour. Conditional independence rarely holds in practice, and violations of it introduce decision biases into the estimates. In Naive Bayes, features are called independent because they are assumed to have no conditional correlation or dependency when predicting a class. In this paper, we focus on the optimal partition of features by proposing a novel technique called the Comonotone-Independence Classifier (CIBer), which is able to overcome the challenges posed by the Naive Bayes method. On different datasets, we clearly demonstrate the efficacy of our technique, achieving lower error rates and higher or equivalent accuracy compared to models such as Random Forests and XGBoost.
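A toy numeric illustration of why the independence product can act like a majority vote (our own example, not from the paper): duplicating one informative feature multiplies its evidence as if the copies were independent votes.

```python
import numpy as np

def nb_log_posterior_ratio(log_lr_per_feature):
    # With equal class priors, Naive Bayes computes
    # log P(c1|x)/P(c0|x) as the sum of per-feature log-likelihood ratios
    return np.sum(log_lr_per_feature)

single = nb_log_posterior_ratio([0.2])           # one informative feature
duplicated = nb_log_posterior_ratio([0.2] * 10)  # ten near-copies of it
print(single, duplicated)  # the same evidence is (incorrectly) counted tenfold
```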
Experimental data are often composed of variables measured independently, at different sampling rates (non-uniform $\Delta t$ between successive measurements); at a specific time point, only a subset of all variables may be sampled. Approaches to identifying dynamical systems from such data typically use interpolation, imputation, or subsampling to reorganize or modify the training data $\textit{prior}$ to learning. Partial physical knowledge may also be available $\textit{a priori}$ (accurately or approximately), and data-driven techniques can complement this knowledge. Here we exploit neural network architectures based on numerical integration methods and $\textit{a priori}$ physical knowledge to identify the right-hand side of the underlying governing differential equations. Iterates of such neural-network models allow for learning from data sampled at arbitrary time points $\textit{without}$ data modification. Importantly, we integrate the network with available partial physical knowledge in "physics-informed gray-boxes"; this enables learning unknown kinetic rates or microbial growth functions while simultaneously estimating experimental parameters.
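A minimal sketch of such a physics-informed gray-box step, assuming a classical RK4 integrator and a placeholder correction function standing in for the trained network:

```python
import numpy as np

def known_physics(y, theta):
    # Known part of the RHS, e.g. linear decay with uncertain rate theta
    return -theta * y

def gray_box_rhs(y, theta, nn_correction):
    # Gray box: known physics plus a learned correction term
    return known_physics(y, theta) + nn_correction(y)

def rk4_step(f, y, dt):
    # One classical Runge-Kutta step of the learned/known RHS
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def predict(y0, t0, t1, theta, nn_correction, n_steps=50):
    # Iterating the step reaches an arbitrary observation time t1
    # without interpolating or resampling the training data
    dt = (t1 - t0) / n_steps
    y = y0
    for _ in range(n_steps):
        y = rk4_step(lambda s: gray_box_rhs(s, theta, nn_correction), y, dt)
    return y

# Usage with a trivial placeholder "network"
y1 = predict(np.array([1.0]), 0.0, 2.5, theta=0.3, nn_correction=lambda y: 0.0 * y)
```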
We consider the problem of online interval scheduling on a single machine, where intervals arrive online in an order chosen by an adversary and the algorithm must output a set of non-conflicting intervals. Traditionally in scheduling theory, it is assumed that intervals arrive in order of increasing start times. We drop that assumption and allow intervals to arrive in any possible order. We call this variant any-order interval selection (AOIS). We assume that some online acceptances can be revoked, but a feasible solution must always be maintained. For unweighted intervals and deterministic algorithms, no bounded competitive ratio is achievable. Under the assumption that there are at most $k$ distinct interval lengths, we give a simple algorithm that achieves a competitive ratio of $2k$ and show that it is optimal among deterministic algorithms and among a restricted class of randomized algorithms we call memoryless, contributing to an open question of Adler and Azar (2003), namely whether a randomized algorithm without access to history can achieve a constant competitive ratio. We connect our model to the problem of call control on the line and show how the algorithms of Garay et al. (1997) can be applied to our setting, resulting in an optimal algorithm for the case of proportional weights. We also discuss the case of intervals with arbitrary weights and show how to convert the single-length algorithm of Fung et al. (2014) into a classify-and-randomly-select algorithm that achieves a competitive ratio of $2k$. Finally, we consider the case of intervals arriving in a random order and show that, for single-length instances, a one-directional algorithm (i.e., one replacing intervals in only one direction) is the only deterministic memoryless algorithm that can possibly benefit from random arrivals.
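For intuition, here is a schematic of what a revocable acceptance rule can look like in this model; the rule and the constant below are hypothetical illustrations, not the $2k$-competitive algorithm of the paper:

```python
def accept_with_revocation(accepted, new, shrink=0.5):
    """Illustrative revocable-acceptance rule (not the paper's algorithm):
    accept `new` (a (start, end) interval) only if it is at most a
    `shrink` fraction of the length of every conflicting accepted
    interval; revoke those conflicts so the solution stays feasible."""
    conflicts = [iv for iv in accepted
                 if iv[0] < new[1] and new[0] < iv[1]]
    new_len = new[1] - new[0]
    if all(new_len <= shrink * (iv[1] - iv[0]) for iv in conflicts):
        for iv in conflicts:
            accepted.remove(iv)
        accepted.append(new)
        return True
    return False
```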
The new regulatory framework proposal on Artificial Intelligence (AI) published by the European Commission establishes a new risk-based legal approach. The proposal highlights the need to develop adequate risk assessments for the different uses of AI. This risk assessment should address, among other things, the detection and mitigation of bias in AI. In this work we analyze statistical approaches to measuring biases in automatic decision-making systems. We focus our experiments on face recognition technologies. We propose a novel way to measure biases in machine learning models using a statistical approach based on the N-Sigma method. N-Sigma is a popular statistical approach used to validate hypotheses in sciences such as physics and the social sciences, but its application to machine learning is as yet unexplored. In this work we study how to apply this methodology to develop new risk assessment frameworks based on bias analysis, and we discuss the main advantages and drawbacks with respect to other popular statistical tests.
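A minimal sketch of an N-Sigma-style check on a performance gap between two demographic groups (schematic only; the paper's exact formulation may differ):

```python
import numpy as np

def n_sigma_bias(scores_a, scores_b, n=3.0):
    """Flag a performance gap between two groups as biased when it
    exceeds n standard errors of the gap estimate."""
    gap = np.mean(scores_a) - np.mean(scores_b)
    se = np.sqrt(np.var(scores_a, ddof=1) / len(scores_a)
                 + np.var(scores_b, ddof=1) / len(scores_b))
    return abs(gap) > n * se, gap, se
```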
We consider a causal inference model in which individuals interact in a social network and may not comply with the assigned treatments. In particular, we suppose that the form of network interference is unknown to researchers. To estimate meaningful causal parameters in this situation, we introduce a new concept of exposure mapping, which summarizes potentially complicated spillover effects into a fixed-dimensional statistic of instrumental variables. We investigate identification conditions for the intention-to-treat effects and the average treatment effects for compliers, while explicitly considering the possibility that the exposure mapping is misspecified. Based on our identification results, we develop nonparametric estimation procedures via inverse probability weighting. Their asymptotic properties, including consistency and asymptotic normality, are investigated using an approximate neighborhood interference framework. For an empirical illustration, we apply our method to experimental data from an anti-conflict school intervention program. The proposed methods are readily available in a companion R package.
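Schematically, an inverse-probability-weighted contrast at a fixed exposure level might look as follows; this is a Horvitz-Thompson-style sketch in our own notation, and the paper's estimator has additional structure:

```python
import numpy as np

def ipw_itt(y, z, e, pi1, pi0):
    """Schematic ITT contrast at a fixed exposure level.
    y: outcomes; z: 0/1 assigned treatment; e: 0/1 indicator that a unit's
    instrument-based exposure statistic equals the level of interest;
    pi1, pi0: per-unit probabilities P(Z=1, E=e) and P(Z=0, E=e)."""
    return np.mean(z * e * y / pi1) - np.mean((1 - z) * e * y / pi0)
```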
This paper considers the Westervelt equation, one of the most widely used models in nonlinear acoustics, and seeks to recover two spatially dependent parameters of physical importance from time-trace boundary measurements. Specifically, these are the nonlinearity parameter $\kappa(x)$, often referred to as $B/A$ in the acoustics literature, and the wave speed $c_0(x)$. The determination of the spatial variation in these quantities can serve as a means of imaging. We consider identifiability from one or two boundary measurements, as is relevant in these applications. For a reformulation of the problem in terms of the squared slowness $\mathfrak{s}=1/c_0^2$ and the combined coefficient $\eta=\frac{B/A+2}{\varrho_0 c_0^4},$ we devise a frozen Newton method and prove its convergence. The effectiveness (and limitations) of this iterative scheme are demonstrated by numerical examples.
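Schematically, writing $u=(\mathfrak{s},\eta)$ and letting $F$ denote the forward map from the coefficients to the boundary observations $y$, a frozen Newton iteration keeps the linearization fixed at the initial guess:
\[
u_{k+1} = u_k + F'(u_0)^{\dagger}\bigl(y - F(u_k)\bigr),
\]
where $F'(u_0)^{\dagger}$ denotes a (regularized) inverse of the derivative at $u_0$; the paper's precise regularization and stopping rule may differ from this generic form.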
We propose a novel nonparametric regression framework subject to a positive definiteness constraint. It offers a highly modular approach to estimating covariance functions of stationary processes. Our method can impose positive definiteness, as well as isotropy and monotonicity, on the estimators, and its hyperparameters can be selected via cross-validation. We define our estimators by taking integral transforms of kernel-based distribution surrogates. We then use the iterated density estimation evolutionary algorithm, a variant of estimation-of-distribution algorithms, to fit the estimators. We also extend our method to estimate covariance functions for point-referenced data. Compared to alternative approaches, our method provides more reliable estimates of long-range dependence. Several numerical studies demonstrate the efficacy and performance of our method, and we illustrate it using precipitation data from the Spatial Interpolation Comparison 97 project.
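A minimal sketch of how an integral-transform construction can guarantee positive definiteness, with a finite nonnegative mixture of Gaussian kernels standing in for the integral (our simplification, not the paper's estimator):

```python
import numpy as np

def pd_isotropic_cov(h, weights, scales):
    """Covariance estimate at lags h as a nonnegative mixture of Gaussian
    kernels: C(h) = sum_i w_i * exp(-(h/s_i)^2).  Nonnegative weights
    guarantee positive definiteness (and monotone decay in |h|) in any
    dimension."""
    h = np.asarray(h, dtype=float)[..., None]
    return np.sum(np.asarray(weights) * np.exp(-(h / np.asarray(scales)) ** 2),
                  axis=-1)

# Example: a two-component mixture evaluated on a grid of lags
C = pd_isotropic_cov(np.linspace(0, 5, 6), weights=[0.7, 0.3], scales=[1.0, 4.0])
```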
Given a random sample from a multivariate normal distribution whose covariance matrix is a Toeplitz matrix, we study the largest off-diagonal entry of the sample correlation matrix. Assuming the multivariate normal distribution has the covariance structure of an auto-regressive sequence, we establish a phase transition in the limiting distribution of the largest off-diagonal entry. We show that the limiting distributions are of Gumbel type (with different parameters), depending on how large or small the parameter of the auto-regressive sequence is. In the critical case, the limiting distribution is the maximum of two independent Gumbel random variables. This phase transition establishes the exact threshold at which the auto-regressive covariance structure behaves differently from its counterpart with covariance matrix equal to the identity. Assuming the covariance matrix is a general Toeplitz matrix, we obtain the limiting distribution of the largest entry in the ultra-high-dimensional setting: it is a weighted sum of two independent random variables, one normal and the other following a Gumbel-type law. The non-Gaussian counterpart is also discussed. As an application, we study a high-dimensional covariance testing problem.
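For orientation, in the classical benchmark of i.i.d. coordinates with identity covariance and $p$ comparable to $n$, the largest off-diagonal entry $L_n=\max_{i<j}|\rho_{ij}|$ of the sample correlation matrix obeys a Gumbel-type limit of the form
\[
\lim_{n\to\infty}\mathbb{P}\bigl(nL_n^2 - 4\log n + \log\log n \le y\bigr) = \exp\Bigl(-\tfrac{1}{\sqrt{8\pi}}\,e^{-y/2}\Bigr);
\]
the phase transition described above captures how the auto-regressive and Toeplitz structures alter the parameters of such limits.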