Many statistical settings call for estimating a population parameter, most typically the population mean, from a sample of matrices. The most natural estimate of the population mean is the arithmetic mean, but there are many other matrix means that may behave differently, especially in high dimensions. Here we consider the matrix harmonic mean as an alternative to the arithmetic matrix mean. We show that in certain high-dimensional regimes, the harmonic mean improves on the arithmetic mean in estimation error as measured by the operator norm. Counter-intuitively, when we study the asymptotic behavior of these two matrix means in a spiked covariance estimation problem, we find that this improvement in operator-norm error does not imply better recovery of the leading eigenvector. We also show that a Rao-Blackwellized version of the harmonic mean is equivalent to a linear shrinkage estimator studied previously in the high-dimensional covariance estimation literature, while applying a similar Rao-Blackwellization to regularized sample covariance matrices yields a novel nonlinear shrinkage estimator. Simulations complement the theoretical results, illustrating the conditions under which the harmonic matrix mean yields an empirically better estimate.
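A minimal sketch of the two matrix means being compared, for symmetric positive-definite inputs; the toy spiked-covariance setup and all parameter values below are our own illustration, not the paper's experiments:

```python
import numpy as np

def arithmetic_mean(mats):
    return sum(mats) / len(mats)

def harmonic_mean(mats):
    # Harmonic mean: inverse of the arithmetic mean of the inverses.
    # Requires each input matrix to be invertible (here, n > p per estimate).
    inv_avg = sum(np.linalg.inv(M) for M in mats) / len(mats)
    return np.linalg.inv(inv_avg)

rng = np.random.default_rng(0)
p, n, k = 50, 100, 30        # dimension, samples per covariance, number of matrices
true_cov = np.eye(p)
true_cov[0, 0] = 5.0         # a single spiked eigenvalue
mats = [np.cov(rng.multivariate_normal(np.zeros(p), true_cov, size=n), rowvar=False)
        for _ in range(k)]
for name, est in [("arithmetic", arithmetic_mean(mats)),
                  ("harmonic", harmonic_mean(mats))]:
    print(name, np.linalg.norm(est - true_cov, ord=2))  # operator-norm error
```

By the matrix arithmetic-harmonic mean inequality, the harmonic mean is dominated by the arithmetic mean in the positive-semidefinite order, which is the sense in which it acts as a shrinkage estimator.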
Sketching is a stochastic dimension reduction method that preserves geometric structure in data and has applications in high-dimensional regression, low-rank approximation, and graph sparsification. In this work, we show that sketching can be used to compress simulation data while still accurately estimating the time autocorrelation and power spectral density. For a given compression ratio, the accuracy is much higher than that of previously known methods. In addition to providing theoretical guarantees, we apply sketching to a molecular dynamics simulation of methanol and find that the estimated spectral density is 90% accurate using only 10% of the data.
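A hedged illustration of the idea as we read it (our toy data and estimator, not the paper's exact procedure): compress each high-dimensional simulation frame with one shared Gaussian random projection, which approximately preserves the inner products that the empirical autocorrelation is built from.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, m = 3000, 2048, 300               # frame dimension, frames, sketch size (~10%)
X = rng.standard_normal((T, d))
X = np.cumsum(X, axis=0) / np.sqrt(T)   # temporally correlated stand-in trajectory

S = rng.standard_normal((m, d)) / np.sqrt(m)  # JL sketch: E[<Sx, Sy>] = <x, y>
Y = X @ S.T                                    # compressed frames, one per row

def autocorr(Z, max_lag):
    T = Z.shape[0]
    return np.array([(Z[:T - h] * Z[h:]).sum() / (T - h) for h in range(max_lag)])

c_full, c_sketch = autocorr(X, 50), autocorr(Y, 50)
print(np.abs(c_full - c_sketch).max() / np.abs(c_full).max())
# The power spectral density follows by an FFT of the estimated autocorrelation.
```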
Differential Granger causality, that is, understanding how Granger-causal relations differ between two related time series, is of interest in many scientific applications. Modeling each time series by a vector autoregressive (VAR) model, we propose a new method to directly learn the difference between the corresponding transition matrices in high dimensions. Key to the new method is an estimating equation constructed from the Yule-Walker equation that links the difference in transition matrices to the difference in the corresponding precision matrices. In contrast to separately estimating each transition matrix and then taking the difference, the proposed direct estimation method only requires sparsity of the difference of the two VAR models, and hence allows hub nodes in each high-dimensional time series. The direct estimator is shown to be consistent in estimation and support recovery under mild assumptions. These results also lead to novel consistency results, with potentially faster convergence rates, for estimating differences between precision matrices of i.i.d. observations under weaker assumptions than existing results. We evaluate the finite-sample performance of the proposed method using simulation studies and an application to electroencephalogram (EEG) data.
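For contrast, here is the naive plug-in baseline that the abstract argues against: fit each VAR(1) transition matrix separately via the Yule-Walker equation and difference the two fits. This is our illustrative baseline, not the paper's direct estimator, and it requires each transition matrix (not just their difference) to be estimable:

```python
import numpy as np

def var1_transition_yule_walker(X):
    # X: (T, p) observations of a VAR(1) process x_t = A x_{t-1} + noise.
    # Yule-Walker: Gamma(1) = A Gamma(0), hence A = Gamma(1) Gamma(0)^{-1}.
    T = X.shape[0]
    G0 = X[:-1].T @ X[:-1] / (T - 1)   # lag-0 autocovariance Gamma(0)
    G1 = X[1:].T @ X[:-1] / (T - 1)    # lag-1 autocovariance Gamma(1)
    return G1 @ np.linalg.pinv(G0)

def transition_difference(X1, X2):
    # Plug-in difference; breaks down when p is large relative to T,
    # which is where a direct, sparsity-of-the-difference estimator helps.
    return var1_transition_yule_walker(X1) - var1_transition_yule_walker(X2)
```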
Given two sets $S$ and $T$ of points in the plane, of total size $n$, a \emph{many-to-many matching} between $S$ and $T$ is a set of pairs $(p,q)$ such that $p\in S$, $q\in T$ and for each $r\in S\cup T$, $r$ appears in at least one such pair. The \emph{cost of a pair} $(p,q)$ is the (Euclidean) distance between $p$ and $q$. In the \emph{minimum-cost many-to-many matching} problem, the goal is to compute a many-to-many matching such that the sum of the costs of the pairs is minimized. This problem is a restricted version of minimum-weight edge cover in a bipartite graph, and hence can be solved in $O(n^3)$ time. In the more restricted setting where all the points lie on a line, the problem can be solved in $O(n\log n)$ time [Colannino, Damian, Hurtado, Langerman, Meijer, Ramaswami, Souvaine, Toussaint; Graphs Comb., 2007]. However, no progress has been made on improving the cubic time bound in the general planar case. In this paper, we obtain an $O(n^2\cdot \mathrm{poly}(\log n))$ time exact algorithm and an $O(n^{3/2}\cdot \mathrm{poly}(\log n))$ time $(1+\epsilon)$-approximation in the planar case. Our results affirmatively address an open problem posed in [Colannino et al., Graphs Comb., 2007].
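As a sanity check on the problem definition, the edge-cover view already gives a simple polynomial-time exact solver for small instances (ours, not the paper's algorithm): the covering LP over the complete bipartite graph is integral because bipartite incidence matrices are totally unimodular, so for generic (tie-free) costs an off-the-shelf LP solver returns the optimal matching directly.

```python
import numpy as np
from scipy.optimize import linprog

def many_to_many(S, T):
    S, T = np.asarray(S, float), np.asarray(T, float)
    ns, nt = len(S), len(T)
    cost = np.linalg.norm(S[:, None, :] - T[None, :, :], axis=2).ravel()
    # One covering constraint per point: sum of incident pair variables >= 1.
    A = np.zeros((ns + nt, ns * nt))
    for i in range(ns):
        A[i, i * nt:(i + 1) * nt] = 1.0   # pairs incident to S[i]
    for j in range(nt):
        A[ns + j, j::nt] = 1.0            # pairs incident to T[j]
    res = linprog(cost, A_ub=-A, b_ub=-np.ones(ns + nt), bounds=(0, 1))
    x = res.x.reshape(ns, nt)
    return [(i, j) for i in range(ns) for j in range(nt) if x[i, j] > 0.5]

print(many_to_many([(0, 0), (1, 0)], [(0, 1), (2, 0), (1, 1)]))
# -> [(0, 0), (1, 1), (1, 2)], total cost 3
```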
We consider sparse principal component analysis for high-dimensional stationary processes. Standard principal component analysis performs poorly when the dimension of the process is large. We establish oracle inequalities for penalized principal component estimators for processes including heavy-tailed time series, and derive the rate of convergence of the estimators. We also elucidate the theoretical rate at which the tuning parameter in the penalized estimators should be chosen. The performance of sparse principal component analysis is demonstrated in numerical simulations, and its utility for time series data is exemplified by an application to average temperature data.
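A generic sketch of one common penalized estimator of this type, a soft-thresholded power iteration on the sample covariance of the observed series; the specific penalty and update rule here are illustrative assumptions, not necessarily the estimator analyzed in the paper:

```python
import numpy as np

def soft(v, lam):
    # Soft-thresholding, the proximal map of the l1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pc(X, lam, iters=200):
    # X: (T, p) observations of a stationary process; returns a sparse
    # unit-norm estimate of the leading principal component.
    S = np.cov(X, rowvar=False)
    v = np.linalg.eigh(S)[1][:, -1]   # dense leading eigenvector as warm start
    for _ in range(iters):
        v = soft(S @ v, lam)
        nrm = np.linalg.norm(v)
        if nrm == 0:                  # lam too large: everything thresholded away
            break
        v /= nrm
    return v
```

The tuning parameter `lam` controls the sparsity level; the paper's theory concerns the rate at which such a parameter should scale with the dimension and sample size.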
We establish higher-order expansions for the difference between probability distributions of sums of i.i.d. random vectors in a Euclidean space. The derived bounds are uniform over two classes of sets: the set of all Euclidean balls and the set of all half-spaces. These results make it possible to account for the impact of higher-order moments or cumulants of the distributions under consideration; the resulting error terms depend explicitly on the sample size and the dimension. Under very general conditions, the new inequalities improve on the accuracy of the normal approximation provided by existing Berry--Esseen inequalities. For symmetrically distributed random summands, the results obtained are optimal in terms of the ratio between the dimension and the sample size. The new technique we develop for establishing nonasymptotic higher-order expansions may be of independent interest. Using the new higher-order inequalities, we study the accuracy of the nonparametric bootstrap approximation and propose a bootstrap score test under possible model misspecification. The results of the paper also include explicit error bounds for general elliptical confidence regions for the expected value of the random summands, and optimality of the Gaussian anti-concentration inequality over the set of all Euclidean balls.
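For orientation, the classical one-dimensional instance of such an expansion is the first-order Edgeworth correction: for standardized i.i.d. summands $X_i$ with third cumulant $\kappa_3$, under a smoothness condition on the distribution (e.g., Cram\'er's condition),
\[
P\left(\frac{1}{\sqrt{n}}\sum_{i=1}^{n} X_i \le x\right)
  = \Phi(x) - \varphi(x)\,\frac{\kappa_3}{6\sqrt{n}}\,\bigl(x^2-1\bigr) + o\bigl(n^{-1/2}\bigr),
\]
where $\Phi$ and $\varphi$ denote the standard normal distribution function and density. For symmetric summands $\kappa_3=0$, so the $n^{-1/2}$ correction vanishes; this is consistent with the sharper rates reported above in the symmetric case, and the paper's bounds are the multivariate, nonasymptotic analogues of such expansions.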
In this paper, we study $k$-Way Min-cost Perfect Matching with Delays, the $k$-MPMD problem. This problem considers a metric space with $n$ nodes, at which requests arrive in an online fashion. The task is to match these requests into sets of exactly $k$ such that the total space and time cost of all matched requests is minimized. The notion of space cost requires an underlying generalized metric that assigns a distance to every subset of $k$ elements. For $k>2$, finding a suitable such metric is at the core of our problem: we show that for some known generalizations to $k=3$ points, such as the $2$-metric and the $D$-metric, there exists no competitive randomized algorithm for the $3$-MPMD problem. The $G$-metrics, defined for 3 points, do allow for a competitive algorithm for the $3$-MPMD problem. For $k>3$ points, there exist two generalizations of the $G$-metrics, known as $n$- and $K$-metrics. We show that neither the $n$-metrics nor the $K$-metrics can be used for the $k$-MPMD problem. On the positive side, we introduce the $H$-metrics, the first metrics to allow for a solution of the $k$-MPMD problem for all $k$. In order to devise an online algorithm for the $k$-MPMD problem on the $H$-metrics, we embed the $H$-metrics into trees with $O(\log n)$ distortion. Based on this embedding result, we extend the algorithm proposed by Azar et al. (2017) and achieve a competitive ratio of $O(\log n)$ for the $k$-MPMD problem.
Choosing a suitable algorithm from the myriad of available search heuristics is difficult when faced with a novel optimization problem. In this work, we argue that the purely academic question of what could be the best possible algorithm in a certain broad class of black-box optimizers can give fruitful indications of where to search among established optimization heuristics. We demonstrate this approach on the recently proposed DLB benchmark, for which the only known results are $O(n^3)$ runtimes for several classic evolutionary algorithms and an $O(n^2 \log n)$ runtime for an estimation-of-distribution algorithm. Our finding that the unary unbiased black-box complexity is only $O(n^2)$ suggests the Metropolis algorithm as an interesting candidate, and we prove that it solves the DLB problem in quadratic time. Since we also prove that better runtimes cannot be obtained in the class of unary unbiased algorithms, we shift our attention to algorithms that use the information of more parents to generate new solutions. An artificial algorithm of this type having an $O(n \log n)$ runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem in time $O(n \log n)$ as well. Our experiments show remarkably good performance of the Metropolis algorithm, clearly the best of all algorithms considered for reasonable problem sizes.
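A runnable sketch of the Metropolis algorithm on DLB. The DLB definition below (bit pairs, with a deceptive reward for a 00 pair in the first non-11 block) follows our reading of the benchmark and should be checked against the original; the acceptance parameter `alpha` is an arbitrary choice:

```python
import random

def dlb(x):
    # Deceiving Leading Blocks (as we recall it): count leading 11 pairs;
    # the first non-11 pair is rewarded when it is 00, which is a trap.
    n, m = len(x), 0
    while 2 * m < n and x[2 * m] == x[2 * m + 1] == 1:
        m += 1
    if 2 * m == n:
        return n                  # all blocks are 11: the optimum
    return 2 * m + (1 if x[2 * m] == x[2 * m + 1] == 0 else 0)

def metropolis(n, alpha=3.0, max_steps=10**7):
    x = [random.randint(0, 1) for _ in range(n)]
    fx = dlb(x)
    for step in range(max_steps):
        if fx == n:
            return step
        i = random.randrange(n)
        x[i] ^= 1                 # flip one uniformly random bit
        fy = dlb(x)
        if fy >= fx or random.random() < alpha ** (fy - fx):
            fx = fy               # accept; worse moves with prob alpha^{-delta}
        else:
            x[i] ^= 1             # reject: undo the flip
    return max_steps

print(metropolis(40))
```

Accepting a fitness loss of 1 with constant probability is exactly what lets Metropolis leave a deceptive 00 block via 01/10, whereas an elitist hill climber must flip both bits of the block at once.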
We consider the estimation of densities in multiple subpopulations, where the available sample size in each subpopulation varies greatly. This problem occurs in epidemiology, for example, where different diseases may share a similar pathogenic mechanism but differ in their prevalence. Without specifying a parametric form, our proposed method pools information from the population and estimates the density in each subpopulation in a data-driven fashion. Drawing on functional data analysis, low-dimensional approximating density families in the form of exponential families are constructed from the principal modes of variation in the log-densities. Subpopulation densities are subsequently fitted in the approximating families based on likelihood principles and shrinkage. The approximating families become more flexible as the number of components increases and can approximate arbitrary infinite-dimensional densities. We also derive convergence results for the density estimates with discrete observations. The proposed methods are shown to be interpretable and efficient in simulations as well as in applications to electronic medical record and rainfall data.
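A much-simplified sketch of the pipeline as we understand it (KDE plus SVD stand in for the paper's functional-data machinery, and the shrinkage step is omitted): build an exponential family from the leading modes of variation of the log-densities, then fit each subpopulation within it by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gaussian_kde

def fit_subpopulation_densities(samples, K=2, grid=np.linspace(-5, 5, 512)):
    # Step 1: pilot log-densities on a common grid (eps guards against log 0);
    # needs at least K subpopulations to extract K modes.
    logf = np.log(np.array([gaussian_kde(s)(grid) for s in samples]) + 1e-12)
    base = logf.mean(axis=0)
    # Step 2: principal modes of variation of the centered log-densities.
    _, _, Vt = np.linalg.svd(logf - base, full_matrices=False)
    phi = Vt[:K]                       # basis functions, evaluated on the grid
    dx = grid[1] - grid[0]

    def neg_loglik(theta, s):
        # f_theta(x) = exp(base(x) + theta . phi(x) - log Z(theta))
        feat = np.stack([np.interp(s, grid, p) for p in phi])
        logZ = np.log(np.sum(np.exp(base + theta @ phi)) * dx)
        return -(np.interp(s, grid, base).sum()
                 + theta @ feat.sum(axis=1) - len(s) * logZ)

    # Step 3: MLE within the K-dimensional exponential family, per subpopulation.
    thetas = [minimize(neg_loglik, np.zeros(K), args=(s,)).x for s in samples]
    dens = [np.exp(base + th @ phi) for th in thetas]
    return [d / (d.sum() * dx) for d in dens], grid
```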
The problem of Approximate Nearest Neighbor (ANN) search is fundamental in computer science and has benefited from significant progress in the past couple of decades. However, most work has been devoted to point sets, whereas complex shapes have not been sufficiently treated. Here, we focus on distance functions between discretized curves in Euclidean space: they appear in a wide range of applications, from road segments to time series in general dimension. For $\ell_p$-products of Euclidean metrics, for any $p$, we design simple and efficient data structures for ANN based on randomized projections, which are of independent interest. They serve to solve proximity problems under a notion of distance between discretized curves which generalizes both the discrete Fr\'echet and Dynamic Time Warping distances, the two most popular and practical approaches to comparing such curves. We offer the first data structures and query algorithms for ANN with arbitrarily good approximation factor, at the expense of increased space usage and preprocessing time over existing methods. Query time complexity is comparable to or significantly better than that of existing methods; our algorithms are especially efficient when the length of the curves is bounded.
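For reference, the discrete Fr\'echet distance that this generalized notion specializes to can be computed by the standard $O(mn)$ dynamic program below (the textbook algorithm, shown only to fix the notion of distance; it is not the paper's ANN structure):

```python
import numpy as np

def discrete_frechet(P, Q):
    # P: (m, d) and Q: (n, d) vertex sequences of two discretized curves.
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    m, n = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    D = np.full((m, n), np.inf)
    D[0, 0] = d[0, 0]
    for i in range(m):
        for j in range(n):
            if i == j == 0:
                continue
            best = min(D[i - 1, j] if i else np.inf,
                       D[i, j - 1] if j else np.inf,
                       D[i - 1, j - 1] if i and j else np.inf)
            D[i, j] = max(best, d[i, j])   # bottleneck cost along the coupling
    return D[-1, -1]
```

Roughly speaking, replacing the bottleneck (max) aggregation along the coupling by an $\ell_p$ aggregation yields the $\ell_p$-product family referred to above, with DTW-type distances at $p=1$ and discrete Fr\'echet at $p=\infty$.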
This paper describes a suite of algorithms for constructing low-rank approximations of an input matrix from a random linear image of the matrix, called a sketch. These methods can preserve structural properties of the input matrix, such as positive-semidefiniteness, and they can produce approximations with a user-specified rank. The algorithms are simple, accurate, numerically stable, and provably correct. Moreover, each method is accompanied by an informative error bound that allows users to select parameters a priori to achieve a given approximation quality. These claims are supported by numerical experiments with real and synthetic data.
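A minimal sketch of the core two-sided reconstruction in this line of work, as we understand it (the sketch-size choices below are our own heuristic): approximate the range of $A$ from $Y=A\Omega$, then solve a small least-squares problem against the co-range sketch $W=\Psi A$.

```python
import numpy as np

def sketch_low_rank(A, k, oversample=10, rng=np.random.default_rng(2)):
    m, n = A.shape
    r, s = k + oversample, 2 * k + oversample     # sketch sizes (our heuristic)
    Omega = rng.standard_normal((n, r))
    Psi = rng.standard_normal((s, m))
    Y, W = A @ Omega, Psi @ A                     # the only access to A
    Q, _ = np.linalg.qr(Y)                        # orthonormal range approximation
    X = np.linalg.lstsq(Psi @ Q, W, rcond=None)[0]
    return Q, X                                    # A ~ Q @ X, rank <= r

rng = np.random.default_rng(3)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # rank-40
Q, X = sketch_low_rank(A, k=40)
print(np.linalg.norm(A - Q @ X) / np.linalg.norm(A))  # near machine precision here
```

An approximation with a user-specified rank $k$ can then be obtained by truncating the small factor $X$ to its top $k$ singular values; enforcing positive-semidefiniteness requires the symmetric variants of the method.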