Testing the significance of a variable or group of variables $X$ for predicting a response $Y$, given additional covariates $Z$, is a ubiquitous task in statistics. A simple but common approach is to specify a linear model and then test whether the regression coefficient for $X$ is non-zero. However, when the model is misspecified, the test may have poor power, for example when $X$ is involved in complex interactions, or may lead to many false rejections. In this work we study the problem of testing the model-free null of conditional mean independence, i.e. that the conditional mean of $Y$ given $X$ and $Z$ does not depend on $X$. We propose a simple and general framework that can leverage flexible nonparametric or machine learning methods, such as additive models or random forests, to yield both robust error control and high power. The procedure involves using these methods to perform regressions, first to estimate a form of projection of $Y$ on $X$ and $Z$ using one half of the data, and then to estimate the expected conditional covariance between this projection and $Y$ on the remaining half of the data. While the approach is general, we show that a version of our procedure using spline regression achieves the minimax optimal rate in this nonparametric testing problem. Numerical experiments demonstrate the effectiveness of our approach, both in maintaining Type I error control and in power, compared to several existing approaches.
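As a concrete illustration of the sample-splitting procedure sketched above, the following Python fragment fits the two regressions with random forests on one half of the data and studentizes the estimated expected conditional covariance on the other half. The function name cmi_test and all implementation details are illustrative assumptions, not the paper's exact procedure.

    import numpy as np
    from scipy.stats import norm
    from sklearn.ensemble import RandomForestRegressor

    def cmi_test(X, Z, Y, seed=0):
        """Hedged sketch of a sample-splitting test of conditional mean independence.
        X, Z are 2-d covariate arrays; Y is a 1-d response."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(Y))
        a, b = idx[: len(Y) // 2], idx[len(Y) // 2 :]

        # First half: regress Y on (X, Z), then regress the fitted values on Z alone
        # to obtain a form of projection of Y on X and Z.
        f_hat = RandomForestRegressor(random_state=0).fit(np.c_[X[a], Z[a]], Y[a])
        m_hat = RandomForestRegressor(random_state=0).fit(Z[a], f_hat.predict(np.c_[X[a], Z[a]]))

        # Second half: products of residuals estimate the expected conditional covariance.
        proj = f_hat.predict(np.c_[X[b], Z[b]]) - m_hat.predict(Z[b])
        prods = proj * (Y[b] - m_hat.predict(Z[b]))
        stat = np.sqrt(len(b)) * prods.mean() / (prods.std(ddof=1) + 1e-12)
        return stat, norm.sf(stat)  # one-sided p-value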
Testing the equality of the covariance matrices of two high-dimensional samples is a fundamental inference problem in statistics. Several tests have been proposed, but they are either too liberal or too conservative when their required assumptions are not satisfied, which shows that they are not always applicable in real data analysis. To overcome this difficulty, a normal-reference test is proposed and studied in this paper. It is shown that, under some regularity conditions and the null hypothesis, the proposed test statistic and a chi-square-type mixture have the same limiting distribution. It is therefore justified to approximate the null distribution of the proposed test statistic using that of the chi-square-type mixture. The distribution of the chi-square-type mixture can be well approximated using a three-cumulant matched chi-square approximation, with its approximation parameters consistently estimated from the data. The asymptotic power of the proposed test under a local alternative is also established. Simulation studies and a real data example demonstrate that, in terms of size control, the proposed test substantially outperforms existing competitors.
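For intuition, one standard three-cumulant matching recipe approximates a chi-square-type mixture $\sum_i \lambda_i \chi^2_1$ by a shifted and scaled chi-square $\beta_0 + \beta_1 \chi^2_d$, as in the Python sketch below; this is an illustrative assumption and is not claimed to coincide with the paper's estimator or its data-driven parameter estimates.

    import numpy as np
    from scipy.stats import chi2

    def three_cumulant_chi2_approx(lam):
        """Match the first three cumulants of sum_i lam_i * chi2_1
        to those of beta0 + beta1 * chi2_d."""
        lam = np.asarray(lam, dtype=float)
        k1, k2, k3 = lam.sum(), 2 * (lam**2).sum(), 8 * (lam**3).sum()
        beta1 = k3 / (4 * k2)
        d = 8 * k2**3 / k3**2
        beta0 = k1 - beta1 * d
        return beta0, beta1, d

    def mixture_pvalue(stat, lam):
        """Approximate upper-tail probability of the mixture at the observed statistic."""
        beta0, beta1, d = three_cumulant_chi2_approx(lam)
        return chi2.sf((stat - beta0) / beta1, df=d)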
This work resolves the following question in non-Euclidean statistics: Is it possible to consistently estimate the Fr\'echet mean set of an unknown population distribution, with respect to the Hausdorff metric, when given access to independent identically-distributed samples? Our affirmative answer is based on a careful analysis of the ``relaxed empirical Fr\'echet mean set estimators'', which identify the set of near-minimizers of the empirical Fr\'echet functional, where the amount of ``relaxation'' vanishes as the number of data points tends to infinity. Our main theoretical results include exact descriptions of which relaxation rates give weak consistency and which give strong consistency, as well as the construction of a ``two-step estimator'' which (assuming only the finiteness of certain moments and a mild condition on the metric entropy of the underlying metric space) adaptively finds the fastest possible relaxation rate for strongly consistent estimation. Our main practical result is simply that researchers working with non-Euclidean data in the real world may be better off computing relaxed empirical Fr\'echet mean sets than unrelaxed ones.
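To make the estimator concrete, the Python sketch below computes a relaxed empirical Fr\'echet mean set over a finite set of candidate points, given a matrix of candidate-to-sample distances; the restriction to a finite candidate set and the function name are assumptions made purely for illustration.

    import numpy as np

    def relaxed_frechet_mean_set(dist, eps):
        """dist[i, j] = distance from candidate point i to sample point j;
        eps is the relaxation level, which should vanish as the sample grows.
        Returns indices of candidates whose empirical Frechet functional is
        within eps of its minimum."""
        F = (dist ** 2).mean(axis=1)  # empirical Frechet functional at each candidate
        return np.flatnonzero(F <= F.min() + eps)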
Many applications of representation learning, such as privacy preservation, algorithmic fairness, and domain adaptation, require explicit control over the semantic information that is discarded. This goal is formulated as satisfying two objectives: maximizing utility for predicting a target attribute while simultaneously being invariant (independent) to a known semantic attribute. Solutions to invariant representation learning (IRepL) problems lead to a trade-off between utility and invariance when the two objectives compete. While existing works study bounds on this trade-off, two questions remain outstanding: 1) What is the exact trade-off between utility and invariance? and 2) What are the encoders (mapping the data to a representation) that achieve this trade-off, and how can we estimate them from training data? This paper addresses these questions for IRepL in reproducing kernel Hilbert spaces (RKHSs). Under the assumption that the distribution of a low-dimensional projection of high-dimensional data is approximately normal, we derive a closed-form solution for the global optima of the underlying optimization problem for encoders in RKHSs. This yields closed formulae for a near-optimal trade-off, the corresponding optimal representation dimensionality, and the corresponding encoder(s). We also numerically quantify the trade-off on representative problems and compare it to the trade-offs achieved by baseline IRepL algorithms.
For random variables produced through the inverse transform method, approximate random variables are introduced, which are produced by approximations to a distribution's inverse cumulative distribution function. These approximations are designed to be computationally inexpensive, much cheaper than exact library functions, and thus highly suitable for use in Monte Carlo simulations. Two approximations are presented for the Gaussian distribution: a piecewise constant approximation on equally spaced intervals, and a piecewise linear approximation on geometrically decaying intervals. The errors of the approximations are bounded and their convergence demonstrated, and the computational savings are measured for C and C++ implementations. Implementations tailored for Intel and Arm hardware are inspected, alongside hardware-agnostic implementations built using OpenMP. The savings are incorporated into a nested multilevel Monte Carlo framework with the Euler-Maruyama scheme to exploit the speed ups without losing accuracy, offering speed ups by a factor of 5--7. These ideas are empirically extended to the Milstein scheme and to the non-central chi-squared distribution of the Cox-Ingersoll-Ross process, where they offer speed ups by a factor of 250 or more.
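As an illustration of the piecewise-constant variant, the Python sketch below tabulates the Gaussian inverse CDF at the midpoints of equally spaced subintervals of $(0,1)$ and replaces the exact inverse transform with a table lookup; the midpoint rule and the function names are simplifying assumptions, and the constructions and error analysis in the paper are more refined.

    import numpy as np
    from scipy.stats import norm

    def build_piecewise_constant_inverse(num_intervals=1024):
        """Tabulate the Gaussian inverse CDF at midpoints of equal subintervals of (0, 1)."""
        mids = (np.arange(num_intervals) + 0.5) / num_intervals
        return norm.ppf(mids)

    def approx_gaussians(u, table):
        """Map uniforms u in [0, 1) to approximate Gaussians by table lookup."""
        idx = np.minimum((u * len(table)).astype(int), len(table) - 1)
        return table[idx]

    # usage: cheap approximate Gaussian samples from uniform random numbers
    table = build_piecewise_constant_inverse()
    u = np.random.default_rng(0).random(10**6)
    z = approx_gaussians(u, table)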
Some families of count distributions do not have a closed-form probability mass function and/or finite moments, and therefore parameter estimation cannot be performed with classical methods. When the probability generating function of the distribution is available, a new approach based on censoring and a moment criterion is introduced, in which the original distribution is replaced by a version censored using a geometric distribution. Consistency and asymptotic normality of the resulting estimators are proven under suitable conditions. The crucial issue of selecting the censoring parameter is addressed by means of a data-driven procedure. Finally, this novel approach is applied to the discrete stable family, and the finite-sample performance of the estimators is assessed by means of a Monte Carlo simulation study.
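To see why geometric censoring pairs naturally with the probability generating function, note (as a simple illustrative calculation, not necessarily the paper's exact criterion) that if $N$ is geometric with $P(N>k)=q^{k+1}$ for $k\ge 0$ and independent of the count $X$ with probability generating function $G_X$, then
\[
E[\min(X,N)] \;=\; \sum_{k\ge 0} P(X>k)\,P(N>k) \;=\; q\,\frac{1-G_X(q)}{1-q},
\]
so this censored moment is finite and available in closed form whenever $G_X$ is, even if $X$ itself has no finite moments.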
Given a reference set $R$ of $n$ points and a query set $Q$ of $m$ points in a metric space, this paper studies the important problem of finding the $k$-nearest neighbors of every point $q \in Q$ in the set $R$ in near-linear time. In their paper at ICML 2006, Beygelzimer, Kakade, and Langford introduced the cover tree and attempted to prove that its construction requires $O(n\log n)$ time while the nearest neighbor search needs $O(n\log m)$ time with a hidden dimensionality factor. In 2015, section~5.3 of Curtin's PhD thesis pointed out that the proof of the latter claim might contain a serious gap in the time estimation. In a paper at TopoInVis 2022, the authors built explicit counterexamples to a key step in the proofs of both claims. These past obstacles are overcome here by a simpler compressed cover tree on the reference set $R$. The first new algorithm constructs a compressed cover tree in $O(n \log n)$ time. The second new algorithm finds all $k$-nearest neighbors of all points from $Q$ using a compressed cover tree in time $O(m(k+\log n)\log k)$, with a hidden dimensionality factor depending on the point distributions of the sets $R,Q$ but not on their sizes.
In this work, we propose a robust framework that employs adversarially robust training to safeguard machine learning models against perturbed testing data. We achieve this by incorporating the worst-case additive adversarial error, within a fixed budget for each sample, during model estimation. Our main focus is to provide a plug-and-play solution that can be incorporated into existing machine learning algorithms with minimal changes. To that end, we derive ready-to-use solutions for several widely used loss functions with a variety of norm constraints on the adversarial perturbation, for various supervised and unsupervised ML problems, including regression, classification, two-layer neural networks, graphical models, and matrix completion. The solutions take the form of a closed-form expression, a one-dimensional optimization, a semidefinite program, a difference-of-convex program, or a sorting-based algorithm. Finally, we validate our approach by showing significant performance improvements on real-world datasets for supervised problems such as regression and classification, as well as for unsupervised problems such as matrix completion and learning graphical models, with very little computational overhead.
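As one example of the kind of closed-form solution involved, for linear regression with an $\ell_2$-bounded additive perturbation of each input the worst-case squared loss equals $(|y - w^\top x| + \epsilon\|w\|_2)^2$; the Python sketch below minimizes this robust objective with a generic optimizer. The function name and the use of scipy.optimize are illustrative assumptions, and the paper treats many more losses, norm constraints, and problem classes.

    import numpy as np
    from scipy.optimize import minimize

    def robust_linear_loss(w, X, y, eps):
        """Worst-case squared loss when each input x may be shifted by any delta
        with ||delta||_2 <= eps:
            max_{||delta||_2 <= eps} (y - w.(x + delta))^2 = (|y - w.x| + eps*||w||_2)^2."""
        resid = np.abs(y - X @ w)
        return np.mean((resid + eps * np.linalg.norm(w)) ** 2)

    # usage sketch: fit robust coefficients with a generic optimizer
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 5)), rng.normal(size=200)
    w_hat = minimize(robust_linear_loss, x0=np.zeros(5), args=(X, y, 0.1)).x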
Coordination is a desirable feature in multi-agent systems, allowing the execution of tasks that would be impossible for individual agents. We study coordination by a team of strategic agents, each choosing to undertake one of multiple tasks. We adopt a stochastic framework in which the agents decide between two distinct tasks whose difficulties are randomly distributed and only partially observed. We show that a Nash equilibrium with a simple and intuitive linear structure exists for diffuse prior distributions on the task difficulties. Additionally, we show that the best response of any agent to an affine strategy profile can be nonlinear when the prior distribution is not diffuse. Finally, we state an algorithm that allows us to efficiently compute a data-driven Nash equilibrium within the class of affine policies.
Despite impressive success in many tasks, deep learning models have been shown to rely on spurious features, which causes them to fail catastrophically when generalizing to out-of-distribution (OOD) data. Invariant Risk Minimization (IRM) has been proposed to alleviate this issue by extracting domain-invariant features for OOD generalization. Nevertheless, recent work shows that IRM is only effective for a certain type of distribution shift (e.g., correlation shift) and fails in other cases (e.g., diversity shift). Meanwhile, another line of work, Adversarial Training (AT), has shown better domain transfer performance, suggesting that it has the potential to be an effective candidate for extracting domain-invariant features. This paper investigates this possibility by exploring the similarity between the IRM and AT objectives. Inspired by this connection, we propose Domainwise Adversarial Training (DAT), an AT-inspired method for alleviating distribution shift through domain-specific perturbations. Extensive experiments show that our proposed DAT can effectively remove domain-varying features and improve OOD generalization under both correlation shift and diversity shift.
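One plausible reading of ``domain-specific perturbations'' is that a single adversarial perturbation is crafted per domain and shared by all samples from that domain, as in the hedged PyTorch-style sketch below; the function dat_step and every implementation detail here are assumptions for illustration and need not match the authors' DAT algorithm.

    import torch

    def dat_step(model, loss_fn, batches_by_domain, eps=0.1, lr_delta=0.01, steps=3):
        """One training objective with domain-wise adversarial perturbations:
        a single perturbation per domain, shared across that domain's samples."""
        total = 0.0
        for x, y in batches_by_domain:  # one (x, y) batch per domain
            delta = torch.zeros(1, *x.shape[1:], requires_grad=True)
            for _ in range(steps):  # inner maximization, FGSM-style sign steps
                loss = loss_fn(model(x + delta), y)
                grad, = torch.autograd.grad(loss, delta)
                with torch.no_grad():
                    delta += lr_delta * grad.sign()
                    delta.clamp_(-eps, eps)
            total = total + loss_fn(model(x + delta.detach()), y)
        return total / len(batches_by_domain)  # minimize this w.r.t. the model parameters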
The most useful data mining primitives are distance measures. With an effective distance measure, it is possible to perform classification, clustering, anomaly detection, segmentation, etc. For single-event time series, Euclidean distance and Dynamic Time Warping distance are known to be extremely effective. However, for time series containing cyclical behaviors, the semantic meaningfulness of such comparisons is less clear. For example, on two separate days the telemetry from an athlete's workout routine might be very similar. On the second day, the athlete may change the order in which push-ups and squats are performed, add repetitions of pull-ups, or completely omit dumbbell curls. Any of these minor changes would defeat existing time series distance measures. Some bag-of-features methods have been proposed to address this problem, but we argue that in many cases similarity is intimately tied to the shapes of subsequences within these longer time series. In such cases, summative features will lack discrimination ability. In this work we introduce PRCIS, which stands for Pattern Representation Comparison in Series. PRCIS is a distance measure for long time series that exploits recent progress in our ability to summarize time series with dictionaries. We will demonstrate the utility of our ideas on diverse tasks and datasets.