We study uniform consistency in nonparametric mixture models as well as closely related mixture of regression (also known as mixed regression) models, where the regression functions are allowed to be nonparametric and the error distributions are assumed to be convolutions of a Gaussian density. We construct uniformly consistent estimators under general conditions while simultaneously highlighting several pain points in extending existing pointwise consistency results to uniform results. The resulting analysis turns out to be nontrivial, and several novel technical tools are developed along the way. In the case of mixed regression, we prove $L^1$ convergence of the regression functions while allowing for the component regression functions to intersect arbitrarily often, which presents additional technical challenges. We also consider generalizations to general (i.e. non-convolutional) nonparametric mixtures.
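For concreteness, one standard formulation of the mixed regression setting described above (our reading; the notation is illustrative, not the authors') is
\[
Y = \mu_Z(X) + \varepsilon, \qquad Z \in \{1,\dots,K\}, \qquad \varepsilon \sim \nu * \mathcal{N}(0,\sigma^2),
\]
where the component regression functions $\mu_1,\dots,\mu_K$ are nonparametric, $Z$ is a latent label, and the error law is the convolution of an unknown distribution $\nu$ with a Gaussian density.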
The analysis of repeated measurements for a sample of subjects has been intensively studied, with several important branches developed, including longitudinal, panel, and functional data analysis; nonparametric regression of the mean function serves as a cornerstone on which many statistical models are built. In this work, we investigate this problem using fully connected deep neural network (DNN) estimators with flexible shapes. A comprehensive theoretical framework is established by adopting empirical process techniques to tackle clustered dependence. We then derive the nearly optimal convergence rate of the DNN estimators in the H\"older smoothness space and illustrate the phase transition phenomenon inherent to repeated measurements, along with its connection to the curse of dimensionality. Furthermore, we study function spaces with low intrinsic dimension, including the hierarchical composition model, anisotropic H\"older smoothness, and low-dimensional support sets, and obtain new approximation results and matching lower bounds that demonstrate the adaptivity of the DNN estimators in circumventing the curse of dimensionality.
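As a schematic illustration of the estimation problem (not the authors' implementation), the following sketch simulates clustered repeated measurements and fits a fully connected network as a mean-function estimator; scikit-learn's MLPRegressor stands in for the paper's DNN class, and all names and constants are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulate clustered repeated measurements: n subjects, m observations each.
rng = np.random.default_rng(0)
n, m, d = 200, 5, 3
subject_effect = rng.normal(scale=0.5, size=n).repeat(m)  # within-subject dependence
X = rng.uniform(size=(n * m, d))
y = np.sin(2 * np.pi * X[:, 0]) * X[:, 1] + subject_effect \
    + 0.1 * rng.standard_normal(n * m)

# Fully connected network as a nonparametric estimator of the mean function.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)
```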
This article proposes copula-based dependence quantification between multiple groups of random variables of possibly different sizes via the family of $\Phi$-divergences. An axiomatic framework for this purpose is provided, after which we focus on the absolutely continuous setting, assuming copula densities exist. We consider parametric and semi-parametric frameworks, discuss estimation procedures, and report on asymptotic properties of the proposed estimators. In particular, we first concentrate on a Gaussian copula approach yielding explicit and attractive dependence coefficients for specific choices of $\Phi$, which are more amenable to estimation. Next, general parametric copula families are considered, with special attention to nested Archimedean copulas, a natural choice for dependence modelling of random vectors. The results are illustrated by means of examples. Simulations and a real-world application on financial data are provided as well.
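In the Gaussian copula case with the Kullback-Leibler member of the $\Phi$-family, the dependence coefficient between groups reduces to a log-determinant ratio of normal-scores correlation matrices; the following is a minimal sketch under that assumption (function names are ours, not the article's).

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_dependence(X, groups):
    """KL-type dependence between groups of columns under a Gaussian copula.

    X      : (n, d) data matrix
    groups : list of index arrays partitioning the d columns
    """
    n, _ = X.shape
    # Normal scores: map ranks to standard normal quantiles.
    Z = norm.ppf(np.apply_along_axis(rankdata, 0, X) / (n + 1))
    R = np.corrcoef(Z, rowvar=False)
    # For Phi(t) = t*log(t), the coefficient is a log-determinant ratio.
    logdet = np.linalg.slogdet(R)[1]
    for g in groups:
        logdet -= np.linalg.slogdet(R[np.ix_(g, g)])[1]
    return -0.5 * logdet

# Example: dependence between a group of 2 and a group of 3 variables.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 5)).T
print(gaussian_copula_dependence(X, [np.arange(2), np.arange(2, 5)]))
```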
The additive model is a popular nonparametric regression method, owing to its ability to retain modeling flexibility while avoiding the curse of dimensionality. The backfitting algorithm is an intuitive and widely used numerical approach for fitting additive models. However, applying it to large datasets can incur a prohibitively high computational cost, making it infeasible in practice. To address this problem, we propose a novel approach called independence-encouraging subsampling (IES) to select a subsample from big data for training additive models. Motivated by the minimax optimality of an orthogonal array (OA), which has pairwise independent predictors and uniform coverage of the range of each predictor, the IES approach selects a subsample that approximates an OA so as to achieve minimax optimality. Our asymptotic analyses demonstrate that an IES subsample converges to an OA and that the backfitting algorithm over the subsample converges to a unique solution even if the predictors are highly dependent in the original big data. The proposed IES method is also shown to be numerically appealing via simulations and a real data application.
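For reference, a minimal backfitting loop with a Nadaraya-Watson smoother, schematic rather than the paper's implementation; the smoother, bandwidth, and toy data are our own choices.

```python
import numpy as np

def kernel_smooth(x, r, h=0.2):
    """Nadaraya-Watson smoother of residuals r against predictor x."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (W @ r) / W.sum(axis=1)

def backfit(X, y, n_iter=20, h=0.2):
    """Fit y = alpha + sum_j f_j(x_j) by cycling over the components."""
    n, p = X.shape
    alpha = y.mean()
    F = np.zeros((n, p))                  # current component fits f_j(x_ij)
    for _ in range(n_iter):
        for j in range(p):
            r = y - alpha - F.sum(axis=1) + F[:, j]   # partial residuals
            F[:, j] = kernel_smooth(X[:, j], r, h)
            F[:, j] -= F[:, j].mean()     # center each f_j for identifiability
    return alpha, F

# Toy run on uniform predictors, mimicking an OA-like uniform design.
rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 3))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
alpha, F = backfit(X, y)
```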
The spectral density matrix is a fundamental object of interest in time series analysis, encoding both contemporaneous and dynamic linear relationships between the component processes of a multivariate system. In this paper we develop novel inference procedures for the spectral density matrix in the high-dimensional setting. Specifically, we introduce a new global testing procedure to test the nullity of the cross-spectral density for a given set of frequencies and across pairs of component indices. For the first time, both Gaussian approximation and parametric bootstrap methodologies are employed to conduct inference for a high-dimensional parameter formulated in the frequency domain, and new technical tools are developed to provide asymptotic guarantees on the size accuracy and power of the global test. We further propose a multiple testing procedure for simultaneously testing the nullity of the cross-spectral density at a given set of frequencies; the method is shown to control the false discovery rate. Both numerical simulations and a real data illustration demonstrate the usefulness of the proposed testing methods.
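To fix ideas, a smoothed-periodogram estimate of the spectral density matrix together with a max-type statistic over off-diagonal (cross-spectral) entries; this is a schematic of the testing target, not the paper's procedure, and the window and normalizations are illustrative.

```python
import numpy as np

def spectral_density_estimate(X, freqs, m=5):
    """Smoothed-periodogram estimate of the spectral density matrix.

    X     : (n, p) multivariate time series
    freqs : frequencies in [0, pi) at which to estimate
    m     : half-width of the local frequency window
    """
    n, p = X.shape
    d = np.fft.fft(X - X.mean(axis=0), axis=0) / np.sqrt(2 * np.pi * n)
    S = []
    for w in freqs:
        k = int(round(w * n / (2 * np.pi)))
        idx = np.arange(k - m, k + m + 1) % n
        J = d[idx]                        # (2m+1, p) local DFT block
        S.append(J.T @ J.conj() / len(idx))
    return np.array(S)                    # (len(freqs), p, p), complex

# Max-modulus statistic over cross-spectral entries for a global null test.
rng = np.random.default_rng(2)
X = rng.standard_normal((512, 10))
S = spectral_density_estimate(X, freqs=[np.pi / 4, np.pi / 2])
off = ~np.eye(10, dtype=bool)
T = np.abs(S[:, off]).max()
```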
In many psychometric applications, the relationship between the mean of an outcome and a quantitative covariate is too complex to be described by simple parametric functions; instead, flexible nonlinear relationships can be incorporated using penalized splines. Penalized splines can be conveniently represented as a linear mixed effects model (LMM), where the coefficients of the spline basis functions are random effects. The LMM representation of penalized splines makes the extension to multivariate outcomes relatively straightforward. In the LMM, no effect of the quantitative covariate on the outcome corresponds to the null hypothesis that a fixed effect and a variance component are both zero. Under the null, the usual asymptotic chi-square distribution of the likelihood ratio test for the variance component does not hold. Therefore, we propose three permutation tests for the likelihood ratio test statistic: one based on permuting the quantitative covariate and the other two based on permuting residuals. Via simulation, we compare the Type I error rate and power of the three permutation tests, obtained from joint models for multiple outcomes, with those of a commonly used parametric test. The tests are illustrated using data from a stimulant use disorder psychosocial clinical trial.
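The covariate-permutation variant admits a compact generic sketch; the LRT fit itself is left as a placeholder callable, since the penalized-spline LMM machinery is beyond a few lines.

```python
import numpy as np

def permutation_pvalue(x, Y, lrt_stat, n_perm=999, rng=None):
    """Permutation p-value for an LRT statistic with a nonstandard null.

    x        : (n,) quantitative covariate
    Y        : (n, q) multivariate outcomes
    lrt_stat : callable (x, Y) -> likelihood ratio statistic; a stand-in
               for the penalized-spline LMM fit
    """
    rng = rng or np.random.default_rng()
    t_obs = lrt_stat(x, Y)
    t_perm = np.array([lrt_stat(rng.permutation(x), Y)
                       for _ in range(n_perm)])
    # The add-one rule keeps the p-value valid with finitely many permutations.
    return (1 + np.sum(t_perm >= t_obs)) / (n_perm + 1)
```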
Discrete data are abundant and often arise as counts or rounded data. These data commonly exhibit complex distributional features, such as zero-inflation, over-/under-dispersion, boundedness, and heaping, which render many parametric models inadequate. Yet even for parametric regression models, approximations such as MCMC are typically needed for posterior inference. This paper introduces a Bayesian modeling and algorithmic framework that enables semiparametric regression analysis for discrete data with Monte Carlo (not MCMC) sampling. The proposed approach pairs a nonparametric marginal model with a latent linear regression model to encourage both flexibility and interpretability, and delivers posterior consistency even under model misspecification. For a parametric or large-sample approximation of this model, we identify a class of conjugate priors with (pseudo) closed-form posteriors. All posterior and predictive distributions are available analytically or via direct Monte Carlo sampling. These tools are broadly useful for linear regression, nonlinear models via basis expansions, and variable selection with discrete data. Simulation studies demonstrate significant advantages in computing, prediction, estimation, and selection relative to existing alternatives. The approach is applied successfully to self-reported mental health data that exhibit zero-inflation, overdispersion, boundedness, and heaping.
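The rounding construction behind such models lends itself to direct Monte Carlo prediction; below is a minimal sketch of predictive sampling with a floor-type link, where the draws, link, and names are illustrative (the paper's marginal transformation is nonparametric).

```python
import numpy as np

def monte_carlo_predict(X, beta_draws, sigma_draws, rng=None):
    """Posterior predictive counts by rounding a latent Gaussian regression.

    beta_draws  : (S, p) posterior draws of the regression coefficients
    sigma_draws : (S,)   posterior draws of the error scale
    """
    rng = rng or np.random.default_rng()
    S = len(sigma_draws)
    # Latent predictive draws, one column per posterior sample.
    z = X @ beta_draws.T + sigma_draws * rng.standard_normal((X.shape[0], S))
    # Floor-type link maps the latent scale to nonnegative counts.
    return np.maximum(np.floor(z), 0).astype(int)
```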
Non-asymptotic statistical analysis is often missing for modern geometry-aware machine learning algorithms due to the possibly intricate non-linear manifold structure. This paper studies an intrinsic mean model on the manifold of restricted positive semi-definite matrices and provides a non-asymptotic statistical analysis of the Karcher mean. We also consider a general extrinsic signal-plus-noise model, under which a deterministic error bound for the Karcher mean is provided. As an application, we show that the distributed principal component analysis algorithm LRC-dPCA achieves the same performance as the full-sample PCA algorithm. Numerical experiments lend strong support to our theory.
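For intuition, a standard fixed-point iteration for the Karcher mean on the full-rank (SPD) cone under the affine-invariant metric; the paper's manifold of restricted positive semi-definite matrices requires the corresponding quotient geometry, so this sketch is only illustrative.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

def karcher_mean(mats, n_iter=30):
    """Karcher (Frechet) mean of SPD matrices, affine-invariant metric."""
    M = np.mean(mats, axis=0)                 # warm start: Euclidean mean
    for _ in range(n_iter):
        Ms = np.real(sqrtm(M))
        Mis = np.linalg.inv(Ms)
        # Average the Riemannian log-maps at M, then map back via exp.
        T = np.mean([np.real(logm(Mis @ A @ Mis)) for A in mats], axis=0)
        M = Ms @ np.real(expm(T)) @ Ms
    return M

# Toy SPD inputs.
rng = np.random.default_rng(3)
mats = []
for _ in range(10):
    B = rng.standard_normal((4, 4))
    mats.append(np.eye(4) + 0.1 * B @ B.T)
M_hat = karcher_mean(mats)
```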
Specifying reward functions for complex tasks like object manipulation or driving is challenging to do by hand. Reward learning seeks to address this by learning a reward model using human feedback on selected query policies. This shifts the burden of reward specification to the optimal design of the queries. We propose a theoretical framework for studying reward learning and the associated optimal experiment design problem. Our framework models rewards and policies as nonparametric functions belonging to subsets of Reproducing Kernel Hilbert Spaces (RKHSs). The learner receives (noisy) oracle access to a true reward and must output a policy that performs well under the true reward. For this setting, we first derive non-asymptotic excess risk bounds for a simple plug-in estimator based on ridge regression. We then solve the query design problem by optimizing these risk bounds with respect to the choice of query set and obtain a finite-sample statistical rate, which depends primarily on the eigenvalue spectrum of a certain linear operator on the RKHSs. Despite the generality of these results, our bounds are stronger than previous bounds developed for more specialized problems. We specifically show that the well-studied problem of Gaussian process (GP) bandit optimization is a special case of our framework, and that our bounds either improve on or are competitive with known regret guarantees for the Mat\'ern kernel.
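A bare-bones version of the plug-in estimator: kernel ridge regression on noisy oracle answers at the queries, followed by greedy selection among candidate policies. Here policies are compressed to finite feature vectors purely for illustration; in the paper both rewards and policies are RKHS elements.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def plug_in_policy(X_query, y, candidates, lam=1e-2):
    """Ridge-regression plug-in: estimate the reward, pick the best candidate."""
    n = len(y)
    K = rbf(X_query, X_query)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    r_hat = rbf(candidates, X_query) @ alpha      # estimated rewards
    return candidates[np.argmax(r_hat)]

# Noisy oracle answers at 50 queries; choose among 200 candidate policies.
rng = np.random.default_rng(4)
X_query = rng.uniform(size=(50, 2))
y = np.sin(3 * X_query[:, 0]) + 0.1 * rng.standard_normal(50)
best = plug_in_policy(X_query, y, rng.uniform(size=(200, 2)))
```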
We introduce a novel statistical significance-based approach for clustering hierarchical data using semi-parametric linear mixed-effects models designed for responses with laws in the exponential family (e.g., Poisson and Bernoulli). Within the family of semi-parametric mixed-effects models, a latent clustering structure of the highest-level units can be identified by assuming the random effects to follow a discrete distribution with an unknown number of support points. We achieve this by computing $\alpha$-level confidence regions of the estimated support points and identifying statistically different clusters. At each iteration of a tailored Expectation-Maximization algorithm, the two closest estimated support points whose confidence regions overlap are collapsed into a single point. Unlike related state-of-the-art methods that rely on arbitrary thresholds to determine the merging of close discrete masses, the proposed approach relies on conventional statistical confidence levels, thereby avoiding discretionary tuning parameters. To demonstrate the effectiveness of our approach, we apply it to data from the Programme for International Student Assessment (PISA - OECD) to cluster countries based on the rate of innumeracy levels in schools. Additionally, a simulation study and a comparison with classical parametric and state-of-the-art models are provided and discussed.
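The collapsing rule can be sketched in one dimension as follows: find the closest pair of estimated support points whose Wald-type confidence intervals overlap (names and the interval construction are ours; the paper works with confidence regions inside a tailored EM).

```python
import numpy as np
from scipy.stats import norm

def pair_to_collapse(points, ses, alpha=0.05):
    """Closest pair of support points whose (1 - alpha) confidence
    intervals overlap; returns None if all points are separated."""
    z = norm.ppf(1 - alpha / 2)
    order = np.argsort(points)
    pts = np.asarray(points)[order]
    se = np.asarray(ses)[order]
    for i in np.argsort(np.diff(pts)):        # check closest pairs first
        if pts[i] + z * se[i] >= pts[i + 1] - z * se[i + 1]:
            return order[i], order[i + 1]     # indices in the original order
    return None

print(pair_to_collapse([0.0, 0.4, 3.0], [0.2, 0.25, 0.2]))  # -> (0, 1)
```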
In this paper we consider codes in $\mathbb{F}_q^{s\times r}$ with packing radius $R$ with respect to the NRT metric (i.e., when the underlying poset is a disjoint union of chains of the same length), and we establish a necessary condition on the parameters $s$, $r$, and $R$ for the existence of perfect codes. More explicitly, for $r,s\geq 2$ and $R\geq 1$ we prove that if a non-trivial perfect code exists, then $(r+1)(R+1)\leq rs$. We also explore a connection to the knapsack problem and establish a correspondence between perfect codes with $r>R$ and those with $r=R$.
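A quick sanity check of the necessary condition is easy to script; the enumeration below lists small parameter triples $(s, r, R)$ that survive $(r+1)(R+1)\leq rs$ (it says nothing about existence, only about which triples are not ruled out).

```python
# Triples (s, r, R) passing the necessary condition (r+1)(R+1) <= r*s
# for non-trivial perfect codes; passing does not imply existence.
admissible = [(s, r, R)
              for s in range(2, 7)
              for r in range(2, 7)
              for R in range(1, 7)
              if (r + 1) * (R + 1) <= r * s]
print(admissible[:10])
```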