This study clarifies the proper criteria for assessing the modeling capacity of a general tensor model. The work analyzes the problem through the study of tensor ranks, which are not well-behaved quantities for higher-order tensors. To proceed, the author introduces the separability issue to discuss Cannikin's law of tensor modeling. Interestingly, a connection between entanglement, as studied in quantum information theory, and tensor analysis is established, shedding new light on the theoretical understanding of modeling-capacity problems.
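For context, a brief reminder of the notion at stake (standard material, not taken from the study itself): the CP rank of an order-$3$ tensor $\mathcal{T}$ is
$$\operatorname{rank}(\mathcal{T}) \;=\; \min\Bigl\{\, r \;:\; \mathcal{T} = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i \,\Bigr\},$$
which, unlike matrix rank, is NP-hard to compute and is not lower semicontinuous (the border rank can be strictly smaller) -- the ill-behavedness alluded to above.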
We establish the following two main results on order types of points in general position in the plane (realizable simple planar order types, realizable uniform acyclic oriented matroids of rank $3$): (a) The number of extreme points in an $n$-point order type, chosen uniformly at random from all such order types, is on average $4+o(1)$. For labeled order types, this number has average $4- \frac{8}{n^2 - n +2}$ and variance at most $3$. (b) The (labeled) order types read off from a set of $n$ points sampled independently from the uniform measure on a convex planar domain, smooth or polygonal, or from a Gaussian distribution, are concentrated, i.e., such sampling typically encounters only a vanishingly small fraction of all order types of the given size. Result (a) generalizes to arbitrary dimension $d$ for labeled order types, with the average number of extreme points $2d+o(1)$ and constant variance. We also discuss to what extent our methods generalize to the abstract setting of uniform acyclic oriented matroids. Moreover, our methods allow us to show the following relative of the Erd\H{o}s-Szekeres theorem: for any fixed $k$, as $n \to \infty$, a proportion $1 - O(1/n)$ of the $n$-point simple order types contain a triangle enclosing a convex $k$-chain over an edge. For the unlabeled case in (a), we prove that for any antipodal, finite subset of the $2$-dimensional sphere, the group of orientation-preserving bijections is cyclic, dihedral, or one of $A_4$, $S_4$ or $A_5$ (and each case is possible). These are exactly the finite subgroups of $SO(3)$, and our proof follows the lines of their characterization by Felix Klein.
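As a quick sanity check of the labeled formula (our computation, not part of the abstract): for $n=3$, every point of a general-position triple is extreme, and indeed
$$4 - \frac{8}{3^2 - 3 + 2} \;=\; 4 - \frac{8}{8} \;=\; 3.$$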
In this paper we introduce a sketching algorithm for constructing a tensor train representation of a probability density from its samples. Our method deviates from the standard recursive SVD-based procedure for constructing a tensor train. Instead we formulate and solve a sequence of small linear systems for the individual tensor train cores. This approach can avoid the curse of dimensionality that threatens both the algorithmic and sample complexities of the recovery problem. Specifically, for Markov models, we prove that the tensor cores can be recovered with a sample complexity that is constant with respect to the dimension. Finally, we illustrate the performance of the method with several numerical experiments.
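To fix ideas, the following minimal sketch shows the tensor-train data structure that such a method recovers -- the cores, their shapes, and how a value is evaluated from them. It illustrates the TT format only, not the paper's sketching algorithm; all names and shapes are our own choices.

```python
import numpy as np

# A d-dimensional tensor in tensor-train (TT) format is a list of cores
# G[k] of shape (r_{k-1}, n_k, r_k), with boundary ranks r_0 = r_d = 1.

def tt_random_cores(dims, rank, rng):
    """Random nonnegative TT cores with internal ranks all equal to `rank`."""
    ranks = [1] + [rank] * (len(dims) - 1) + [1]
    return [rng.random((ranks[k], n, ranks[k + 1]))
            for k, n in enumerate(dims)]

def tt_eval(cores, idx):
    """Evaluate the TT tensor at a multi-index by chaining core slices."""
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]        # (1, r_{k-1}) @ (r_{k-1}, r_k)
    return float(v[0, 0])

rng = np.random.default_rng(0)
cores = tt_random_cores(dims=(4, 4, 4, 4), rank=3, rng=rng)
print(tt_eval(cores, (1, 0, 3, 2)))   # one entry of the implicit 4x4x4x4 tensor
```

Because each core has only $O(n r^2)$ entries, the representation grows linearly rather than exponentially in the dimension, which is what makes a dimension-free sample complexity plausible.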
We consider the problem of kernel classification. Works on kernel regression have shown that the rate of decay of the prediction error with the number of samples for a large class of data-sets is well characterized by two quantities: the capacity and source of the data-set. In this work, we compute the decay rates for the misclassification (prediction) error under the Gaussian design, for data-sets satisfying source and capacity assumptions. We derive the rates as a function of the source and capacity coefficients for two standard kernel classification settings, namely margin-maximizing Support Vector Machines (SVM) and ridge classification, and contrast the two methods. As a consequence, we find that the known worst-case rates are loose for this class of data-sets. Finally, we show that the rates presented in this work are also observed on real data-sets.
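For readers unfamiliar with these assumptions, the forms commonly used in the kernel literature (our paraphrase; normalizations and conventions may differ from the paper's) are
$$\lambda_k \asymp k^{-\alpha} \quad (\text{capacity } \alpha > 1), \qquad \sum_{k} \lambda_k^{-2r} \langle f^*, \phi_k \rangle^2 < \infty \quad (\text{source } r > 0),$$
where $(\lambda_k, \phi_k)$ are the eigenpairs of the kernel integral operator and $f^*$ is the target function: the capacity measures the effective size of the kernel space, and the source measures how smooth the target is relative to the kernel.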
Factorization of matrices where the rank of the two factors diverges linearly with their sizes has many applications in diverse areas such as unsupervised representation learning, dictionary learning, or sparse coding. We consider a setting where the two factors are generated from known component-wise independent prior distributions, and the statistician observes a (possibly noisy) component-wise function of their matrix product. In the limit where the dimensions of the matrices tend to infinity, but their ratios remain fixed, we expect to be able to derive closed-form expressions for the optimal mean squared error on the estimation of the two factors. However, this remains a very involved mathematical and algorithmic problem. A related, but simpler, problem is extensive-rank matrix denoising, where one aims to reconstruct a matrix with extensive but usually small rank from noisy measurements. In this paper, we approach both these problems using high-temperature expansions at fixed order parameters. This allows us to clarify how previous attempts at solving these problems failed to find an asymptotically exact solution. We provide a systematic way to derive the corrections to these existing approximations, taking into account the structure of correlations particular to the problem. Finally, we illustrate our approach in detail on the case of extensive-rank matrix denoising. We compare our results with known optimal rotationally-invariant estimators, and show how exact asymptotic calculations of the minimal error can be performed using extensive-rank matrix integrals.
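In its simplest Gaussian form (our schematic; scalings and conventions vary across the literature), extensive-rank matrix denoising observes
$$Y = S + \sqrt{\Delta}\, Z, \qquad Z_{ij} \overset{\mathrm{i.i.d.}}{\sim} \mathcal{N}(0, 1/N),$$
where the rank of the $N \times N$ signal $S$ grows proportionally to $N$, and one studies the asymptotic per-entry minimal mean squared error $\frac{1}{N^2}\, \mathbb{E} \| S - \mathbb{E}[S \mid Y] \|_F^2$.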
Practical data assimilation algorithms often contain hyper-parameters, which may arise due to, for instance, the use of auxiliary techniques such as covariance inflation and localization in an ensemble Kalman filter, or the re-parameterization of certain quantities such as model and/or observation error covariance matrices. Given the richness of the established assimilation algorithms, and the abundance of approaches through which hyper-parameters are introduced into them, one may ask whether it is possible to develop a sound and generic method to efficiently choose various types of (sometimes high-dimensional) hyper-parameters. This work aims to explore a feasible, although likely partial, answer to this question. Our main idea builds upon the notion that a data assimilation algorithm with hyper-parameters can be viewed as a parametric mapping that links a set of quantities of interest (e.g., model state variables and/or parameters) to a corresponding set of predicted observations in the observation space. As such, the choice of hyper-parameters can be recast as a parameter estimation problem, in which our objective is to tune the hyper-parameters so that the resulting predicted observations match the real observations well. From this perspective, we propose a hyper-parameter estimation workflow and investigate its performance in an ensemble Kalman filter. In a series of experiments, we observe that the proposed workflow works efficiently even in the presence of a relatively large number (up to $10^3$) of hyper-parameters, and exhibits reasonably good and consistent performance under various conditions.
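The following toy sketch illustrates the recasting in the simplest possible setting (our illustration, not the paper's workflow): a single EnKF analysis step with one hyper-parameter, a covariance inflation factor, selected by grid search so that the predicted observations match the data.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n_ens = 5, 3, 40
H = rng.standard_normal((m, d))                    # observation operator
x_true = rng.standard_normal(d)
y_obs = H @ x_true + 0.1 * rng.standard_normal(m)  # real observations
ens0 = x_true + rng.standard_normal((n_ens, d))    # fixed prior ensemble

def predicted_obs(inflation):
    """One EnKF analysis step with inflated ensemble spread; returns the
    mean predicted observation of the analysis ensemble."""
    ens = ens0.mean(0) + inflation * (ens0 - ens0.mean(0))
    Y = ens @ H.T                              # prior predicted observations
    C_xy = np.cov(ens.T) @ H.T                 # state-observation covariance
    C_yy = np.cov(Y.T) + 0.01 * np.eye(m)      # observation covariance + noise
    K = C_xy @ np.linalg.inv(C_yy)             # Kalman gain
    analysis = ens + (y_obs - Y) @ K.T
    return (analysis @ H.T).mean(0)

# Hyper-parameter choice as parameter estimation: pick the inflation whose
# predicted observations best match the real ones.
grid = np.linspace(0.5, 2.0, 16)
best = min(grid, key=lambda a: np.linalg.norm(predicted_obs(a) - y_obs))
print("selected inflation:", best)
```

In the paper's setting the hyper-parameters can be high-dimensional, so the grid search would be replaced by a proper estimation scheme; the point here is only the predicted-observation objective.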
In modern machine learning, users often have to collaborate to learn the distribution of the data. Communication can be a significant bottleneck. Prior work has studied homogeneous users -- i.e., users whose data follow the same discrete distribution -- and has provided optimal communication-efficient methods for estimating that distribution. However, these methods rely heavily on homogeneity, and are less applicable in the common case when users' discrete distributions are heterogeneous. Here we consider a natural and tractable model of heterogeneity, where users' discrete distributions vary only sparsely, on a small number of entries. We propose a novel two-stage method named SHIFT: first, the users collaborate by communicating with the server to learn a central distribution, relying on methods from robust statistics; then, each user fine-tunes the learned central distribution to estimate their own individual distribution. We show that SHIFT is minimax optimal in our model of heterogeneity and under communication constraints. Further, we provide experimental results using both synthetic data and $n$-gram frequency estimation in the text domain, which corroborate its efficiency.
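A minimal sketch of the two-stage idea (our illustration; it is not the paper's exact estimator and ignores the communication constraints): pool the users' empirical distributions robustly, then let each user apply a sparse correction.

```python
import numpy as np

def central_estimate(emp_dists):
    """Stage 1: coordinate-wise median across users, robust to a sparse
    minority of shifted entries; renormalize to a distribution."""
    p = np.median(emp_dists, axis=0)
    return p / p.sum()

def fine_tune(p_central, emp_dist, k):
    """Stage 2: sparse correction -- overwrite only the k entries where
    the user's empirical distribution deviates most from the center."""
    dev = np.abs(emp_dist - p_central)
    idx = np.argsort(dev)[-k:]
    q = p_central.copy()
    q[idx] = emp_dist[idx]
    return q / q.sum()

rng = np.random.default_rng(0)
base = rng.dirichlet(np.ones(50))                 # shared central distribution
users = []
for _ in range(10):
    q = base.copy()
    q[rng.integers(50, size=2)] += 0.05           # sparse heterogeneity
    users.append(rng.multinomial(2000, q / q.sum()) / 2000)
users = np.array(users)

p_hat = central_estimate(users)                   # stage 1
q_hat = fine_tune(p_hat, users[0], k=2)           # stage 2, user 0
```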
Estimation of a conditional mean (linking a set of features to an outcome of interest) is a fundamental statistical task. While there is an appeal to flexible nonparametric procedures, effective estimation in many classical nonparametric function spaces (e.g., multivariate Sobolev spaces) can be prohibitively difficult -- both statistically and computationally -- especially when the number of features is large. In this paper, we present (penalized) sieve estimators for regression in nonparametric tensor product spaces: these spaces are more amenable to multivariate regression, and allow us, in part, to avoid the curse of dimensionality. Our estimators can be easily applied to multivariate nonparametric problems and have appealing statistical and computational properties. Moreover, they can effectively leverage additional structures such as feature sparsity. In this manuscript, we give theoretical guarantees indicating that the predictive performance of our estimators scales favorably in dimension. In addition, we present numerical examples comparing the finite-sample performance of the proposed estimators with several popular machine learning methods.
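As a concrete illustration of a penalized sieve in a tensor-product space (a minimal sketch under our own choices of basis and penalty, not the paper's exact estimator): regress on all products of univariate cosine features and let an $\ell_1$ penalty exploit sparsity.

```python
import numpy as np
from itertools import product
from sklearn.linear_model import Lasso

def tensor_features(X, degree):
    """Tensor-product sieve: phi_{k_1..k_d}(x) = prod_j cos(pi k_j x_j),
    with each k_j ranging over 0..degree."""
    n, d = X.shape
    uni = [np.cos(np.pi * k * X) for k in range(degree + 1)]  # each (n, d)
    cols = []
    for ks in product(range(degree + 1), repeat=d):
        f = np.ones(n)
        for j, k in enumerate(ks):
            f = f * uni[k][:, j]
        cols.append(f)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.random((500, 3))                          # 3 features on [0, 1]
y = np.sin(2 * X[:, 0]) * X[:, 1] + 0.1 * rng.standard_normal(500)

Phi = tensor_features(X, degree=4)                # 5**3 = 125 basis functions
model = Lasso(alpha=1e-3).fit(Phi, y)             # l1-penalized sieve fit
print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
```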
Higher-order tensor data are prevalent across a wide range of fields, including high-resolution videos, multimodality imaging such as MRI and fMRI scans, commercial networks, and engineering applications such as signal processing. Tucker decomposition is perhaps the most general low-rank approximation method among the versatile decompositions of higher-order tensors, owing to its strong compression ability; yet the statistical properties of the induced Tucker tensor factor model (TuTFaM) remain a challenging and critical open question before it can justify applications in machine learning and beyond. Existing theoretical developments mainly focus on time series, under the assumption of strong auto-correlation among temporally ordered observations, which is ineffective for independent and weakly dependent tensor observations. Under quite mild assumptions, this article starts from the matricization of raw, weakly correlated tensor observations within the TuTFaM setting, and proposes two PCA-based estimation procedures, moPCA and its refinement IPmoPCA, the latter attaining an improved rate of convergence. We establish their asymptotic behavior, including convergence rates and asymptotic distributions of the estimators of the loading matrices, latent tensor factors, and signal parts. The theoretical results reduce to those for low-order tensor factor models in the existing literature. The proposed approaches outperform existing auto-covariance based methods for tensor time series in terms of estimation and tensor reconstruction accuracy, in both simulation experiments and two real data examples.
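The flavor of matricization-plus-PCA estimation can be conveyed by a short HOSVD-style sketch (ours; moPCA and IPmoPCA differ in their details and refinements): unfold each tensor observation along a mode, average the resulting covariance, and take its top eigenvectors as the loading estimate.

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-k matricization: rows indexed by mode k, columns by the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def estimate_loadings(samples, ranks):
    """For each mode, average the unfolding covariance over i.i.d. tensor
    observations and keep its top-r eigenvectors."""
    loadings = []
    for mode, r in enumerate(ranks):
        M = sum(mode_unfold(T, mode) @ mode_unfold(T, mode).T
                for T in samples) / len(samples)
        _, eigvecs = np.linalg.eigh(M)            # ascending eigenvalues
        loadings.append(eigvecs[:, -r:])
    return loadings

rng = np.random.default_rng(0)
A = [rng.standard_normal((10, 2)) for _ in range(3)]   # true loadings
samples = []
for _ in range(200):
    F = rng.standard_normal((2, 2, 2))                 # latent tensor factor
    T = np.einsum('ia,jb,kc,abc->ijk', A[0], A[1], A[2], F)
    samples.append(T + 0.1 * rng.standard_normal((10, 10, 10)))

U_hat = estimate_loadings(samples, ranks=(2, 2, 2))    # loading estimates
```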
By threshold Ornstein-Uhlenbeck we refer to a continuous-time threshold autoregressive process. It follows the Ornstein-Uhlenbeck dynamics when above or below a fixed level, yet at this level (the threshold) its coefficients can be discontinuous. We discuss (quasi-)maximum likelihood estimation of the drift parameters, assuming both continuous-time and discrete-time observations. In the ergodic case, we derive consistency and rates of convergence of these estimators in the long-time and high-frequency regimes. Based on these results, we develop a test for the presence of a threshold in the dynamics. Finally, we apply these statistical tools to the modeling of short-term US interest rates.
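For intuition, here is a simulation sketch of the process via Euler-Maruyama (our illustration; parameter names and values are arbitrary): the drift parameters switch depending on which side of the threshold the process is on.

```python
import numpy as np

def simulate_tou(theta, mu, sigma, r, x0=0.0, T=50.0, dt=1e-3, seed=0):
    """Threshold OU: dX = -theta_i (X - mu_i) dt + sigma dW, where the
    regime i is 0 if X <= r and 1 if X > r."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        i = 0 if x[k] <= r else 1
        drift = -theta[i] * (x[k] - mu[i])
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

# Mean-reversion speed and level both change across the threshold r = 0.
path = simulate_tou(theta=(1.0, 3.0), mu=(-0.5, 0.5), sigma=0.3, r=0.0)
```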
In this paper we study colorings (or tilings) of the two-dimensional grid $\mathbb{Z}^2$. A coloring is said to be valid with respect to a set $P$ of $n\times m$ rectangular patterns if all $n\times m$ sub-patterns of the coloring are in $P$. A coloring $c$ is said to be of low complexity with respect to a rectangle if there exist $m,n\in\mathbb{N}$ and a set $P$ of $n\times m$ rectangular patterns such that $c$ is valid with respect to $P$ and $|P|\leq nm$. Nivat's conjecture, open since it was stated in 1997, asserts that such a coloring is necessarily periodic. In other words, if the conjecture is true, then all colorings valid with respect to such a $P$ with $|P|\leq nm$ must be periodic; we prove the weaker statement that at least one of the valid colorings is periodic. We use this result to investigate the tiling problem, also known as the domino problem, which is well known to be undecidable in its full generality; we show, however, that it is decidable in the low-complexity setting. We then use our result to show that Nivat's conjecture holds for uniformly recurrent configurations. These results also extend to other convex shapes in place of the rectangle.\\ After that, we prove that the $nm$ bound is multiplicatively optimal for the decidability of the domino problem: for all $\varepsilon>0$ it is undecidable to determine whether there exists a valid coloring for a given $m,n\in \mathbb{N}$ and a set $P$ of $n\times m$ rectangular patterns with $|P|\leq (1+\varepsilon)nm$. We prove a slightly better bound in the case where $m=n$, and also construct aperiodic SFTs of fairly low complexity.\\ This paper is an extended version of a paper published in STACS 2020.
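To make the low-complexity condition concrete (our example, not from the abstract): the vertical-stripe coloring $c(x,y) = x \bmod 2$ is periodic with period vector $(2,0)$, and every $n\times m$ window is determined by the parity of the column at which it starts, so it is valid with respect to a set $P$ with
$$|P| = 2 \;\leq\; nm \qquad \text{whenever } nm \geq 2.$$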