Bifurcating Markov chains (BMC) are Markov chains indexed by a full binary tree, representing the evolution of a trait along a population in which each individual has two children. Motivated by functional estimation of the density of the invariant probability measure, which appears as the asymptotic distribution of the trait, we prove consistency and Gaussian fluctuations for a kernel estimator of this density based on late generations. In this setting, it is interesting to note that the distinction between the three ergodicity-rate regimes identified in a previous work (for fluctuations of averages over large generations) disappears. This result is a first step toward going beyond the threshold condition on the ergodicity rate imposed in previous statistical papers on functional estimation.
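As a minimal sketch of such an estimator (not the paper's exact construction), one can simulate a simple bifurcating autoregressive chain in which the two children of each individual evolve independently with the same kernel, collect the last generation, and apply a standard Gaussian kernel density estimator; the autoregression coefficient, tree depth, and bandwidth below are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bifurcating autoregressive chain: each individual x has two
# (independent) children a*x + noise; generation g has 2**g individuals.
a, n_gen = 0.5, 12            # hypothetical ergodicity rate and depth
gen = np.array([0.0])
for _ in range(n_gen):
    noise = rng.normal(size=(2, gen.size))
    gen = (a * gen + noise).ravel()       # both children of every individual

def kde(x, sample, h):
    """Gaussian-kernel estimate of the invariant density at points x."""
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))

xs = np.linspace(-4, 4, 201)
h = gen.size ** (-1 / 5)                  # classical univariate bandwidth rate
est = kde(xs, gen, h)

# Here the invariant law of x' = a x + N(0,1) is N(0, 1/(1-a**2)),
# so the estimate can be checked against the closed form.
true = np.exp(-xs**2 * (1 - a**2) / 2) * np.sqrt((1 - a**2) / (2 * np.pi))
print(np.max(np.abs(est - true)))
```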
The Chebyshev or $\ell_{\infty}$ estimator is an unconventional alternative to ordinary least squares for solving linear regressions. It is defined as the minimizer of the $\ell_{\infty}$ objective function \begin{align*} \hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}} \|\boldsymbol{Y} - \mathbf{X}\boldsymbol{\beta}\|_{\infty}. \end{align*} The asymptotic distribution of the Chebyshev estimator with a fixed number of covariates was recently studied (Knight, 2020), yet finite-sample guarantees and generalizations to high-dimensional settings remain open. In this paper, we develop non-asymptotic upper bounds on the estimation error $\|\hat{\boldsymbol{\beta}}-\boldsymbol{\beta}^*\|_2$ for the Chebyshev estimator $\hat{\boldsymbol{\beta}}$, in a regression setting with uniformly distributed noise $\varepsilon_i\sim U([-a,a])$, where $a$ is either known or unknown. Under relatively mild assumptions on the (random) design matrix $\mathbf{X}$, we bound the error rate by $\frac{C_p}{n}$ with high probability, for some constant $C_p$ depending on the dimension $p$ and the law of the design. Furthermore, we illustrate that there exist designs for which the Chebyshev estimator is (nearly) minimax optimal. In addition, we show that "Chebyshev's LASSO" has advantages over the regular LASSO in high-dimensional situations, provided that the noise is uniform. Specifically, we argue that it achieves a much faster rate of estimation under certain assumptions on the growth rate of the sparsity level and the ambient dimension with respect to the sample size.
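Since the $\ell_{\infty}$ objective is piecewise linear, the estimator can be computed exactly by a linear program. A minimal sketch using SciPy's `linprog` (the toy design, noise level, and coefficients below are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_estimator(X, y):
    """Minimize ||y - X b||_inf via the standard LP reformulation:
    min t  subject to  -t <= y_i - x_i' b <= t."""
    n, p = X.shape
    c = np.r_[np.zeros(p), 1.0]                   # objective: the slack t
    A = np.block([[ X, -np.ones((n, 1))],
                  [-X, -np.ones((n, 1))]])
    b = np.r_[y, -y]
    bounds = [(None, None)] * p + [(0, None)]     # b free, t >= 0
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    return res.x[:p]

# Toy check with uniform noise, where the l_inf estimator is well suited.
rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.uniform(-0.3, 0.3, size=n)
print(chebyshev_estimator(X, y))                  # close to beta
```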
Functional principal component analysis (FPCA) can become invalid when the data involve non-Gaussian features. We therefore aim to develop a general FPCA method that adapts to such non-Gaussian cases. We construct a Kendall's $\tau$ function which shares the same eigenfunctions as the covariance function. The particular formulation of the Kendall's $\tau$ function makes it less sensitive to the underlying data distribution. We further apply it to FPCA estimation and study the corresponding asymptotic consistency. Moreover, the effectiveness of the proposed method is demonstrated through a comprehensive simulation study and an application to physical activity data collected by a wearable accelerometer monitor.
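For intuition, a discretized sketch: on curves sampled on a common grid, the sample spatial (multivariate) Kendall's $\tau$ matrix shares its eigenvectors with the scatter matrix under elliptical models, so it can be eigen-decomposed in place of the sample covariance. The construction below is an illustrative finite-dimensional stand-in for the paper's functional version, with hypothetical heavy-tailed data.

```python
import numpy as np

def kendalls_tau_operator(curves):
    """Sample spatial Kendall's tau matrix of curves discretized on a common
    grid (rows = curves): average of uu' over normalized pairwise
    differences u. Its eigenvectors serve as distribution-robust surrogates
    for the covariance eigenfunctions."""
    n, m = curves.shape
    K = np.zeros((m, m))
    for i in range(n):
        d = curves[i] - curves[i + 1:]              # pairwise differences
        norms2 = np.einsum("ij,ij->i", d, d)
        d = d[norms2 > 0] / np.sqrt(norms2[norms2 > 0])[:, None]
        K += d.T @ d
    return K / (n * (n - 1) / 2)

# Hypothetical example: heavy-tailed (t_2) scores, where the usual sample
# covariance is unreliable.
rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)
scores = rng.standard_t(df=2, size=(200, 2))        # non-Gaussian scores
basis = np.vstack([np.sin(np.pi * grid), np.sin(2 * np.pi * grid)])
curves = scores @ basis
evals, evecs = np.linalg.eigh(kendalls_tau_operator(curves))
# The leading eigenvectors evecs[:, -2:] approximate the two basis
# functions up to sign and scale.
```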
Suppose that particles are randomly distributed in $\mathbb{R}^d$ and are subject to identical stochastic motion, independently of each other. The Smoluchowski process describes the fluctuations of the number of particles in an observation region over time. This paper studies properties of Smoluchowski processes and considers related statistical problems. In the first part of the paper we revisit probabilistic properties of the Smoluchowski process in a unified and principled way: explicit formulas for generating functionals and moments are derived, conditions for stationarity and Gaussian approximation are discussed, and relations to other stochastic models are highlighted. The second part deals with statistics of Smoluchowski processes. We consider two different models of the particle displacement process: undeviated uniform motion (a particle moves with a random constant velocity along a straight line) and Brownian motion. In the setting of undeviated uniform motion we study the problems of estimating the mean speed and the speed distribution, while for the Brownian displacement model we consider the problem of estimating the diffusion coefficient. In all these settings we develop estimators with provable accuracy guarantees.
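A one-dimensional simulation sketch of the Smoluchowski count process under undeviated uniform motion (all laws and constants below are hypothetical): particles are Poisson-scattered on a large interval, each moves with a constant random speed, and we record the number inside a fixed window over time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Particles Poisson-scattered on a large 1-D region, each moving with a
# constant random velocity (undeviated uniform motion).
intensity, half_width = 5.0, 100.0
n = rng.poisson(intensity * 2 * half_width)
x0 = rng.uniform(-half_width, half_width, size=n)   # initial positions
v = rng.normal(0.0, 1.0, size=n)                    # random constant speeds

# Smoluchowski process: number of particles in the window [0, 1] over time.
times = np.linspace(0.0, 20.0, 201)
pos = x0[None, :] + times[:, None] * v[None, :]
counts = ((pos >= 0.0) & (pos <= 1.0)).sum(axis=1)

# Stationarity check: the mean count stays near intensity * |window|.
print(counts.mean(), intensity * 1.0)
```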
This paper describes a new class of capture-recapture models for closed populations when individual covariates are available. The novelty consists in combining a latent class model for the capture probabilities, where both the class weights and the conditional distributions given the latent class may depend on covariates, with a model for the marginal distribution of the available covariates, as in Liu et al. (2017, Biometrika). In addition, the conditional distributions given the latent class and covariates are allowed to accommodate any general form of serial dependence. A Fisher scoring algorithm for maximum likelihood estimation is presented, and a powerful result based on the implicit function theorem is used to show that the marginal distribution of the observed covariates is uniquely determined once an estimate of the probabilities of never being captured is available. Asymptotic results are outlined, and a procedure for constructing likelihood-based confidence intervals for the population total is presented. Two real-data examples illustrate the proposed approach.
We propose straightforward nonparametric estimators for the mean and covariance functions of functional data. Our setup covers a wide range of practical situations. The random trajectories are not necessarily differentiable, have unknown regularity, and are measured with error at discrete design points. The measurement error may be heteroscedastic. The design points may be either randomly drawn or common to all curves. The definition of our nonparametric estimators depends on the local regularity of the stochastic process generating the functional data. We first propose a simple estimator of this local regularity which exploits the replication and regularization features of functional data. Next, we use a "smoothing first, then estimate" approach for the mean and covariance functions. The new nonparametric estimators achieve optimal rates of convergence. They can be applied to both sparsely and densely sampled curves, are easy to calculate and to update, and perform well in simulations. Simulations built upon a real-data example on household power consumption illustrate the effectiveness of the new approach.
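A minimal sketch of the "smoothing first, then estimate" idea (it omits the paper's regularity-adaptive bandwidth choice and uses a fixed bandwidth and hypothetical data): smooth each noisy curve onto a common grid, then average the smoothed curves pointwise for the mean and take their empirical covariance.

```python
import numpy as np

def smooth_curve(t_obs, y_obs, grid, h):
    """Nadaraya-Watson (Gaussian kernel) smoother of one noisy curve."""
    w = np.exp(-0.5 * ((grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

# Hypothetical data: noisy curves observed at random design points.
rng = np.random.default_rng(4)
grid = np.linspace(0, 1, 101)
n_curves, n_points, h = 100, 30, 0.05   # fixed bandwidth for illustration;
                                        # the paper drives this choice via
                                        # the estimated local regularity
smoothed = []
for _ in range(n_curves):
    t = np.sort(rng.uniform(0, 1, n_points))
    y = np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n_points)
    smoothed.append(smooth_curve(t, y, grid, h))
smoothed = np.asarray(smoothed)

mu_hat = smoothed.mean(axis=0)            # mean function estimate
cov_hat = np.cov(smoothed, rowvar=False)  # covariance function estimate
print(np.abs(mu_hat - np.sin(2 * np.pi * grid)).max())
```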
We study ergodic properties of some Markov chain models in random environments for which the random Markov kernels defining the dynamics satisfy the usual drift and small-set conditions, but with random coefficients. In particular, we adapt a standard coupling scheme used for obtaining geometric ergodicity of homogeneous Markov chains to the random environment case, and we prove the existence of a process of random invariant probability measures for such chains, in the spirit of Kifer's approach for chains satisfying Doeblin-type conditions. We then deduce ergodic properties of such chains when the environment is itself ergodic. Our results complement and sharpen existing ones by providing rather weak and easily checkable assumptions on the random Markov kernels. As a by-product, we obtain a framework for studying some time series models with strictly exogenous covariates. We illustrate our results with autoregressive time series with functional coefficients and some threshold autoregressive processes.
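As a concrete instance of the time-series application, here is a small simulation of an autoregressive model with a functional coefficient driven by a strictly exogenous covariate; the coefficient function and the laws below are hypothetical illustrations, not the paper's assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(z):
    """Functional autoregressive coefficient driven by the covariate z,
    kept inside (-1, 1) so a drift condition plausibly holds on average."""
    return 0.9 * np.tanh(z)

T = 10_000
Z = rng.normal(size=T)        # strictly exogenous environment (i.i.d. here)
X = np.zeros(T)
for t in range(1, T):
    X[t] = f(Z[t]) * X[t - 1] + rng.normal()

# Under ergodicity of the pair (environment, chain), time averages stabilise.
print(X.mean(), (X ** 2).mean())
```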
The combination of numerical integration and deep learning, i.e., the ODE-net, has been successfully employed in a variety of applications. In this work, we introduce inverse modified differential equations (IMDEs) to contribute to the behaviour and error analysis of the discovery of dynamics using ODE-nets. We show that the difference between the learned ODE and the truncated IMDE is bounded by the sum of the learning loss and a discrepancy which can be made sub-exponentially small. In addition, we deduce that the total error of the ODE-net is bounded by the sum of the discretization error and the learning loss. Furthermore, with the help of IMDEs, theoretical results on learning Hamiltonian systems are derived. Several experiments are performed to numerically verify our theoretical results.
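The IMDE phenomenon can be seen already in a linear toy problem: fitting an explicit-Euler "ODE-net" with a linear vector field to exact flow snapshots recovers the inverse modified field $(e^{Ah}-I)/h = A + \frac{h}{2}A^2 + \dots$, not $A$ itself. A sketch under these assumptions (hypothetical system and step size):

```python
import numpy as np
from scipy.linalg import expm

# True linear dynamics dx/dt = A x, observed through exact flow snapshots.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
h = 0.1
Phi = expm(A * h)                          # exact one-step flow map

rng = np.random.default_rng(6)
X0 = rng.normal(size=(2, 1000))            # snapshot pairs (x_t, x_{t+h})
X1 = Phi @ X0

# A linear "ODE-net" with the explicit Euler integrator, f(x) = B x:
# least-squares fit of x1 = x0 + h * B x0.
B = ((X1 - X0) / h) @ np.linalg.pinv(X0)

# The learned field matches the inverse modified field (expm(Ah) - I)/h,
# i.e. A + (h/2)A^2 + ..., rather than A itself.
print(np.abs(B - A).max())                        # O(h) gap to the truth
print(np.abs(B - (Phi - np.eye(2)) / h).max())    # ~ 0: equals the IMDE field
```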
The generalization error of a classifier is related to the complexity of the set of functions among which the classifier is chosen. Roughly speaking, the more complex the family, the greater the potential disparity between the training error and the population error of the classifier. This principle is embodied, in layman's terms, by Occam's razor, which suggests favoring low-complexity hypotheses over complex ones. We study a family of low-complexity classifiers that threshold the one-dimensional feature obtained by projecting the data on a random line, after embedding the data into a higher-dimensional space parametrized by monomials of order up to k. More specifically, the extended data are projected n times, and the best classifier among those n (based on its performance on the training data) is chosen. We obtain a bound on the generalization error of these low-complexity classifiers. The bound is less than that of any classifier with a non-trivial VC dimension, and thus less than that of a linear classifier. We also show that, given full knowledge of the class-conditional densities, the error of these classifiers converges to the optimal (Bayes) error as k and n go to infinity; if only a training dataset is given, we show that the classifiers perfectly classify all the training points as k and n go to infinity.
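A sketch of the procedure as described, with hypothetical data and sizes: embed the data via monomials of degree up to k, draw n random directions, and keep the threshold classifier with the lowest training error.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_embed(X, k):
    """All monomials of the coordinates of X of total degree <= k."""
    n, d = X.shape
    cols = [np.ones(n)]
    for deg in range(1, k + 1):
        cols += [X[:, list(c)].prod(axis=1)
                 for c in combinations_with_replacement(range(d), deg)]
    return np.column_stack(cols)

def best_stump_error(s, y):
    """Lowest training error of a threshold rule (either orientation)
    applied to the 1-D scores s, for 0/1 labels y."""
    n = y.size
    ones_left = np.r_[0, np.cumsum(y[np.argsort(s)])]   # ones below each cut
    err = 2 * ones_left + (n - y.sum()) - np.arange(n + 1)
    return min(err.min(), (n - err).min()) / n

# Hypothetical data: a circular concept, which becomes linearly separable
# after the degree-2 monomial embedding.
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)

k, n_proj = 2, 500
Z = monomial_embed(X, k)
errors = [best_stump_error(Z @ rng.normal(size=Z.shape[1]), y)
          for _ in range(n_proj)]
print(min(errors))   # training error of the selected random-line classifier
```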
Graph Neural Networks (GNNs) come in many flavors, but they should always be either invariant (a permutation of the nodes of the input graph does not affect the output) or equivariant (a permutation of the input permutes the output). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or an equivariant linear operator. Recently, Maron et al. (2019) showed that by allowing higher-order tensorization inside the network, universal invariant GNNs can be obtained. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which appears in many practical applications but has been less studied from a theoretical point of view. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Finally, unlike many previous settings that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can approximate uniformly well a function defined on graphs of varying size.
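An illustrative single-hidden-layer invariant network in this spirit (using only a few hand-picked permutation-equivariant linear operations on the adjacency matrix, not the full higher-order construction): each channel applies an equivariant linear map and a ReLU, followed by an invariant readout.

```python
import numpy as np

def equivariant_linear(A, w):
    """A small permutation-equivariant linear layer on an adjacency matrix:
    a weighted combination of a few equivariant operations."""
    n = A.shape[0]
    ones = np.ones((n, n))
    ops = [A,                                   # identity
           A.T,                                 # transpose
           A.sum(1, keepdims=True) * ones / n,  # broadcast row sums
           A.sum(0, keepdims=True) * ones / n,  # broadcast column sums
           A.sum() * ones / n**2,               # broadcast total sum
           np.diag(np.diag(A))]                 # keep the diagonal
    return sum(wi * op for wi, op in zip(w, ops))

def invariant_gnn(A, W, v):
    """One hidden layer: channels = equivariant linear map + ReLU, then an
    invariant readout (sum of entries and trace, per channel)."""
    feats = []
    for w in W:                                 # one weight vector per channel
        H = np.maximum(equivariant_linear(A, w), 0.0)
        feats += [H.sum(), np.trace(H)]         # permutation-invariant stats
    return np.dot(v, feats)

# Invariance check: a permuted graph yields the same output.
rng = np.random.default_rng(8)
A = (rng.random((6, 6)) < 0.4).astype(float)
W, v = rng.normal(size=(4, 6)), rng.normal(size=8)
P = np.eye(6)[rng.permutation(6)]
print(invariant_gnn(A, W, v), invariant_gnn(P @ A @ P.T, W, v))
```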
This paper addresses the problem of estimating the viewpoint of an object in a given image. It presents five key insights that should be taken into consideration when designing a CNN that solves this problem. Based on these insights, the paper proposes a network in which (i) the architecture jointly solves detection, classification, and viewpoint estimation; (ii) new types of data are added to the training set; and (iii) a novel loss function, which takes into account both the geometry of the problem and the new types of data, is proposed. Our network improves the state-of-the-art results for this problem by 9.8%.