In this study, we develop an asymptotic theory of nonparametric regression for locally stationary functional time series. First, we introduce the notion of a locally stationary functional time series (LSFTS) that takes values in a semi-metric space. We then propose a nonparametric regression model for LSFTS whose regression function changes smoothly over time. We establish uniform convergence rates for a class of kernel estimators, including the Nadaraya-Watson (NW) estimator of the regression function, and prove a central limit theorem for the NW estimator.
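As an illustration of the estimator's form (not the paper's implementation), here is a minimal Python sketch of an NW-type estimator for curves that combines a kernel in rescaled time $i/n$ with a kernel in a semi-metric distance between curves; the Epanechnikov-type kernel, the $L^2$ semi-metric, and the toy data are all assumptions made for the example.

```python
import numpy as np

def nw_lsfts(Y, X, t_obs, u, x, h_time, h_fun, metric):
    """Nadaraya-Watson estimate of m(u, x) for a locally stationary
    functional time series: a kernel in rescaled time is combined with
    a kernel in the semi-metric distance d(X_i, x)."""
    K = lambda v: np.maximum(1.0 - v ** 2, 0.0)       # Epanechnikov-type kernel
    d = np.array([metric(Xi, x) for Xi in X])
    w = K((u - t_obs) / h_time) * K(d / h_fun)
    return np.sum(w * Y) / w.sum() if w.sum() > 0 else np.nan

# toy data (assumption for illustration): curves on a common grid
rng = np.random.default_rng(0)
n, p = 200, 50
grid = np.linspace(0.0, 1.0, p)
X = [np.sin(2 * np.pi * rng.uniform(1, 3) * grid) for _ in range(n)]
t_obs = np.arange(1, n + 1) / n                       # rescaled time i/n
dx = grid[1] - grid[0]
Y = np.array([t + np.sum(Xi) * dx + 0.1 * rng.standard_normal()
              for t, Xi in zip(t_obs, X)])
l2 = lambda f, g: np.sqrt(np.sum((f - g) ** 2) * dx)  # L2 semi-metric
print(nw_lsfts(Y, X, t_obs, u=0.5, x=X[0], h_time=0.2, h_fun=0.5, metric=l2))
```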
We observe $n$ pairs of independent random variables $X_{1}=(W_{1},Y_{1}),\ldots,X_{n}=(W_{n},Y_{n})$ and assume, possibly incorrectly, that for each $i\in\{1,\ldots,n\}$ the conditional distribution of $Y_{i}$ given $W_{i}$ belongs to a given exponential family with real parameter $\theta_{i}^{\star}=\boldsymbol{\theta}^{\star}(W_{i})$, where $\boldsymbol{\theta}^{\star}$ is an unknown function of the covariate $W_{i}$. Given a model $\boldsymbol{\overline\Theta}$ for $\boldsymbol{\theta}^{\star}$, we propose an estimator $\boldsymbol{\widehat \theta}$ with values in $\boldsymbol{\overline\Theta}$ whose construction does not depend on the distribution of the $W_{i}$. We show that $\boldsymbol{\widehat \theta}$ is robust to contamination, outliers, and model misspecification. We establish non-asymptotic exponential inequalities for the upper deviations of a Hellinger-type distance between the true distribution of the data and the one estimated from $\boldsymbol{\widehat \theta}$. We deduce a uniform risk bound for $\boldsymbol{\widehat \theta}$ over the class of H\"olderian functions and prove the optimality of this bound up to a logarithmic factor. Finally, we provide an algorithm for computing $\boldsymbol{\widehat \theta}$ when $\boldsymbol{\theta}^{\star}$ is assumed to belong to functional classes of low or medium dimension (in a suitable sense) and, in a simulation study, we compare the performance of $\boldsymbol{\widehat \theta}$ to that of the MLE and of median-based estimators. The proof of our main result relies on an upper bound, with explicit numerical constants, on the expectation of the supremum of an empirical process over a VC-subgraph class; this bound may be of independent interest.
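For intuition about the Hellinger-type distance, consider the Poisson case (one member of an exponential family; this choice and the candidate functions below are illustrative assumptions, not the paper's construction). The squared Hellinger distance between $\mathrm{Poisson}(\lambda)$ and $\mathrm{Poisson}(\mu)$ has the closed form $h^{2} = 1-\exp(-(\sqrt{\lambda}-\sqrt{\mu})^{2}/2)$, and an empirical Hellinger-type risk averages it over the observed covariates:

```python
import numpy as np

def hellinger2_poisson(lam, mu):
    """Squared Hellinger distance between Poisson(lam) and Poisson(mu):
    h^2 = 1 - exp(-(sqrt(lam) - sqrt(mu))^2 / 2)."""
    return 1.0 - np.exp(-0.5 * (np.sqrt(lam) - np.sqrt(mu)) ** 2)

# illustrative Hellinger-type risk between a true parameter function and a
# candidate, averaged over the observed covariates W_i (toy setup)
rng = np.random.default_rng(1)
W = rng.uniform(0, 1, size=500)
theta_star = lambda w: 1.0 + 4.0 * w          # true (unknown) parameter function
theta_hat  = lambda w: 1.2 + 3.8 * w          # some candidate in the model
risk = np.mean(hellinger2_poisson(theta_star(W), theta_hat(W)))
print(f"empirical Hellinger-type risk: {risk:.4f}")
```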
In this study, we propose a function-on-function linear quantile regression model that allows for more than one functional predictor, yielding a more flexible and robust approach. In the estimation phase, the model is first reduced to a finite-dimensional space via functional principal component analysis: the model is approximated using the estimated functional principal component functions, and the parameters of the quantile regression model are estimated from the principal component scores. In addition, we propose a Bayesian information criterion to determine the optimal number of truncation constants used in the functional principal component decomposition. Moreover, a stepwise forward procedure combined with the Bayesian information criterion is used to select the significant predictors for inclusion in the model. We employ a nonparametric bootstrap procedure to construct prediction intervals for the response functions. The finite-sample performance of the proposed method is evaluated via several Monte Carlo experiments and an empirical data example, and the results are compared with those from existing models.
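A minimal sketch of the estimation phase under simplifying assumptions of our own (curves discretized on a common grid, FPCA computed via an SVD, a fixed number of components, and statsmodels' QuantReg standing in for the quantile regression step; the toy data are invented):

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

def fpca_scores(curves, n_comp):
    """FPCA on curves discretized over a common grid (rows = curves):
    returns the mean function, leading eigenfunctions, and scores."""
    mean = curves.mean(axis=0)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    phi = Vt[:n_comp]                      # estimated principal component functions
    scores = (curves - mean) @ phi.T       # principal component scores
    return mean, phi, scores

# toy function-on-function data: Y(t) depends on X(s)
rng = np.random.default_rng(2)
n, p = 150, 40
grid = np.linspace(0, 1, p)
X = rng.standard_normal((n, 3)) @ np.array([np.sin(np.pi * grid),
                                            np.cos(np.pi * grid), grid])
Y = 0.8 * X + 0.3 * rng.standard_normal((n, p))

mx, phix, sx = fpca_scores(X, n_comp=3)
my, phiy, sy = fpca_scores(Y, n_comp=3)

# quantile regression of each response score on the predictor scores
tau = 0.5
coefs = [QuantReg(sy[:, k], sm.add_constant(sx)).fit(q=tau).params
         for k in range(3)]
pred_scores = np.column_stack([sm.add_constant(sx) @ b for b in coefs])
Y_pred = my + pred_scores @ phiy           # predicted median response curves
print("mean squared error:", np.mean((Y - Y_pred) ** 2))
```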
We describe a nonparametric prediction algorithm for spatial data, based on a canonical factorization of the spectral density function. We provide theoretical results showing that the predictor has desirable asymptotic properties. Finite-sample performance is assessed in a Monte Carlo study that also compares our algorithm with a rival nonparametric method based on the infinite AR representation of the dynamics of the data. Finally, we apply our methodology to predict house prices in Los Angeles.
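To make the canonical factorization concrete, here is a one-dimensional time-series analogue (an assumption for illustration; the paper works with spatial data): the classical cepstral method factors a spectral density into its minimum-phase (canonical) factor, with the one-step prediction error variance given by Kolmogorov's formula.

```python
import numpy as np

def canonical_factor(spec, n_coef=10):
    """Canonical (minimum-phase) factorization of a spectral density via the
    cepstrum: writes f(w) = sigma2 * |b(e^{-iw})|^2 with b(0) = 1.
    `spec` is the density evaluated on a regular grid of [0, 2*pi)."""
    c = np.fft.ifft(np.log(spec)).real         # cepstral coefficients c_k
    sigma2 = np.exp(c[0])                      # Kolmogorov one-step error variance
    # exponentiate the causal part of the cepstrum to get the MA weights
    m = len(spec)
    causal = np.zeros(m)
    causal[1:m // 2] = c[1:m // 2]
    b = np.fft.ifft(np.exp(np.fft.fft(causal))).real[:n_coef]
    return sigma2, b

# AR(1) spectral density f(w) = 1 / |1 - 0.6 e^{-iw}|^2, whose canonical
# factor has MA weights 0.6^k and innovation variance 1
w = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
spec = 1.0 / np.abs(1 - 0.6 * np.exp(-1j * w)) ** 2
sigma2, b = canonical_factor(spec)
print(sigma2, b[:4])    # ~1.0 and ~(1, 0.6, 0.36, 0.216)
```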
In problems with large amounts of missing data, one must model two distinct data-generating processes: the outcome process, which generates the response, and the missing data mechanism, which determines which data we observe. Under the ignorability condition of Rubin (1976), however, likelihood-based inference for the outcome process does not depend on the missing data mechanism, so that only the former needs to be estimated; partly because of this simplification, ignorability is often adopted as a baseline assumption. We study the implications of Bayesian ignorability in the presence of high-dimensional nuisance parameters and argue that ignorability is typically incompatible with sensible prior beliefs about the amount of selection bias. We show that, for many problems, ignorability directly implies that the prior on the selection bias is tightly concentrated around zero. We demonstrate this on several models of practical interest and characterize the effect of ignorability on the posterior distribution for high-dimensional linear models with a ridge regression prior. We then show how to build high-dimensional models that encode sensible beliefs about the selection bias, and we identify certain narrow circumstances under which ignorability is less problematic.
This study concerns estimating the probability distribution of the sample maximum. The traditional approach is parametric fitting of the limiting distribution, the generalized extreme value (GEV) distribution; in finite samples, however, this model is misspecified to some extent. We propose a plug-in nonparametric estimator that requires no model specification. We prove that the asymptotic convergence rates of both estimators depend on the tail index and the second-order parameter. As the tail becomes lighter, the misspecification of the parametric fit grows, which means the convergence rate slows. In the Weibull case, which can be seen as the limit of tail lightness, only the nonparametric distribution estimator remains consistent. Finally, we report the results of numerical experiments.
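A minimal sketch of the two competing estimators under assumed standard normal data (a light, Gumbel-domain tail; the block scheme and sample sizes are our own choices): the plug-in estimator raises the empirical CDF to the power $m$, while the parametric alternative fits the limiting GEV to block maxima via scipy's genextreme.

```python
import numpy as np
from scipy.stats import genextreme, norm

rng = np.random.default_rng(3)
n, m = 2000, 50
sample = rng.standard_normal(n)              # i.i.d. draws from F (light tail)

def plugin_max_cdf(sample, m, x):
    """Plug-in nonparametric estimator of P(max of m draws <= x):
    the empirical CDF raised to the power m, with no model specification."""
    Fhat = np.searchsorted(np.sort(sample), x, side="right") / len(sample)
    return Fhat ** m

# parametric alternative: fit the limiting GEV distribution to block maxima
block_max = rng.standard_normal((500, m)).max(axis=1)
c, loc, scale = genextreme.fit(block_max)

x = 3.0
print("plug-in :", plugin_max_cdf(sample, m, x))
print("GEV fit :", genextreme.cdf(x, c, loc=loc, scale=scale))
print("truth   :", norm.cdf(x) ** m)         # exact value Phi(x)^m
```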
Large-scale modern data often involve estimation of and testing for high-dimensional unknown parameters. It is desirable to identify the sparse signals, ``the needles in the haystack'', with accuracy and false discovery control. However, the unprecedented complexity and heterogeneity of modern data structures require new machine learning tools to effectively exploit commonalities and to robustly adjust for both sparsity and heterogeneity. In addition, estimates of high-dimensional parameters often lack uncertainty quantification. In this paper, we propose a novel Spike-and-Nonparametric mixture prior (SNP) -- a spike to promote sparsity and a nonparametric structure to capture signals. In contrast to state-of-the-art methods, the proposed method solves the estimation and testing problems simultaneously, with several merits: 1) accurate sparsity estimation; 2) point estimates with a shrinkage/soft-thresholding property; 3) credible intervals for uncertainty quantification; and 4) an optimal multiple testing procedure that controls the false discovery rate. Our method exhibits promising empirical performance on both simulated data and a gene expression case study.
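A much-simplified two-group sketch of the spike-plus-signal idea (a single Gaussian stands in for the nonparametric component, and the EM routine and decision rule below are illustrative assumptions, not the paper's algorithm): it yields a sparsity estimate, shrinkage point estimates, and a local-fdr-based multiple testing rule at once.

```python
import numpy as np
from scipy.stats import norm

# z_i ~ (1 - pi1) * N(0, 1)  [spike / null]  +  pi1 * N(0, 1 + tau2)  [signal]
rng = np.random.default_rng(4)
n, pi1, tau2 = 5000, 0.1, 9.0
theta = np.where(rng.uniform(size=n) < pi1, rng.normal(0, np.sqrt(tau2), n), 0.0)
z = theta + rng.standard_normal(n)

# EM for the mixing weight and the signal variance
p1, t2 = 0.5, 4.0
for _ in range(200):
    f0 = (1 - p1) * norm.pdf(z)
    f1 = p1 * norm.pdf(z, scale=np.sqrt(1 + t2))
    g = f1 / (f0 + f1)                       # P(signal | z_i)
    p1 = g.mean()
    t2 = max((g * z ** 2).sum() / g.sum() - 1.0, 1e-8)

lfdr = 1 - g                                  # local false discovery rate
post_mean = g * (t2 / (1 + t2)) * z           # shrinkage/soft-thresholding effect
# step-up rule: reject while the running mean of sorted lfdrs stays below alpha
alpha, order = 0.1, np.argsort(lfdr)
k = np.searchsorted(np.cumsum(lfdr[order]) / np.arange(1, n + 1),
                    alpha, side="right")
print(f"pi1_hat={p1:.3f}, tau2_hat={t2:.2f}, rejections={k}")
```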
Estimating structures at high or low quantiles has become an important subject and has attracted increasing attention across numerous fields. However, because of data sparsity at the tails, it is usually challenging to obtain reliable estimates, especially for high-dimensional data. This paper suggests a flexible parametric structure for the tails, which enables us to conduct estimation at quantile levels with rich observations and then extrapolate the fitted structures to the far tails. The proposed model depends on some quantile indices and is hence called quantile index regression. Moreover, the composite quantile regression method is employed to obtain non-crossing quantile estimators, and we further establish their theoretical properties, including asymptotic normality for the case with low-dimensional covariates and non-asymptotic error bounds for the case with high-dimensional covariates. Simulation studies and an empirical example are presented to illustrate the usefulness of the new model.
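A minimal sketch of the composite quantile regression building block (plain CQR with a common slope vector, solved as a linear program via scipy; the model and data below are illustrative assumptions, not the paper's quantile index structure). Because all levels share the slope, the fitted quantile lines cannot cross:

```python
import numpy as np
from scipy.optimize import linprog

def composite_quantile_regression(X, y, taus):
    """Composite quantile regression as a linear program: one common slope
    vector beta and one intercept q_k per quantile level, minimizing
    sum_k sum_i rho_{tau_k}(y_i - q_k - x_i' beta)."""
    n, p = X.shape
    K = len(taus)
    # variables: [beta (free), q (free), u (>= 0), v (>= 0)] with
    # y_i - q_k - x_i' beta = u_ik - v_ik (positive/negative residual parts)
    c = np.concatenate([np.zeros(p + K),
                        np.repeat(taus, n),                   # weights on u
                        np.repeat(1 - np.asarray(taus), n)])  # weights on v
    A_eq = np.zeros((n * K, p + K + 2 * n * K))
    b_eq = np.tile(y, K)
    for k in range(K):
        rows = slice(k * n, (k + 1) * n)
        A_eq[rows, :p] = X
        A_eq[rows, p + k] = 1.0
    A_eq[:, p + K:p + K + n * K] = np.eye(n * K)
    A_eq[:, p + K + n * K:] = -np.eye(n * K)
    bounds = [(None, None)] * (p + K) + [(0, None)] * (2 * n * K)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.x[:p], res.x[p:p + K]

rng = np.random.default_rng(5)
n, p = 200, 2
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -0.5]) + rng.standard_t(df=3, size=n)
beta, q = composite_quantile_regression(X, y, taus=[0.1, 0.5, 0.9])
print("beta:", beta, "intercepts:", q)
```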
We consider nonparametric prediction with multiple covariates, in particular categorical or functional predictors, or a mixture of both. The proposed method is based on an extension of the Nadaraya-Watson estimator in which a kernel function is applied to a linear combination of distance measures, each computed on a single covariate, with weights estimated from the training data. The dependent variable can be categorical (binary or multi-class) or continuous, so we consider both classification and regression problems. The methodology is illustrated and evaluated on artificial and real-world data. In particular, we observe that prediction accuracy can be increased and that irrelevant, noisy variables can be identified and removed by "downgrading" the corresponding distance measures in a completely data-driven way.
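A minimal sketch of this weighted-distance NW estimator under assumptions of our own (Gaussian kernel, leave-one-out squared error minimized by Nelder-Mead, and toy mixed-type data): the weight estimated for the pure-noise covariate should be driven toward zero, "downgrading" its distance measure.

```python
import numpy as np
from scipy.optimize import minimize

def nw_predict(D_list, y, weights, h=1.0):
    """Nadaraya-Watson fit with a kernel applied to a weighted sum of
    per-covariate distance matrices D_list (one n x n matrix each)."""
    D = sum(w * Dj for w, Dj in zip(weights, D_list))
    K = np.exp(-(D / h) ** 2)
    np.fill_diagonal(K, 0.0)             # leave-one-out weights
    return K @ y / (K.sum(axis=1) + 1e-12)

def loo_cv(weights, D_list, y):
    w = np.abs(weights)                  # keep weights nonnegative
    return np.mean((y - nw_predict(D_list, y, w)) ** 2)

# mixed covariates: one continuous, one categorical, one pure-noise variable
rng = np.random.default_rng(6)
n = 300
x_cont = rng.uniform(-2, 2, n)
x_cat = rng.integers(0, 3, n)
x_noise = rng.standard_normal(n)
y = np.sin(x_cont) + 0.5 * (x_cat == 1) + 0.1 * rng.standard_normal(n)

D_list = [np.abs(x_cont[:, None] - x_cont[None, :]),
          (x_cat[:, None] != x_cat[None, :]).astype(float),  # 0/1 mismatch
          np.abs(x_noise[:, None] - x_noise[None, :])]
res = minimize(loo_cv, x0=np.ones(3), args=(D_list, y), method="Nelder-Mead")
print("estimated weights:", np.abs(res.x))  # noise weight should shrink to ~0
```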
We study the least squares estimator, in the framework of simple linear regression, when the error term $\varepsilon$ of the linear model follows a uniform distribution. In particular, we derive the law of this estimator and prove some convergence properties.
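A quick Monte Carlo sketch (our own toy setup, not the paper's derivation) comparing the simulated variance of the least squares slope under Uniform$(-1,1)$ errors with the usual formula $\mathrm{Var}(\widehat b)=\sigma^{2}/\sum_i (x_i-\bar x)^{2}$, where $\sigma^{2}=\mathrm{Var}(\varepsilon)=1/3$:

```python
import numpy as np

# Monte Carlo look at the law of the least squares slope when the errors
# are Uniform(-1, 1) rather than Gaussian
rng = np.random.default_rng(7)
n, reps, a, b = 50, 20000, 2.0, 1.0
x = np.linspace(0, 1, n)
eps = rng.uniform(-1, 1, size=(reps, n))
y = a + b * x + eps
xc = x - x.mean()
b_hat = (y - y.mean(axis=1, keepdims=True)) @ xc / (xc @ xc)
# simulated variance vs. Var(eps) / sum((x_i - xbar)^2) with Var = 1/3
print(b_hat.var(), (1 / 3) / (xc @ xc))
```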
In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
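A minimal sketch of the dual approach on the simplest instance (consensus least squares on a ring network with quadratic local functions; all problem data are assumptions for illustration): Nesterov's accelerated gradient on the dual needs only multiplications by the graph Laplacian $W$, i.e., one round of local communication per iteration.

```python
import numpy as np

# Consensus least squares: minimize sum_i 0.5*(x_i - b_i)^2 subject to
# W x = 0 (W = graph Laplacian), which forces all x_i to agree.
rng = np.random.default_rng(8)
m = 20
A = np.zeros((m, m))
for i in range(m):
    A[i, (i + 1) % m] = A[(i + 1) % m, i] = 1.0      # ring topology
W = np.diag(A.sum(axis=1)) - A                       # graph Laplacian
b = rng.standard_normal(m)                           # local data of each agent

# dual gradient at mu is W x*(mu) with x*(mu) = b - W mu; evaluating it
# costs a single round of communication with the neighbors
L = np.linalg.eigvalsh(W)[-1] ** 2                   # Lipschitz const. of dual grad
lam = np.zeros(m)
lam_prev = lam.copy()
for t in range(1, 2000):
    mu = lam + (t - 1) / (t + 2) * (lam - lam_prev)  # Nesterov extrapolation
    x = b - W @ mu                                   # local primal minimizers
    lam_prev, lam = lam, mu + (W @ x) / L            # accelerated dual ascent step
print(x[:4], "-> consensus value", b.mean())
```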