
In this study, we develop an asymptotic theory of nonparametric regression for locally stationary functional time series. First, we introduce the notion of a locally stationary functional time series (LSFTS) that takes values in a semi-metric space. Then, we propose a nonparametric model for LSFTS with a regression function that changes smoothly over time. We establish uniform convergence rates for a class of kernel estimators, including the Nadaraya-Watson (NW) estimator of the regression function, and a central limit theorem for the NW estimator.
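As a rough illustration of the estimator studied here, the following is a minimal sketch of the classical scalar Nadaraya-Watson estimator with a Gaussian kernel. The paper's functional, locally stationary setting would replace the Euclidean distance by a semi-metric and add a second kernel in rescaled time; the function names and synthetic data below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel
    (classical scalar version, as a sketch of the estimator class)."""
    # Scaled distances between evaluation and training points.
    d = (x_eval[:, None] - x_train[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2)               # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)  # locally weighted average

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(500)
est = nadaraya_watson(x, y, np.array([0.25]), bandwidth=0.05)
```

At `x = 0.25` the true regression function equals `sin(pi/2) = 1`, so `est[0]` should be close to 1 up to the usual kernel bias and noise.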

Related Content

This paper studies higher-order inference properties of nonparametric local polynomial regression methods under random sampling. We prove Edgeworth expansions for $t$ statistics and coverage error expansions for interval estimators that (i) hold uniformly in the data generating process, (ii) allow for the uniform kernel, and (iii) cover estimation of derivatives of the regression function. The terms of the higher-order expansions, and their associated rates as a function of the sample size and bandwidth sequence, depend on the smoothness of the population regression function, the smoothness exploited by the inference procedure, and on whether the evaluation point is in the interior or on the boundary of the support. We prove that robust bias corrected confidence intervals have the fastest coverage error decay rates in all cases, and we use our results to deliver novel, inference-optimal bandwidth selectors. The main methodological results are implemented in companion \textsf{R} and \textsf{Stata} software packages.
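The local polynomial estimators analyzed in this abstract can be sketched as a kernel-weighted least squares fit. The snippet below shows the degree-one (local linear) case; it is a minimal illustration, not the robust bias-corrected procedure or the companion R/Stata packages described above.

```python
import numpy as np

def local_linear(x_train, y_train, x0, h):
    """Local linear estimate of m(x0): weighted least squares fit of a
    line in (x - x0); the intercept estimates m(x0), the slope m'(x0)."""
    z = x_train - x0
    w = np.exp(-0.5 * (z / h) ** 2)            # Gaussian kernel weights
    X = np.column_stack([np.ones_like(z), z])  # local design: [1, x - x0]
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0]

# On exactly linear data the fit recovers the line regardless of weights.
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x
m_hat = local_linear(x, y, x0=0.5, h=0.2)
```

Because the design is centered at `x0`, boundary points automatically get an asymmetric weight profile, which is one reason local polynomials behave better at the boundary than the Nadaraya-Watson (degree-zero) fit.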

Suppose we observe an infinite series of coin flips $X_1,X_2,\ldots$, and wish to sequentially test the null that these binary random variables are exchangeable. Nonnegative supermartingales (NSMs) are a workhorse of sequential inference, but we prove that they are powerless for this problem. First, utilizing a geometric concept called fork-convexity (a sequential analog of convexity), we show that any process that is an NSM under a set of distributions is also necessarily an NSM under their "fork-convex hull". Second, we demonstrate that the fork-convex hull of the exchangeable null consists of all possible laws over binary sequences; this implies that any NSM under exchangeability is necessarily nonincreasing, hence always yields a powerless test for any alternative. Since testing arbitrary deviations from exchangeability is information-theoretically impossible, we focus on Markovian alternatives. We combine ideas from universal inference and the method of mixtures to derive a "safe e-process", which is a nonnegative process with expectation at most one under the null at any stopping time, and is upper bounded by a martingale, but is not itself an NSM. This in turn yields a level $\alpha$ sequential test that is consistent; regret bounds from universal coding also demonstrate rate-optimal power. We present ways to extend these results to any finite alphabet and to Markovian alternatives of any order using a "double mixture" approach. We provide an array of simulations, and give general approaches based on betting for unstructured or ill-specified alternatives. Finally, inspired by Shafer, Vovk, and Ville, we provide game-theoretic interpretations of our e-processes and pathwise results.
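A numerical sketch of the universal-inference construction described above, under the assumption of a first-order binary Markov alternative: the numerator is a Krichevsky-Trofimov (KT) mixture over Markov transition probabilities, and the denominator is the maximum likelihood over exchangeable laws, which for a length-$t$ sequence with $k_t$ ones is $1/\binom{t}{k_t}$. This is an illustrative sketch of the idea, not the authors' implementation.

```python
import numpy as np
from math import comb, log

def safe_eprocess(x):
    """Sketch of a safe e-process for binary exchangeability: a KT
    (add-1/2) mixture over first-order Markov chains divided by the
    maximum exchangeable likelihood 1/C(t, k_t).  Returned on the log
    scale for numerical stability."""
    counts = {0: [0.5, 0.5], 1: [0.5, 0.5]}  # KT transition pseudo-counts
    log_q = log(0.5)                          # uniform prediction of x_1
    out = []
    for t in range(1, len(x)):
        prev, cur = x[t - 1], x[t]
        c = counts[prev]
        log_q += log(c[cur] / (c[0] + c[1]))  # KT predictive probability
        c[cur] += 1
        k = int(np.sum(x[: t + 1]))           # number of ones so far
        out.append(log_q + log(comb(t + 1, k)))
    return np.array(out)

# A strongly Markovian (alternating) sequence: the log e-value grows,
# giving power against this non-exchangeable alternative.
log_e = safe_eprocess([0, 1] * 100)
```

Rejecting when the e-value exceeds $1/\alpha$ gives a level-$\alpha$ sequential test by Ville's inequality applied to the dominating martingale.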

Heatmap-based methods dominate the field of human pose estimation by modelling the output distribution through likelihood heatmaps. In contrast, regression-based methods are more efficient but suffer from inferior performance. In this work, we explore maximum likelihood estimation (MLE) to develop an efficient and effective regression-based method. From the perspective of MLE, adopting different regression losses amounts to making different assumptions about the output density function. A density function closer to the true distribution leads to better regression performance. In light of this, we propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution. Concretely, RLE learns the change of the distribution instead of the unreferenced underlying distribution to facilitate the training process. With the proposed reparameterization design, our method is compatible with off-the-shelf flow models. The proposed method is effective, efficient and flexible. We show its potential in various human pose estimation tasks with comprehensive experiments. Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead. Moreover, for the first time, our regression method is superior to heatmap-based methods, especially on multi-person pose estimation. Our code is available at //github.com/Jeff-sjtu/res-loglikelihood-regression
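The abstract's key observation, that a regression loss is a negative log-likelihood under an assumed residual density (L2 for Gaussian, L1 for Laplace), can be illustrated without any pose model. The snippet below is a toy sketch of that correspondence only; it does not reproduce RLE or its flow-based density.

```python
import numpy as np

def nll(residuals, density="gaussian"):
    """Regression losses as negative log-likelihoods (up to constants):
    L2 (MSE) <-> Gaussian residuals, L1 (MAE) <-> Laplace residuals."""
    if density == "gaussian":
        return 0.5 * np.mean(residuals ** 2)
    if density == "laplace":
        return np.mean(np.abs(residuals))
    raise ValueError(density)

# Under Laplace noise, the L1/Laplace MLE of location is the median and
# the L2/Gaussian MLE is the mean; both target the true location 3.0,
# but the matched (Laplace) assumption yields the better estimator.
rng = np.random.default_rng(0)
y = 3.0 + rng.laplace(scale=1.0, size=10_000)
mean_fit, median_fit = y.mean(), np.median(y)
```

A density family closer to the true residual distribution gives a loss whose minimizer has lower variance, which is the motivation for learning the density (here fixed by assumption) with a flow.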

Let $\mathbf{x}_1, \ldots, \mathbf{x}_n$ be independent $p$-dimensional vectors with independent complex or real valued entries such that $\mathbb{E} [\mathbf{x}_i] = \mathbf{0}$, ${\rm Var } (\mathbf{x}_i) = \mathbf{I}_p$, $i=1, \ldots,n$, let $\mathbf{T }_n$ be a $p \times p$ Hermitian nonnegative definite matrix and $f$ be a given function. We prove that an appropriately standardized version of the stochastic process $ \big ( {\operatorname{tr}} ( f(\mathbf{B}_{n,t}) ) \big )_{t \in [t_0, 1]} $, corresponding to a linear spectral statistic of the sequential empirical covariance estimator $$ \big ( \mathbf{B}_{n,t} \big )_{t\in [ t_0 , 1]} = \Big ( \frac{1}{n} \sum_{i=1}^{\lfloor n t \rfloor} \mathbf{T }^{1/2}_n \mathbf{x}_i \mathbf{x}_i ^\star \mathbf{T }^{1/2}_n \Big)_{t\in [ t_0 , 1]}, $$ converges weakly to a non-standard Gaussian process as $n,p\to\infty$. As an application, we use these results to develop a novel approach for monitoring the sphericity assumption in a high-dimensional framework, even if the dimension of the underlying data is larger than the sample size.
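The statistic being studied can be computed directly from its definition. The sketch below evaluates $t \mapsto \operatorname{tr} f(\mathbf{B}_{n,t})$ on a grid (eigendecomposition per grid point); it is a naive illustration of the object, not the paper's standardization or monitoring procedure.

```python
import numpy as np

def lss_path(X, T_sqrt, f, t_grid):
    """Evaluate the linear spectral statistic t -> tr f(B_{n,t}) of the
    sequential covariance B_{n,t} = n^{-1} sum_{i<=nt} T^{1/2} x_i x_i^* T^{1/2}.
    Rows of X are the vectors x_i; T_sqrt is T_n^{1/2}."""
    n, p = X.shape
    Y = X @ T_sqrt                          # row i is (T^{1/2} x_i)^T
    out = []
    for t in t_grid:
        m = int(np.floor(n * t))
        B = (Y[:m].conj().T @ Y[:m]) / n    # note: normalized by n, not m
        out.append(np.sum(f(np.linalg.eigvalsh(B))))
    return np.array(out)

# Sanity check: for T_n = I and f the identity, tr B_{n,t} is roughly p*t.
rng = np.random.default_rng(1)
n, p = 400, 20
X = rng.standard_normal((n, p))
path = lss_path(X, np.eye(p), lambda lam: lam, t_grid=[0.5, 1.0])
```

For a sphericity monitoring scheme one would track a suitable functional of such a path (with the centering and scaling from the limit theorem) and flag departures from $\mathbf{T}_n \propto \mathbf{I}_p$.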

With the availability of high dimensional genetic biomarkers, it is of interest to identify heterogeneous effects of these predictors on patients' survival, along with proper statistical inference. Censored quantile regression has emerged as a powerful tool for detecting heterogeneous effects of covariates on survival outcomes. To our knowledge, however, little work exists on drawing inference about the effects of high dimensional predictors in censored quantile regression. This paper proposes a novel procedure to draw inference on all predictors within the framework of global censored quantile regression, which investigates covariate-response associations over an interval of quantile levels, instead of a few discrete values. The proposed estimator combines a sequence of low dimensional model estimates that are based on multi-sample splittings and variable selection. We show that, under some regularity conditions, the estimator is consistent and asymptotically follows a Gaussian process indexed by the quantile level. Simulation studies indicate that our procedure can properly quantify the uncertainty of the estimates in high dimensional settings. We apply our method to analyze the heterogeneous effects of SNPs residing in lung cancer pathways on patients' survival, using the Boston Lung Cancer Survival Cohort, a cancer epidemiology study on the molecular mechanism of lung cancer.

We consider parameter estimation in distributed networks, where each sensor in the network observes an independent sample from an underlying distribution and has $k$ bits to communicate its sample to a centralized processor which computes an estimate of a desired parameter. We develop lower bounds for the minimax risk of estimating the underlying parameter for a large class of losses and distributions. Our results show that under mild regularity conditions, the communication constraint reduces the effective sample size by a factor of $d$ when $k$ is small, where $d$ is the dimension of the estimated parameter. Furthermore, this penalty decreases at most exponentially with increasing $k$, a rate achieved for some models, e.g., estimating high-dimensional distributions. For other models, however, we show that the sample size reduction is mitigated only linearly with increasing $k$, e.g., when some sub-Gaussian structure is available. We apply our results to the distributed setting with the product Bernoulli model, the multinomial model, Gaussian location models, and logistic regression, recovering or strengthening existing results. Our approach significantly deviates from existing approaches for developing information-theoretic lower bounds for communication-efficient estimation. We circumvent the need for strong data processing inequalities used in prior work and develop a geometric approach which builds on a new representation of the communication constraint. This approach allows us to strengthen and generalize existing results with simpler and more transparent proofs.
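To make the setting concrete, here is a toy simulation of the simplest instance, the product Bernoulli model with $d = 1$: each sensor's single observed bit fits within a $k = 1$ budget, so the center recovers the sample mean and no effective-sample-size penalty arises. This is an illustrative example of the communication model, not a construction from the paper's lower-bound machinery.

```python
import numpy as np

def distributed_bernoulli_estimate(theta, n_sensors, rng):
    """One-bit protocol for the d = 1 Bernoulli model: sensor i observes
    X_i ~ Bernoulli(theta) and forwards it exactly (k = 1 bit suffices);
    the center averages the received bits."""
    bits = rng.binomial(1, theta, size=n_sensors)  # one bit per sensor
    return bits.mean()                             # centralized estimate

rng = np.random.default_rng(0)
theta_hat = distributed_bernoulli_estimate(0.3, 10_000, rng)
```

The interesting regime in the paper is $d > k$, where each sensor must compress a $d$-dimensional observation into $k$ bits and the minimax risk provably inflates, roughly as if the sample size were divided by $d$ for small $k$.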

Let $\mathbf{H}$ be the Cartesian product of a family of finite abelian groups indexed by a finite set $\Omega$. A given poset (i.e., partially ordered set) $\mathbf{P}=(\Omega,\preccurlyeq_{\mathbf{P}})$ gives rise to a poset metric on $\mathbf{H}$, which further leads to a partition $\mathcal{Q}(\mathbf{H},\mathbf{P})$ of $\mathbf{H}$. We prove that if $\mathcal{Q}(\mathbf{H},\mathbf{P})$ is Fourier-reflexive, then its dual partition $\Lambda$ coincides with the partition of $\hat{\mathbf{H}}$ induced by $\mathbf{\overline{P}}$, the dual poset of $\mathbf{P}$, and moreover, $\mathbf{P}$ is necessarily hierarchical. This result establishes a conjecture proposed by Gluesing-Luerssen in \cite{4}. We also show that under some other assumptions, $\Lambda$ is finer than the partition of $\hat{\mathbf{H}}$ induced by $\mathbf{\overline{P}}$. In addition, we give some necessary and sufficient conditions for $\mathbf{P}$ to be hierarchical, and for the case that $\mathbf{P}$ is hierarchical, we give an explicit criterion for determining whether two codewords in $\hat{\mathbf{H}}$ belong to the same block of $\Lambda$. We prove these results by relating the involved partitions with a certain family of polynomials, a generalized version of which is also proposed and studied to generalize the aforementioned results.

In this paper, we study the properties of robust nonparametric estimation using deep neural networks for regression models with heavy-tailed error distributions. We establish non-asymptotic error bounds for a class of robust nonparametric regression estimators using deep neural networks with ReLU activation under suitable smoothness conditions on the regression function and mild conditions on the error term. In particular, we only assume that the error distribution has a finite $p$-th moment for some $p$ greater than one. We also show that the deep robust regression estimators are able to circumvent the curse of dimensionality when the distribution of the predictor is supported on an approximate lower-dimensional set. An important feature of our error bound is that, for ReLU neural networks with network width and network size (number of parameters) no more than the order of the square of the dimensionality $d$ of the predictor, our excess risk bounds depend sub-linearly on $d$. Our assumption relaxes the exact manifold support assumption, which could be restrictive and unrealistic in practice. We also relax several crucial assumptions on the data distribution, the target regression function and the neural networks required in the recent literature. Our simulation studies demonstrate that the robust methods can significantly outperform the least squares method when the errors have heavy-tailed distributions, and illustrate that the choice of loss function is important in the context of deep nonparametric regression.
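The robustness mechanism discussed above comes from the loss, not the network. As a minimal sketch, the snippet below fits a linear model (standing in for the paper's ReLU networks) by gradient descent on the Huber loss under $t$-distributed errors with only a finite $p$-th moment for $p < 2$; the specific loss, model, and data are illustrative assumptions.

```python
import numpy as np

def huber_grad(r, delta):
    """Gradient of the Huber loss: linear tails bound the influence of
    heavy-tailed residuals, unlike the squared loss."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_linear_fit(X, y, delta=1.0, lr=0.1, steps=2000):
    """Gradient descent on the Huber loss for a linear model."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        r = X @ w - y
        w -= lr * (X.T @ huber_grad(r, delta)) / len(y)
    return w

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
eps = rng.standard_t(df=1.5, size=n)   # heavy tails: finite p-th moment, p < 2
y = X @ np.array([1.0, 2.0]) + eps     # true intercept 1.0, slope 2.0
w_huber = robust_linear_fit(X, y)
```

Because the errors are symmetric, the Huber estimand coincides with the true coefficients, while a least squares fit on the same data would be destabilized by the infinite-variance noise.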

Discrete Bayesian nonparametric models whose expectation is a convex linear combination of a point mass at some point of the support and a diffuse probability distribution make it possible to incorporate strong prior information while remaining extremely flexible. Recent contributions in the statistical literature have successfully implemented such a modelling strategy in a variety of applications, including density estimation, nonparametric regression and model-based clustering. We provide a thorough study of a large class of nonparametric models we call inner spike and slab hNRMI models, which are obtained by considering homogeneous normalized random measures with independent increments (hNRMI) with base measure given by a convex linear combination of a point mass and a diffuse probability distribution. In this paper we investigate the distributional properties of these models and our results include: i) the exchangeable partition probability function they induce, ii) the distribution of the number of distinct values in an exchangeable sample, iii) the posterior predictive distribution, and iv) the distribution of the number of elements that coincide with the only point of the support with positive probability. Our findings are the main building block for an actual implementation of Bayesian inner spike and slab hNRMI models by means of a generalized P\'olya urn scheme.
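The flavor of the urn scheme can be conveyed with the simplest hNRMI, a Dirichlet process, whose Blackwell-MacQueen urn is given a spike-and-slab base measure $w\,\delta_{x_0} + (1-w)\,N(0,1)$. This is a toy sketch of the modelling idea, not the paper's generalized Pólya urn for general hNRMIs.

```python
import numpy as np

def spike_slab_dp_sample(n, alpha, w, x0, rng):
    """Blackwell-MacQueen urn for a Dirichlet process whose base measure
    is w * delta_{x0} + (1 - w) * N(0, 1): fresh draws hit the spike x0
    with probability w, so the sample accumulates ties at x0."""
    values = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            # fresh draw from the spike-and-slab base measure
            new = x0 if rng.random() < w else rng.standard_normal()
            values.append(new)
        else:
            values.append(values[rng.integers(i)])  # reuse a past value
    return np.array(values)

rng = np.random.default_rng(0)
sample = spike_slab_dp_sample(5000, alpha=2.0, w=0.5, x0=0.0, rng=rng)
spike_frac = np.mean(sample == 0.0)  # mass tied at the spike (point iv)
```

The number of elements equal to `x0` in such a sample is exactly the quantity whose distribution is characterized in point iv) of the abstract.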

Discrete random structures are important tools in Bayesian nonparametrics and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among others. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
