Chen and Zhou (2021) consider an inference problem for an Ornstein-Uhlenbeck process driven by a general one-dimensional centered Gaussian process $(G_t)_{t\ge 0}$. The second-order mixed partial derivative of the covariance function $R(t,\, s)=\mathbb{E}[G_t G_s]$ can be decomposed into two parts, one of which coincides with that of fractional Brownian motion, while the other is bounded by $(ts)^{H-1}$, up to a constant factor, with $H\in (\frac12,\,1)$. In this paper, we investigate the same problem under the assumption $H\in (0,\,\frac12)$. It is well known that the Hilbert space associated with fractional Gaussian processes differs significantly between the cases $H\in (\frac12, 1)$ and $H\in (0, \frac12)$. The starting point of this paper is a new relationship between the inner product of the Hilbert space $\mathfrak{H}$ associated with the Gaussian process $(G_t)_{t\ge 0}$ and that of the Hilbert space $\mathfrak{H}_1$ associated with the fractional Brownian motion $(B^{H}_t)_{t\ge 0}$. We then prove strong consistency for $H\in (0, \frac12)$, and asymptotic normality together with Berry-Ess\'{e}en bounds for $H\in (0,\frac38)$, for both the least squares estimator and the moment estimator of the drift parameter constructed from continuous observations. The proofs involve a good many inequality estimates, and we also make use of estimates of the inner product based on the results for $\mathfrak{H}_1$ in Hu, Nualart and Zhou (2019).
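To make the drift estimation concrete, here is a minimal numerical sketch in which the driving Gaussian process is simply a fractional Brownian motion with $H\in(0,\frac12)$ (the basic case covered by the assumptions) and the drift is recovered by the moment estimator based on the stationary second moment $\mathbb{E}[X_\infty^2]=\theta^{-2H}H\Gamma(2H)$. The grid, parameters and Euler discretization are illustrative choices, not the paper's continuous-observation framework.

```python
import numpy as np
from scipy.special import gamma as Gamma

def fbm_cholesky(n, T, H, rng):
    """Exact simulation of fBm on the grid dt, 2*dt, ..., T via a Cholesky factor of
    the covariance R(t, s) = 0.5 * (t^{2H} + s^{2H} - |t - s|^{2H})."""
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

rng = np.random.default_rng(0)
theta, H, T, n = 1.0, 0.3, 100.0, 2000      # illustrative parameters, H in (0, 1/2)
dt = T / n
B = fbm_cholesky(n, T, H, rng)

# Euler scheme for dX_t = -theta * X_t dt + dB^H_t, started from X_0 = 0.
X = np.zeros(n + 1)
for i in range(n):
    X[i + 1] = X[i] - theta * X[i] * dt + (B[i + 1] - B[i])

# Moment estimator based on E[X_infinity^2] = theta^{-2H} * H * Gamma(2H),
# with the time average of X_t^2 as a proxy for the stationary second moment.
theta_hat = (H * Gamma(2 * H) / np.mean(X ** 2)) ** (1 / (2 * H))
print(f"moment estimator: {theta_hat:.3f} (true theta = {theta})")
```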
When the regression function belongs to the standard smoothness classes consisting of univariate functions whose derivatives up to the $(\gamma+1)$th order are bounded in absolute value by a common constant everywhere or a.e., it is well known that the minimax optimal rate of convergence in mean squared error (MSE) is $\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$ when $\gamma$ is finite and the sample size $n\rightarrow\infty$. From a nonasymptotic viewpoint that does not take $n$ to infinity, this paper shows that, for the standard H\"older and Sobolev classes, the minimax optimal rate is $\frac{\sigma^{2}\left(\gamma+1\right)}{n}$ ($\succsim\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$) when $\frac{n}{\sigma^{2}}\precsim\left(\gamma+1\right)^{2\gamma+3}$ and $\left(\frac{\sigma^{2}}{n}\right)^{\frac{2\gamma+2}{2\gamma+3}}$ ($\succsim\frac{\sigma^{2}\left(\gamma+1\right)}{n}$) when $\frac{n}{\sigma^{2}}\succsim\left(\gamma+1\right)^{2\gamma+3}$. To establish these results, we derive upper and lower bounds on the covering and packing numbers for the generalized H\"older class, where the absolute value of the $k$th ($k=0,...,\gamma$) derivative is bounded by a parameter $R_{k}$ and the $\gamma$th derivative is $R_{\gamma+1}$-Lipschitz (and also for the generalized ellipsoid class of smooth functions). Our bounds sharpen the classical metric entropy results for the standard classes and give the general dependence on $\gamma$ and $R_{k}$. By deriving the minimax optimal MSE rates under various well-motivated choices of $R_{k}$ for the smooth classes with the help of our new entropy bounds, we establish several results that cannot be obtained with the existing entropy bounds in the literature.
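The threshold separating the two regimes is exactly where the two rates meet; equating them (with $\precsim$ replaced by equality) gives
\[
\frac{\sigma^{2}(\gamma+1)}{n}=\Big(\frac{\sigma^{2}}{n}\Big)^{\frac{2\gamma+2}{2\gamma+3}}
\iff \gamma+1=\Big(\frac{n}{\sigma^{2}}\Big)^{\frac{1}{2\gamma+3}}
\iff \frac{n}{\sigma^{2}}=(\gamma+1)^{2\gamma+3},
\]
so the nonasymptotic rate $\frac{\sigma^{2}(\gamma+1)}{n}$ dominates below this threshold and the classical asymptotic rate dominates above it.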
Let $(Z_n)_{n\geq0}$ be a supercritical Galton-Watson process. The Lotka-Nagaev estimator $Z_{n+1}/Z_n$ is a common estimator for the offspring mean. In this paper, we establish some Cram\'{e}r moderate deviation results for the Lotka-Nagaev estimator via a martingale method. Applications to the construction of confidence intervals are also given.
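As a minimal simulation sketch, the snippet below generates a supercritical Galton-Watson process and computes the Lotka-Nagaev estimator; the Poisson offspring law and all parameter values are illustrative choices only.

```python
import numpy as np

# Simulate a supercritical Galton-Watson process with Poisson(m) offspring, m > 1,
# and compute the Lotka-Nagaev estimator Z_{n+1} / Z_n of the offspring mean m.
rng = np.random.default_rng(1)
m, n = 1.5, 20                      # true offspring mean, generation index
Z = [1]
for _ in range(n + 1):
    Z.append(int(rng.poisson(m, size=Z[-1]).sum()))

if Z[n] > 0:                        # the estimator is only defined on survival
    print(f"Lotka-Nagaev estimate: {Z[n + 1] / Z[n]:.3f} (true offspring mean = {m})")
else:
    print("process went extinct before generation n")
```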
Meta-elliptical copulas are often proposed to model dependence between the components of a random vector. They are specified by a correlation matrix and a map $g$, called the density generator. While the correlation matrix can easily be estimated from pseudo-samples of observations, the density generator is harder to estimate, especially when it does not belong to a parametric family. We give sufficient conditions to non-parametrically identify this generator. Several nonparametric estimators of $g$ are then proposed, by M-estimation, by simulation-based inference, or by an iterative procedure available in the R package ElliptCopulas. Some simulations illustrate the relevance of the latter method.
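For context, here is a small sketch of the "easy" first step mentioned above, namely estimating the copula correlation from pseudo-observations; it uses a bivariate Gaussian copula (a meta-elliptical copula whose generator is Gaussian), and the correlation value and margins are illustrative. The nonparametric estimation of $g$ itself is what the paper and the ElliptCopulas package address.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
rho_true, n = 0.6, 2000
cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=n)
X = np.exp(X)                                  # arbitrary monotone margins; the copula is unchanged

# Pseudo-observations (normalized ranks), which depend on the data only through the copula.
U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1) / (n + 1)

# For elliptical copulas, Kendall's tau pins down the correlation: rho = sin(pi * tau / 2).
tau, _ = kendalltau(U[:, 0], U[:, 1])
rho_hat = np.sin(np.pi * tau / 2)
print(f"estimated copula correlation: {rho_hat:.3f} (true value: {rho_true})")
```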
We prove a central limit theorem for the empirical optimal transport cost, $\sqrt{\frac{nm}{n+m}}\{\mathcal{T}_c(P_n,Q_m)-\mathcal{T}_c(P,Q)\}$, in the semi-discrete case, i.e., when the distribution $P$ is supported on $N$ points, but without assumptions on $Q$. We show that the asymptotic distribution is the supremum of a centered Gaussian process, which is Gaussian under some additional conditions on the probability $Q$ and on the cost. Such results imply the central limit theorem for the $p$-Wasserstein distance, for $p\geq 1$. This means that, for fixed $N$, the curse of dimensionality is avoided. To better understand the influence of $N$, we provide bounds on $E|\mathcal{W}_1(P,Q_m)-\mathcal{W}_1(P,Q)|$ depending on $m$ and $N$. Finally, the semi-discrete framework provides a control on the second derivative of the dual formulation, which yields the first central limit theorem for the optimal transport potentials. The results are supported by simulations that help to visualize the given limits and bounds. We also analyse the cases where the classical bootstrap works.
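The empirical quantity appearing in the limit theorem can be computed directly; the sketch below evaluates the optimal transport cost between a fixed $N$-point measure $P$ and an empirical measure $Q_m$ by solving the transport linear program, with squared Euclidean cost and a Gaussian $Q$ as illustrative choices (not the paper's general setting).

```python
import numpy as np
from scipy.optimize import linprog

def ot_cost(p, x, q, y, cost=lambda a, b: np.sum((a - b) ** 2, axis=-1)):
    """Discrete optimal transport cost T_c between (p on points x) and (q on points y),
    solved as a linear program over transport plans."""
    N, m = len(p), len(q)
    C = cost(x[:, None, :], y[None, :, :]).ravel()        # flattened N*m cost vector
    # Marginal constraints: rows of the plan sum to p, columns sum to q.
    A_eq = np.zeros((N + m, N * m))
    for i in range(N):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[N + j, j::m] = 1.0
    b_eq = np.concatenate([p, q])
    res = linprog(C, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# Semi-discrete illustration: P is supported on N points, Q_m is an empirical measure.
rng = np.random.default_rng(3)
N, m, d = 5, 400, 2
x = rng.normal(size=(N, d))
p = np.full(N, 1.0 / N)
y = rng.normal(size=(m, d))                 # sample from Q (here a standard Gaussian)
q = np.full(m, 1.0 / m)
print(f"empirical OT cost T_c(P, Q_m): {ot_cost(p, x, q, y):.4f}")
```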
Assume that we observe i.i.d.~points lying close to some unknown $d$-dimensional $\mathcal{C}^k$ submanifold $M$ in a possibly high-dimensional space. We study the problem of reconstructing the probability distribution generating the sample. After remarking that this problem is degenerate for a large class of standard losses ($L_p$, Hellinger, total variation, etc.), we focus on the Wasserstein loss, for which we build an estimator, based on kernel density estimation, whose rate of convergence depends on $d$ and the regularity $s\leq k-1$ of the underlying density, but not on the ambient dimension. In particular, we show that the estimator is minimax and matches previous rates in the literature in the case where the manifold $M$ is a $d$-dimensional cube. The related problem of the estimation of the volume measure of $M$ for the Wasserstein loss is also considered, for which a minimax estimator is exhibited.
The estimation of parameter standard errors for semi-variogram models is challenging, given the two-step process required to fit a parametric model to spatially correlated data. Motivated by an application in social epidemiology, we focus on exponential semi-variogram models fitted to data sets of 500 to 2000 observations with little control over the sampling design. Previously proposed methods for the estimation of standard errors cannot be applied in this context. Approximate closed-form solutions based on generalized least squares are too costly in terms of memory. The generalized bootstrap proposed by Olea and Pardo-Ig\'uzquiza is nonetheless applicable with weighted instead of generalized least squares; however, the resulting standard error estimates are severely biased and imprecise. We therefore propose a filtering method added to the generalized bootstrap. The new development is presented and evaluated in a simulation study, which shows that the generalized bootstrap with check-based filtering leads to substantially improved results compared to the quantile-based filtering method and previously developed approaches. We provide a case study using birthweight data.
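The first step of the two-step process above (fitting an exponential semi-variogram by weighted least squares) can be sketched as follows; the simulated field, the distance bins and the weights are illustrative choices, and the printed covariance-based standard errors are exactly the naive quantities whose shortcomings motivate the bootstrap approach of the paper.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist, squareform

def exp_semivariogram(h, nugget, psill, rang):
    """Exponential model: gamma(h) = nugget + psill * (1 - exp(-h / range))."""
    return nugget + psill * (1.0 - np.exp(-h / rang))

# Simulate a Gaussian random field with exponential covariance (illustrative parameters,
# sample size in the 500-2000 range mentioned above).
rng = np.random.default_rng(4)
n = 500
coords = rng.uniform(0, 10, size=(n, 2))
true_nugget, true_psill, true_range = 0.1, 1.0, 2.0
D = squareform(pdist(coords))
C = true_psill * np.exp(-D / true_range) + true_nugget * np.eye(n)
z = np.linalg.cholesky(C + 1e-10 * np.eye(n)) @ rng.standard_normal(n)

# Empirical semivariogram: bin half squared differences by pairwise distance.
h = pdist(coords)
g = 0.5 * pdist(z[:, None], metric="sqeuclidean")
bins = np.linspace(0, 5, 16)
idx = np.digitize(h, bins)
h_mid, gamma_emp, counts = [], [], []
for k in range(1, len(bins)):
    mask = idx == k
    if mask.any():
        h_mid.append(h[mask].mean())
        gamma_emp.append(g[mask].mean())
        counts.append(mask.sum())
h_mid, gamma_emp, counts = map(np.array, (h_mid, gamma_emp, counts))

# Weighted least squares fit, with weights proportional to the number of pairs per bin.
popt, pcov = curve_fit(exp_semivariogram, h_mid, gamma_emp,
                       p0=[0.1, 1.0, 1.0], sigma=1.0 / np.sqrt(counts))
print("fitted (nugget, partial sill, range):", np.round(popt, 3))
# Naive standard errors from the WLS covariance; these ignore the spatial dependence
# between bins, which is why resampling-based corrections are needed in practice.
print("naive WLS standard errors:", np.round(np.sqrt(np.diag(pcov)), 3))
```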
The Lee-Carter model, a traditional and widely adopted mortality rate projection technique, represents the log mortality rate as a simple bilinear form $\log(m_{x,t})=a_x+b_xk_t$. The model has been studied extensively over the past 30 years; however, little attention has been paid to its performance in the presence of outliers, particularly for the estimation of $b_x$. In this paper, we propose a robust estimation method for the Lee-Carter model by formulating it as a probabilistic principal component analysis (PPCA) with multivariate $t$-distributions, together with an efficient expectation-maximization (EM) algorithm for its implementation. The advantages of the method are threefold: it yields significantly more robust estimates of both $b_x$ and $k_t$, it preserves the fundamental interpretation of $b_x$ as the first principal component, as in the traditional approach, and it can flexibly be integrated into other existing time series models for $k_t$. The parameter uncertainties are examined by adopting a standard residual bootstrap. A simulation study based on the Human Mortality Database shows the superior performance of the proposed model compared to other conventional approaches.
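For reference, the classical (non-robust) SVD-based Lee-Carter fit, which the robust PPCA formulation above is designed to replace, can be sketched as follows; the synthetic age-by-year data are illustrative, whereas real applications would use, e.g., Human Mortality Database rates.

```python
import numpy as np

def lee_carter_svd(log_m):
    """Classical Lee-Carter fit via SVD: log m_{x,t} ~ a_x + b_x * k_t,
    with the usual identification constraints sum(b_x) = 1 and sum(k_t) = 0."""
    a = log_m.mean(axis=1)                       # a_x: age-specific average level
    U, S, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0], S[0] * Vt[0]
    b_sum = b.sum()
    return a, b / b_sum, k * b_sum               # rescale so that sum(b) = 1

# Illustrative synthetic data (ages x years).
rng = np.random.default_rng(5)
ages, years = 40, 30
a_true = np.linspace(-7, -2, ages)
b_true = np.full(ages, 1 / ages)
k_true = np.linspace(5, -5, years)
log_m = a_true[:, None] + np.outer(b_true, k_true) + 0.05 * rng.standard_normal((ages, years))

a_hat, b_hat, k_hat = lee_carter_svd(log_m)
print("sum(b_hat) =", round(b_hat.sum(), 3),
      " corr(k_hat, k_true) =", round(np.corrcoef(k_hat, k_true)[0, 1], 3))
```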
An intensive line of research on the fixed parameter tractability of integer programming is focused on exploiting the relation between the sparsity of a constraint matrix $A$ and the norm of the elements of its Graver basis. In particular, integer programming is fixed parameter tractable when parameterized by the primal tree-depth and the entry complexity of $A$, and when parameterized by the dual tree-depth and the entry complexity of $A$; both of these parameterizations imply that $A$ is sparse, in particular, that the number of its non-zero entries is linear in the number of columns or rows, respectively. We study preconditioners transforming a given matrix into an equivalent sparse matrix, when one exists, and provide structural results characterizing the existence of a sparse equivalent matrix in terms of the structural properties of the associated column matroid. In particular, our results imply that the $\ell_1$-norm of the Graver basis is bounded by a function of the maximum $\ell_1$-norm of a circuit of $A$. We use our results to design a parameterized algorithm that constructs a matrix equivalent to an input matrix $A$ that has small primal/dual tree-depth and entry complexity, if such an equivalent matrix exists. Our results yield parameterized algorithms for integer programming when parameterized by the $\ell_1$-norm of the Graver basis of the constraint matrix, by the $\ell_1$-norm of the circuits of the constraint matrix, by the smallest primal tree-depth and entry complexity of a matrix equivalent to the constraint matrix, and by the smallest dual tree-depth and entry complexity of a matrix equivalent to the constraint matrix.
We investigate the complexity of explicit construction problems, where the goal is to produce a particular object of size $n$ possessing some pseudorandom property in time polynomial in $n$. We give overwhelming evidence that $\bf{APEPP}$, defined originally by Kleinberg et al., is the natural complexity class associated with explicit constructions of objects whose existence follows from the probabilistic method, by placing a variety of such construction problems in this class. We then demonstrate that a result of Je\v{r}\'{a}bek on provability in Bounded Arithmetic, when reinterpreted as a reduction between search problems, shows that constructing a truth table of high circuit complexity is complete for $\bf{APEPP}$ under $\bf{P}^{\bf{NP}}$ reductions. This illustrates that Shannon's classical proof of the existence of hard boolean functions is in fact a $\textit{universal}$ probabilistic existence argument: derandomizing his proof implies a generic derandomization of the probabilistic method. As a corollary, we prove that $\bf{EXP}^{\bf{NP}}$ contains a language of circuit complexity $2^{n^{\Omega(1)}}$ if and only if it contains a language of circuit complexity $\frac{2^n}{2n}$. Finally, for several of the problems shown to lie in $\bf{APEPP}$, we demonstrate direct polynomial time reductions to the explicit construction of hard truth tables.
Off-Policy Evaluation (OPE) serves as one of the cornerstones of Reinforcement Learning (RL). Fitted Q Evaluation (FQE) with various function approximators, especially deep neural networks, has gained practical success. While statistical analysis has proved FQE to be minimax-optimal with tabular, linear and several nonparametric function families, its practical performance with more general function approximators is less theoretically understood. We focus on FQE with general differentiable function approximators, making our theory applicable to neural function approximations. We approach this problem using the Z-estimation theory and establish the following results: the FQE estimation error is asymptotically normal with an explicit variance determined jointly by the tangent space of the function class at the ground truth, the reward structure, and the distribution shift due to off-policy learning; the finite-sample FQE error bound is dominated by the same variance term, and it can also be bounded by a function class-dependent divergence, which measures how the off-policy distribution shift intertwines with the function approximator. In addition, we study bootstrapping FQE estimators for error distribution inference and confidence interval estimation, accompanied by a Cram\'er-Rao lower bound that matches our upper bounds. The Z-estimation analysis provides a generalizable theoretical framework for studying off-policy estimation in RL and provides sharp statistical theory for FQE with differentiable function approximators.
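A minimal sketch of FQE with a (linear) differentiable function approximator is given below: off-policy transitions from a toy two-state MDP are regressed repeatedly onto bootstrapped Bellman targets under the target policy. The MDP, policies and one-hot features are illustrative choices, not the paper's setting, and the exact $Q^{\pi}$ is computed for comparison.

```python
import numpy as np

rng = np.random.default_rng(6)
n_states, n_actions, gamma = 2, 2, 0.9

# Tabular one-hot features of (s, a); a linear Q is then exact, which keeps the sketch honest.
def phi(s, a):
    f = np.zeros(n_states * n_actions)
    f[s * n_actions + a] = 1.0
    return f

P = np.array([[[0.9, 0.1], [0.2, 0.8]],      # P[s, a] = next-state distribution
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])       # R[s, a]
target_policy = np.array([0, 1])             # deterministic target policy pi(s)

# Off-policy data from a uniform-random behavior policy.
n_samples = 5000
S = rng.integers(n_states, size=n_samples)
A = rng.integers(n_actions, size=n_samples)
S2 = np.array([rng.choice(n_states, p=P[s, a]) for s, a in zip(S, A)])
Rew = R[S, A] + 0.1 * rng.standard_normal(n_samples)

Phi = np.stack([phi(s, a) for s, a in zip(S, A)])
Phi2 = np.stack([phi(s2, target_policy[s2]) for s2 in S2])

# Fitted Q Evaluation: repeated least-squares regression onto bootstrapped targets.
w = np.zeros(n_states * n_actions)
for _ in range(200):
    targets = Rew + gamma * Phi2 @ w
    w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
print("estimated Q^pi(s, a):\n", np.round(w.reshape(n_states, n_actions), 3))

# Exact Q^pi for comparison, from the Bellman evaluation equations.
R_pi = R[np.arange(n_states), target_policy]
P_pi = P[np.arange(n_states), target_policy]
V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
print("exact Q^pi(s, a):\n", np.round(R + gamma * P @ V, 3))
```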