
We say that a continuous real-valued function $x$ admits the Hurst roughness exponent $H$ if the $p^{\text{th}}$ variation of $x$ converges to zero if $p>1/H$ and to infinity if $p<1/H$. For the sample paths of many stochastic processes, such as fractional Brownian motion, the Hurst roughness exponent exists and equals the standard Hurst parameter. In our main result, we provide a mild condition on the Faber--Schauder coefficients of $x$ under which the Hurst roughness exponent exists and is given as the limit of the classical Gladyshev estimates $\widehat H_n(x)$. This result can be viewed as a strong consistency result for the Gladyshev estimators in an entirely model-free setting, because no assumption whatsoever is made on the possible dynamics of the function $x$. Nonetheless, our proof is probabilistic and relies on a martingale that is hidden in the Faber--Schauder expansion of $x$. Since the Gladyshev estimators are not scale-invariant, we construct several scale-invariant estimators that are derived from the sequence $(\widehat H_n)_{n\in\mathbb{N}}$. We also discuss how a dynamic change in the Hurst roughness parameter of a time series can be detected. Our results are illustrated by means of high-frequency financial time series.
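
For concreteness, the following is a minimal numerical sketch of a Gladyshev-type estimate, assuming the classical definition based on dyadic quadratic variation, $\widehat H_n(x)=\frac12\bigl(1-\frac1n\log_2\sum_{k=1}^{2^n}(x(k2^{-n})-x((k-1)2^{-n}))^2\bigr)$; the paper's precise normalization may differ, and standard Brownian motion is used here only as an easy-to-simulate stand-in for a rough path.

```python
# A minimal sketch of the classical Gladyshev-type estimate
# hat H_n(x) = (1 - (1/n) * log2( sum_k (x(k 2^-n) - x((k-1) 2^-n))^2 )) / 2,
# computed on dyadic grids; the paper's exact normalization may differ.
import numpy as np

def gladyshev_estimates(x):
    """x: samples of a path on the dyadic grid k*2^-N, k = 0..2^N (len(x) = 2^N + 1).
    Returns the sequence (hat H_n)_{n=1..N}."""
    N = int(np.log2(len(x) - 1))
    assert len(x) == 2 ** N + 1, "need 2^N + 1 equally spaced samples"
    estimates = []
    for n in range(1, N + 1):
        step = 2 ** (N - n)                      # subsample down to the level-n grid
        increments = np.diff(x[::step])
        qv = np.sum(increments ** 2)             # dyadic quadratic variation at level n
        estimates.append(0.5 * (1.0 - np.log2(qv) / n))
    return np.array(estimates)

# Illustration on standard Brownian motion (Hurst parameter 1/2).
rng = np.random.default_rng(0)
N = 16
dx = rng.normal(scale=2 ** (-N / 2), size=2 ** N)    # increments with variance 2^-N
x = np.concatenate([[0.0], np.cumsum(dx)])
print(gladyshev_estimates(x)[-5:])                   # should hover around 0.5
```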

Related content

The saddlepoint approximation gives an approximation to the density of a random variable in terms of its moment generating function. When the underlying random variable is itself the sum of $n$ unobserved i.i.d. terms, the basic classical result is that the relative error in the density is of order $1/n$. If instead the approximation is interpreted as a likelihood and maximised as a function of model parameters, the result is an approximation to the maximum likelihood estimate (MLE) that can be much faster to compute than the true MLE. This paper proves the analogous basic result for the approximation error between the saddlepoint MLE and the true MLE: subject to certain explicit identifiability conditions, the error has asymptotic size $O(1/n^2)$ for some parameters, and $O(1/n^{3/2})$ or $O(1/n)$ for others. In all three cases, the approximation errors are asymptotically negligible compared to the inferential uncertainty. The proof is based on a factorisation of the saddlepoint likelihood into an exact and approximate term, along with an analysis of the approximation error in the gradient of the log-likelihood. This factorisation also gives insight into alternatives to the saddlepoint approximation, including a new and simpler saddlepoint approximation, for which we derive analogous error bounds. As a corollary of our results, we also obtain the asymptotic size of the MLE approximation error when the saddlepoint approximation is replaced by the normal approximation.
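
To fix ideas, here is a minimal sketch of the saddlepoint density approximation for a sum $S_n=X_1+\dots+X_n$ of i.i.d. terms: with cumulant generating function $K$, the approximation at $x$ is $\exp(K(s^\ast)-s^\ast x)/\sqrt{2\pi K''(s^\ast)}$, where $s^\ast$ solves $K'(s^\ast)=x$. The example uses exponential terms purely because their $K$ and the exact (Gamma) density are available in closed form; the paper's saddlepoint-MLE machinery is not reproduced.

```python
# A minimal sketch of the saddlepoint density approximation for S_n = X_1 + ... + X_n:
# f_hat(x) = exp(K(s*) - s*'x) / sqrt(2*pi*K''(s*)),  where K'(s*) = x and K = n*K_1.
# Illustrated with i.i.d. Exponential(1) terms so the exact Gamma density is available
# for comparison; none of this reproduces the paper's saddlepoint-MLE analysis.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

n = 10                                   # number of unobserved i.i.d. terms

def K(s):   return -n * np.log1p(-s)     # CGF of the sum of n Exp(1) variables, s < 1
def dK(s):  return n / (1.0 - s)
def d2K(s): return n / (1.0 - s) ** 2

def saddlepoint_density(x):
    s_star = brentq(lambda s: dK(s) - x, -50.0, 1.0 - 1e-10)   # solve K'(s) = x
    return np.exp(K(s_star) - s_star * x) / np.sqrt(2 * np.pi * d2K(s_star))

for x in (5.0, 10.0, 20.0):
    print(x, saddlepoint_density(x), gamma.pdf(x, a=n))        # relative error is O(1/n)
```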

We study the off-policy evaluation (OPE) problem in an infinite-horizon Markov decision process with continuous states and actions. We recast the $Q$-function estimation into a special form of the nonparametric instrumental variables (NPIV) estimation problem. We first show that under a mild condition the NPIV formulation of $Q$-function estimation is well-posed in the sense of $L^2$-measure of ill-posedness with respect to the data generating distribution, bypassing a strong assumption on the discount factor $\gamma$ imposed in the recent literature for obtaining the $L^2$ convergence rates of various $Q$-function estimators. Thanks to this new well-posedness property, we derive the first minimax lower bounds for the convergence rates of nonparametric estimation of the $Q$-function and its derivatives in both sup-norm and $L^2$-norm, which are shown to be the same as those for classical nonparametric regression (Stone, 1982). We then propose a sieve two-stage least squares estimator and establish its rate-optimality in both norms under some mild conditions. Our general results on the well-posedness and the minimax lower bounds are of independent interest for studying not only other nonparametric estimators of the $Q$-function but also efficient estimation of the value of any target policy in off-policy settings.
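
To illustrate the NPIV view of $Q$-function estimation, the sketch below carries out a schematic sieve two-stage least squares computation under a linear sieve $Q(s,a)\approx\psi(s,a)^\top\beta$, using the Bellman moment condition $E[R+\gamma Q(S',\pi(S'))-Q(S,A)\,|\,S,A]=0$ with instruments $b(S,A)$. The basis functions `psi` and `b`, the toy target policy, and the synthetic data are placeholders, not the paper's choices or tuning.

```python
# A schematic sieve-2SLS estimator for the Q-function under a linear sieve
# Q(s,a) ~ psi(s,a) @ beta.  The Bellman equation gives the conditional moment
# E[ R + gamma*Q(S', pi(S')) - Q(S, A) | S, A ] = 0, an NPIV problem with
# "regressor" psi(S,A) - gamma*psi(S', pi(S')) and instruments b(S,A).
# psi, b and the target policy below are illustrative placeholders.
import numpy as np

def sieve_2sls_q(S, A, R, S_next, target_policy, psi, b, gamma=0.9):
    """S, A, R, S_next: arrays of transitions; returns the sieve coefficient beta."""
    A_next = target_policy(S_next)                        # actions of the target policy
    G = psi(S, A) - gamma * psi(S_next, A_next)           # n x k "regressor" matrix
    B = b(S, A)                                           # n x m instrument matrix
    P_B = B @ np.linalg.pinv(B.T @ B) @ B.T               # projection onto instruments
    return np.linalg.solve(G.T @ P_B @ G, G.T @ P_B @ R)  # 2SLS solution

def psi(S, A):
    # Example sieve: low-order polynomial features of (s, a); purely illustrative.
    return np.column_stack([np.ones_like(S), S, A, S * A, S ** 2, A ** 2])

b = psi                                                   # instruments = same basis here

# Toy usage with synthetic scalar states/actions and a fixed target policy a = 0.5.
rng = np.random.default_rng(4)
S = rng.normal(size=500); A = rng.uniform(-1, 1, size=500)
S_next = 0.8 * S + 0.1 * A + 0.1 * rng.normal(size=500)
R = S ** 2 + rng.normal(scale=0.1, size=500)
beta_hat = sieve_2sls_q(S, A, R, S_next, lambda s: np.full_like(s, 0.5), psi, b)
print(beta_hat)
```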

We propose two new statistics, V and S, to disentangle the population history of related populations from SNP frequency data. If the populations are related by a tree, we show by theoretical means as well as by simulation that the new statistics are able to identify the root of the tree correctly, in contrast to standard statistics, such as the observed matrix of F2-statistics (distances between pairs of populations). The statistic V is obtained by averaging over all SNPs (similar to standard statistics). Its expectation is the true covariance matrix of the observed population SNP frequencies, offset by a matrix with identical entries. In contrast, the statistic S is placed in a Bayesian context and is obtained by averaging over pairs of SNPs, such that each SNP is only used once. It thus makes use of the joint distribution of pairs of SNPs. In addition, we provide a number of novel mathematical results about old and new statistics, and their mutual relationship.
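
As a point of reference, the observed matrix of F2-statistics mentioned above can be computed from SNP frequency data as in the naive sketch below, which averages $(p_A-p_B)^2$ over SNPs; the finite-sample bias corrections that are usually applied, as well as the V and S statistics themselves, are deliberately omitted.

```python
# A naive computation of the observed F2 matrix from SNP frequency data:
# F2(A, B) is estimated by averaging (p_A - p_B)^2 over SNPs.  Finite-sample bias
# corrections and the paper's V and S statistics are deliberately omitted.
import numpy as np

def f2_matrix(freqs):
    """freqs: array of shape (num_populations, num_snps) with allele frequencies."""
    diff = freqs[:, None, :] - freqs[None, :, :]     # pairwise frequency differences
    return np.mean(diff ** 2, axis=-1)               # average over SNPs

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, size=(4, 10_000))        # toy frequencies for 4 populations
print(np.round(f2_matrix(p), 3))
```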

Estimating the mixing density of a mixture distribution remains an interesting problem in the statistics literature. Using a stochastic approximation method, Newton and Zhang (1999) introduced a fast recursive algorithm for estimating the mixing density of a mixture. Under suitably chosen weights, the stochastic approximation estimator converges to the true solution. In Tokdar et al. (2009), the consistency of this recursive estimation method was established. However, the proof of consistency of the resulting estimator used independence among observations as an assumption. Here, we extend the investigation of the performance of Newton's algorithm to several dependent scenarios. We first prove that the original algorithm, under certain conditions, remains consistent when the observations arise from a weakly dependent process whose fixed marginal density is the target mixture. For some of the common dependence structures where the original algorithm is no longer consistent, we provide a modification of the algorithm that generates a consistent estimator.
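
The recursive algorithm of Newton and Zhang can be sketched on a grid: given weights $w_i$, the estimate is updated as $f_i(\theta)=(1-w_i)f_{i-1}(\theta)+w_i\,p(x_i\mid\theta)f_{i-1}(\theta)/\int p(x_i\mid u)f_{i-1}(u)\,du$. The sketch below uses a normal location mixture with i.i.d. data and the common weight choice $w_i=1/(i+1)$; the dependent-data modification discussed above is not shown.

```python
# A minimal sketch of Newton's recursive estimator of a mixing density on a grid:
# f_i = (1 - w_i) * f_{i-1} + w_i * lik * f_{i-1} / normalizer.
# Shown for a normal location mixture with i.i.d. data and weights w_i = 1/(i + 1);
# the dependent-data modification discussed above is not reproduced.
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, grid, sigma=1.0):
    f = np.ones_like(grid) / (grid[-1] - grid[0])            # uniform initial guess
    dtheta = grid[1] - grid[0]
    for i, xi in enumerate(x, start=1):
        w = 1.0 / (i + 1.0)
        lik = norm.pdf(xi, loc=grid, scale=sigma)            # p(x_i | theta) on the grid
        post = lik * f
        post /= np.sum(post) * dtheta                        # normalize the update term
        f = (1.0 - w) * f + w * post
    return f

# Data from the mixture 0.5*N(-2,1) + 0.5*N(2,1); the mixing density has atoms at +/-2.
rng = np.random.default_rng(2)
theta = rng.choice([-2.0, 2.0], size=2000)
x = theta + rng.normal(size=2000)
grid = np.linspace(-6, 6, 241)
f_hat = predictive_recursion(x, grid)
print(grid[np.argmax(f_hat)])                                # mass concentrates near +/-2
```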

Structured matrix-variate observations routinely arise in diverse fields such as multi-layer network analysis and brain image clustering. While data of this type have been extensively investigated and fruitful results have been obtained, fundamental questions such as statistical optimality and computational limits remain largely under-explored. In this paper, we propose a low-rank Gaussian mixture model (LrMM) assuming each matrix-valued observation has a planted low-rank structure. Minimax lower bounds for estimating the underlying low-rank matrix are established, allowing for a whole range of sample sizes and signal strengths. Under a minimal condition on signal strength, referred to as the information-theoretic limit or statistical limit, we prove the minimax optimality of a maximum likelihood estimator which, in general, is computationally infeasible. If the signal is stronger than a certain threshold, called the computational limit, we design a computationally fast estimator based on spectral aggregation and demonstrate its minimax optimality. Moreover, when the signal strength is smaller than the computational limit, we provide evidence, based on the low-degree likelihood ratio framework, that no polynomial-time algorithm can consistently recover the underlying low-rank matrix. Our results reveal multiple phase transitions in the minimax error rates and the statistical-to-computational gap. Numerical experiments confirm our theoretical findings. We further showcase the merit of our spectral aggregation method on the worldwide food trading dataset.
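
One way to picture a spectral-aggregation-style step, sketched below under our own simplifying assumptions rather than as the paper's estimator: estimate common column and row subspaces from the aggregated second-moment matrices of the observations, project each matrix onto those subspaces, and cluster the low-dimensional projections.

```python
# A hedged sketch of a spectral-aggregation-style procedure for matrix observations
# with planted low-rank structure: estimate common column/row subspaces from aggregated
# second moments, project every observation onto them, then cluster the projections.
# This illustrates the general idea only; it is not the paper's exact estimator.
import numpy as np
from sklearn.cluster import KMeans

def spectral_aggregation_cluster(X, rank, n_clusters):
    """X: array of shape (n, p, q) of matrix-valued observations."""
    left = sum(Xi @ Xi.T for Xi in X)                   # aggregated p x p second moment
    right = sum(Xi.T @ Xi for Xi in X)                  # aggregated q x q second moment
    U = np.linalg.eigh(left)[1][:, -rank:]              # top eigenvectors (column space)
    V = np.linalg.eigh(right)[1][:, -rank:]             # top eigenvectors (row space)
    Z = np.stack([(U.T @ Xi @ V).ravel() for Xi in X])  # low-dimensional projections
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
    return labels, U, V

# Toy usage: two clusters of 20x20 matrices sharing a rank-1 structure plus noise.
rng = np.random.default_rng(5)
u, v = rng.normal(size=20), rng.normal(size=20)
means = [3 * np.outer(u, v), -3 * np.outer(u, v)]
X = np.stack([means[i % 2] + rng.normal(size=(20, 20)) for i in range(100)])
labels, _, _ = spectral_aggregation_cluster(X, rank=1, n_clusters=2)
print(np.bincount(labels))                              # roughly 50 / 50
```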

Continuous determinantal point processes (DPPs) are a class of repulsive point processes on $\mathbb{R}^d$ with many statistical applications. Although an explicit expression of their density is known, it is too complicated to be used directly for maximum likelihood estimation. In the stationary case, an approximation using Fourier series has been suggested, but it is limited to rectangular observation windows and no theoretical results support it. In this contribution, we investigate a different way to approximate the likelihood by looking at its asymptotic behaviour when the observation window grows towards $\mathbb{R}^d$. This new approximation is not limited to rectangular windows, is faster to compute than the previous one, does not require any tuning parameter, and comes with theoretical justifications. It moreover provides an explicit formula for estimating the asymptotic variance of the associated estimator. The performance is assessed in a simulation study on standard parametric models on $\mathbb{R}^d$ and compares favourably to common alternative estimation methods for continuous DPPs.

We exhibit the analog of the entropy map for multivariate Gaussian distributions on local fields. As in the real case, the image of this map lies in the supermodular cone and it determines the distribution of the valuation vector. In general, this map can be defined for non-archimedean valued fields whose valuation group is an additive subgroup of the real line, and it remains supermodular. We also explicitly compute the image of this map in dimension 3.

This paper concerns the verification of continuous-time polynomial spline trajectories against linear temporal logic specifications (LTL without 'next'). Each atomic proposition is assumed to represent a state space region described by a multivariate polynomial inequality. The proposed approach samples a trajectory strategically, to capture every one of its region transitions. This yields a discrete word called a trace, which is amenable to established formal methods for path checking. The original continuous-time trajectory is shown to satisfy the specification if and only if its trace does. General topological conditions on the sample points are derived that ensure a trace is recorded for arbitrary continuous paths, given arbitrary region descriptions. Using techniques from computer algebra, a trace generation algorithm is developed to satisfy these conditions when the path and region boundaries are defined by polynomials. The proposed PolyTrace algorithm has polynomial complexity in the number of atomic propositions, and is guaranteed to produce a trace of any polynomial path. Its performance is demonstrated via numerical examples and a case study from robotics.
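
The core sampling idea can be illustrated in one dimension: compose each region polynomial with the path polynomial, find the real roots of the composition on the time interval, and evaluate the region memberships between consecutive roots to obtain the trace. The sketch below uses numpy's numerical root finder in place of the exact computer-algebra machinery of PolyTrace, so it only approximates the algorithm's guarantees.

```python
# A simplified, numerical illustration of trace generation for a polynomial path x(t)
# and regions {g_j(x) >= 0}: compose g_j with x, locate the real roots of each
# composition on [t0, t1], and read off the region memberships between consecutive
# roots.  PolyTrace itself uses exact computer algebra; numpy roots are a stand-in.
import numpy as np
from numpy.polynomial import Polynomial

def compose(outer, inner):
    """Evaluate outer(inner(t)) as a Polynomial via Horner's scheme."""
    result = Polynomial([0.0])
    for c in outer.coef[::-1]:
        result = result * inner + c
    return result

def trace(path, regions, t0=0.0, t1=1.0, tol=1e-9):
    """path: Polynomial in t; regions: list of Polynomials g_j in x. Returns the trace."""
    roots = [t0, t1]
    for g in regions:
        r = compose(g, path).roots()
        roots += [z.real for z in r if abs(z.imag) < tol and t0 < z.real < t1]
    roots = sorted(set(roots))
    letters, prev = [], None
    for a, b in zip(roots[:-1], roots[1:]):
        t_mid = 0.5 * (a + b)
        letter = tuple(bool(g(path(t_mid)) >= 0) for g in regions)  # atomic propositions
        if letter != prev:                                          # record transitions only
            letters.append(letter)
            prev = letter
    return letters

# Path x(t) = 4t(1-t), which enters and leaves the region {x >= 1/2}.
print(trace(Polynomial([0.0, 4.0, -4.0]), [Polynomial([-0.5, 1.0])]))
```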

In the standard Gaussian linear measurement model $Y=X\mu_0+\xi \in \mathbb{R}^m$ with a fixed noise level $\sigma>0$, we consider the problem of estimating the unknown signal $\mu_0$ under a convex constraint $\mu_0 \in K$, where $K$ is a closed convex set in $\mathbb{R}^n$. We show that the risk of the natural convex constrained least squares estimator (LSE) $\hat{\mu}(\sigma)$ can be characterized exactly in high dimensional limits, by that of the convex constrained LSE $\hat{\mu}_K^{\mathsf{seq}}$ in the corresponding Gaussian sequence model at a different noise level. The characterization holds (uniformly) for risks in the maximal regime that ranges from constant order all the way down to essentially the parametric rate, as long as a certain necessary non-degeneracy condition is satisfied for $\hat{\mu}(\sigma)$. The precise risk characterization reveals a fundamental difference between noiseless (or low noise limit) and noisy linear inverse problems in terms of the sample complexity for signal recovery. A concrete example is given by the isotonic regression problem: While exact recovery of a general monotone signal requires $m\gg n^{1/3}$ samples in the noiseless setting, consistent signal recovery in the noisy setting requires as few as $m\gg \log n$ samples. Such a discrepancy occurs when the low and high noise risk behavior of $\hat{\mu}_K^{\mathsf{seq}}$ differ significantly. In statistical language, this occurs when $\hat{\mu}_K^{\mathsf{seq}}$ estimates $0$ at a faster `adaptation rate' than the slower `worst-case rate' for general signals. Several other examples, including non-negative least squares and generalized Lasso (in constrained forms), are also worked out to demonstrate the concrete applicability of the theory in problems of different types.
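
The isotonic regression example can be made concrete: the constrained LSE solves $\min_{\mu\in K}\|Y-X\mu\|^2$ with $K$ the monotone cone, which a generic convex solver handles directly. The snippet below uses cvxpy purely for illustration; the tiny problem sizes say nothing about the $m\gg\log n$ versus $m\gg n^{1/3}$ sample-complexity discussion above.

```python
# A concrete instance of the convex constrained LSE from the isotonic regression example:
# minimize ||Y - X mu||^2 subject to mu being non-decreasing.  cvxpy is used as a
# generic convex solver purely for illustration.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, m, sigma = 50, 30, 1.0
mu0 = np.sort(rng.normal(size=n))                    # a monotone (non-decreasing) signal
X = rng.normal(size=(m, n)) / np.sqrt(m)
Y = X @ mu0 + sigma * rng.normal(size=m)

mu = cp.Variable(n)
problem = cp.Problem(cp.Minimize(cp.sum_squares(Y - X @ mu)),
                     [mu[1:] >= mu[:-1]])            # monotonicity: K = isotone cone
problem.solve()
print(np.mean((mu.value - mu0) ** 2))                # empirical risk of the constrained LSE
```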

We study the problem of learning in the stochastic shortest path (SSP) setting, where an agent seeks to minimize the expected cost accumulated before reaching a goal state. We design a novel model-based algorithm EB-SSP that carefully skews the empirical transitions and perturbs the empirical costs with an exploration bonus to guarantee both optimism and convergence of the associated value iteration scheme. We prove that EB-SSP achieves the minimax regret rate $\widetilde{O}(B_{\star} \sqrt{S A K})$, where $K$ is the number of episodes, $S$ is the number of states, $A$ is the number of actions and $B_{\star}$ bounds the expected cumulative cost of the optimal policy from any state, thus closing the gap with the lower bound. Interestingly, EB-SSP obtains this result while being parameter-free, i.e., it does not require any prior knowledge of $B_{\star}$, nor of $T_{\star}$ which bounds the expected time-to-goal of the optimal policy from any state. Furthermore, we illustrate various cases (e.g., positive costs, or general costs when an order-accurate estimate of $T_{\star}$ is available) where the regret only contains a logarithmic dependence on $T_{\star}$, thus yielding the first horizon-free regret bound beyond the finite-horizon MDP setting.
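
To fix ideas, the sketch below shows a schematic optimistic value-iteration sweep on an empirical SSP model with a generic exploration bonus; the specific bonus and the transition skewing used by EB-SSP are not reproduced, and the `bonus` array is a placeholder.

```python
# A schematic optimistic value-iteration sweep for an SSP on an empirical model:
# costs are optimistically lowered by a generic exploration bonus and the goal state
# has value zero.  The specific bonus and transition skewing of EB-SSP are NOT
# reproduced here; `bonus` is a placeholder.
import numpy as np

def optimistic_vi(P_hat, c_hat, bonus, goal, n_iter=1000):
    """P_hat: (S, A, S) empirical transitions; c_hat, bonus: (S, A) arrays."""
    S, A, _ = P_hat.shape
    V = np.zeros(S)
    for _ in range(n_iter):
        Q = np.maximum(c_hat - bonus, 0.0) + P_hat @ V    # optimistic Bellman backup
        Q[goal, :] = 0.0                                  # goal is absorbing and costless
        V = Q.min(axis=1)
    return V, Q.argmin(axis=1)

# Tiny 3-state, 2-action example with state 2 as the goal.
S_, A_ = 3, 2
P_hat = np.full((S_, A_, S_), 1.0 / S_)
P_hat[2] = 0.0; P_hat[2, :, 2] = 1.0                      # goal is absorbing
c_hat = np.ones((S_, A_)); c_hat[2] = 0.0
V, pi = optimistic_vi(P_hat, c_hat, bonus=0.1 * np.ones((S_, A_)), goal=2)
print(V, pi)
```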
