We prove absolute regularity ($\beta$-mixing) for nonstationary and multivariate versions of two popular classes of integer-valued processes. We show how this result can be used to prove asymptotic normality of a least squares estimator of an involved model parameter.
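The abstract does not specify which integer-valued processes are treated; as a hedged illustration of the setting, the sketch below simulates an INAR(1) process (a standard integer-valued autoregression based on binomial thinning, used here purely as an assumed example) and estimates its thinning parameter by conditional least squares. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_inar1(alpha, lam, n):
    """Simulate an INAR(1) process X_t = alpha o X_{t-1} + eps_t,
    where 'o' is binomial thinning and eps_t ~ Poisson(lam)."""
    x = np.zeros(n, dtype=int)
    x[0] = rng.poisson(lam)
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_estimate(x):
    """Conditional least squares: regress X_t on X_{t-1} with intercept,
    since E[X_t | X_{t-1}] = alpha * X_{t-1} + lam."""
    y, z = x[1:], x[:-1]
    A = np.column_stack([z, np.ones_like(z)])
    (alpha_hat, lam_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha_hat, lam_hat

x = simulate_inar1(alpha=0.5, lam=2.0, n=20000)
alpha_hat, lam_hat = cls_estimate(x)
```

Asymptotic normality of such least squares estimators is exactly the kind of result that mixing conditions like absolute regularity make available.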

A general theory of efficient estimation for ergodic diffusion processes sampled at high frequency with an infinite time horizon is presented. High frequency sampling is common in many applications, with finance as a prominent example. The theory is formulated in terms of approximate martingale estimating functions and covers a large class of estimators including most of the previously proposed estimators for diffusion processes. Easily checked conditions ensuring that an estimating function is an approximate martingale are derived, and general conditions ensuring consistency and asymptotic normality of estimators are given. Most importantly, simple conditions are given that ensure rate optimality and efficiency. Rate optimal estimators of parameters in the diffusion coefficient converge faster than estimators of drift coefficient parameters because they take advantage of the information in the quadratic variation. The conditions facilitate the choice among the multitude of estimators that have been proposed for diffusion models. Optimal martingale estimating functions in the sense of Godambe and Heyde and their high frequency approximations are, under weak conditions, shown to satisfy the conditions for rate optimality and efficiency. This provides a natural feasible method of constructing explicit rate optimal and efficient estimating functions by solving a linear equation.
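As a minimal sketch of the martingale estimating function idea (not the paper's general construction), consider the Ornstein-Uhlenbeck model $dX_t = -\theta X_t\,dt + \sigma\,dW_t$ observed at spacing $\Delta$. Since $E[X_{t_i} \mid X_{t_{i-1}}] = X_{t_{i-1}} e^{-\theta\Delta}$, the linear estimating function $G_n(\theta) = \sum_i X_{t_{i-1}}(X_{t_i} - X_{t_{i-1}} e^{-\theta\Delta})$ is a martingale and can be solved in closed form; the code and parameter values below are illustrative.

```python
import numpy as np

def theta_hat(x, dt):
    """Solve the martingale estimating equation
    sum x_{i-1} (x_i - x_{i-1} e^{-theta dt}) = 0 in closed form."""
    num = np.dot(x[1:], x[:-1])
    den = np.dot(x[:-1], x[:-1])
    return -np.log(num / den) / dt

dt, theta, sigma = 0.01, 1.0, 0.5

# Noiseless path (sigma = 0): the estimator recovers theta exactly.
x_det = 3.0 * np.exp(-theta * dt * np.arange(5000))

# Simulated high-frequency path using the exact AR(1) transition.
rng = np.random.default_rng(1)
n = 20000
a = np.exp(-theta * dt)
s = sigma * np.sqrt((1 - a**2) / (2 * theta))  # exact transition std dev
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):
    x[i] = a * x[i - 1] + s * rng.standard_normal()
```

The exact recovery on the noiseless path reflects that the estimating function has exact (not merely approximate) conditional mean zero for this model.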

We address the numerical treatment of source terms in algebraic flux correction schemes for steady convection-diffusion-reaction (CDR) equations. The proposed algorithm constrains a continuous piecewise-linear finite element approximation using a monolithic convex limiting (MCL) strategy. Failure to discretize the convective derivatives and source terms in a compatible manner produces spurious ripples, e.g., in regions where the coefficients of the continuous problem are constant and the exact solution is linear. We cure this deficiency by incorporating source term components into the fluxes and intermediate states of the MCL procedure. The design of our new limiter is motivated by the desire to preserve simple steady-state equilibria exactly, as in well-balanced schemes for the shallow water equations. The results of our numerical experiments for two-dimensional CDR problems illustrate potential benefits of well-balanced flux limiting in the scalar case.
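The monolithic convex limiting strategy of the paper is considerably more elaborate; as a hedged one-dimensional sketch of the underlying bound-preserving idea in algebraic flux correction, the following applies a classical Zalesak-type limiter that scales antidiffusive fluxes so the corrected solution stays within the local bounds of the low-order solution (periodic neighbors are assumed for the bounds, and all inputs are synthetic).

```python
import numpy as np

def limited_update(u_low, flux):
    """Zalesak-style flux limiting in 1D: flux[j] acts between nodes j and
    j+1; node update is u[i] = u_low[i] + a[i-1]*flux[i-1] - a[i]*flux[i],
    with factors a in [0,1] chosen so u stays within local bounds."""
    # Local bounds from the low-order solution and its (periodic) neighbors.
    u_max = np.maximum(u_low, np.maximum(np.roll(u_low, 1), np.roll(u_low, -1)))
    u_min = np.minimum(u_low, np.minimum(np.roll(u_low, 1), np.roll(u_low, -1)))
    # Incoming/outgoing raw flux contributions at each node.
    f_in = np.concatenate(([0.0], flux))    # flux[i-1] enters node i with +
    f_out = np.concatenate((flux, [0.0]))   # flux[i] leaves node i with -
    P_plus = np.maximum(f_in, 0) + np.maximum(-f_out, 0)
    P_minus = np.minimum(f_in, 0) + np.minimum(-f_out, 0)
    Q_plus, Q_minus = u_max - u_low, u_min - u_low
    with np.errstate(divide="ignore", invalid="ignore"):
        R_plus = np.where(P_plus > 0, np.minimum(1.0, Q_plus / P_plus), 1.0)
        R_minus = np.where(P_minus < 0, np.minimum(1.0, Q_minus / P_minus), 1.0)
    # Correction factor for each flux, depending on its sign.
    alpha = np.where(flux >= 0,
                     np.minimum(R_minus[:-1], R_plus[1:]),
                     np.minimum(R_plus[:-1], R_minus[1:]))
    af = alpha * flux
    return u_low + np.concatenate(([0.0], af)) - np.concatenate((af, [0.0]))

rng = np.random.default_rng(4)
u_low = rng.random(50)
flux = rng.uniform(-1.0, 1.0, size=49)
u = limited_update(u_low, flux)
```

Because each flux enters its two nodes with opposite signs, the limited scheme remains conservative while enforcing the local discrete maximum principle.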

We develop and compare e-variables for testing whether $k$ samples of data are drawn from the same distribution, the alternative being that they come from different elements of an exponential family. We consider the GRO (growth-rate optimal) e-variables for (1) a `small' null inside the same exponential family, and (2) a `large' nonparametric null, as well as (3) an e-variable arrived at by conditioning on the sum of the sufficient statistics. (2) and (3) are efficiently computable, and extend ideas from Turner et al. [2021] and Wald [1947] respectively from Bernoulli to general exponential families. We provide theoretical and simulation-based comparisons of these e-variables in terms of their logarithmic growth rate, and find that for small effects all four e-variables behave surprisingly similarly; for the Gaussian location and Poisson families, e-variables (1) and (3) coincide; for Bernoulli, (1) and (2) coincide; but in general, whether (2) or (3) grows faster under the alternative is family-dependent. We furthermore discuss algorithms for numerically approximating (1).
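As a hedged illustration (not the GRO construction of the paper), the sketch below exhibits the defining property of an e-variable in the Bernoulli family: a likelihood ratio against a simple null is nonnegative and has expectation at most one (here exactly one) under the null, verified by exact summation over all outcomes.

```python
import numpy as np
from itertools import product

def e_value(xs, p_null, p_alt):
    """Likelihood-ratio e-variable: prod q(x)/p(x) for Bernoulli(p), Bernoulli(q)."""
    xs = np.asarray(xs)
    lr = np.where(xs == 1, p_alt / p_null, (1 - p_alt) / (1 - p_null))
    return float(np.prod(lr))

def null_expectation(n, p_null, p_alt):
    """Exact E_null[E] by summing over all 2^n binary outcomes."""
    total = 0.0
    for xs in product([0, 1], repeat=n):
        prob = np.prod([p_null if x else 1 - p_null for x in xs])
        total += prob * e_value(xs, p_null, p_alt)
    return total
```

GRO e-variables additionally maximize the expected logarithmic growth rate under the alternative, which is the comparison criterion used throughout the paper.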

The theory of generalized locally Toeplitz (GLT) sequences is a powerful apparatus for computing the asymptotic spectral distribution of matrices $A_n$ arising from numerical discretizations of differential equations. Indeed, when the mesh fineness parameter $n$ tends to infinity, these matrices $A_n$ give rise to a sequence $\{A_n\}_n$, which often turns out to be a GLT sequence. In this paper, we extend the theory of GLT sequences in several directions: we show that every GLT sequence enjoys a normal form, we identify the spectral symbol of every GLT sequence formed by normal matrices, and we prove that, for every GLT sequence $\{A_n\}_n$ formed by normal matrices and every continuous function $f:\mathbb C\to\mathbb C$, the sequence $\{f(A_n)\}_n$ is again a GLT sequence whose spectral symbol is $f(\kappa)$, where $\kappa$ is the spectral symbol of $\{A_n\}_n$. In addition, using the theory of GLT sequences, we prove a spectral distribution result for perturbed normal matrices.
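A hedged illustration of the simplest (purely Toeplitz) instance of a spectral symbol: the tridiagonal matrix from the 3-point discrete Laplacian has symbol $f(\theta) = 2 - 2\cos\theta$, and its eigenvalues coincide with samples of $f$ on a uniform grid.

```python
import numpy as np

n = 200
# Tridiagonal Toeplitz matrix arising from the 3-point discrete Laplacian.
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eigs = np.linalg.eigvalsh(A)  # ascending order

# Spectral symbol f(theta) = 2 - 2 cos(theta) sampled at theta_k = k pi/(n+1).
theta = np.arange(1, n + 1) * np.pi / (n + 1)
symbol_samples = 2 - 2 * np.cos(theta)
```

For genuine GLT sequences the symbol $\kappa$ may also depend on a space variable, but the distributional role it plays is the same as in this constant-coefficient case.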

One tuple of probability vectors is more informative than another tuple when there exists a single stochastic matrix transforming the probability vectors of the first tuple into the probability vectors of the other. This is called matrix majorization. Solving an open problem raised by Mu et al., we show that if certain monotones, namely multivariate extensions of R\'{e}nyi divergences, are strictly ordered between the two tuples, then for sufficiently large $n$, there exists a stochastic matrix taking the $n$-fold Kronecker power of each input distribution to the $n$-fold Kronecker power of the corresponding output distribution. The same conditions, with non-strict ordering for the monotones, are also necessary for such matrix majorization in large samples. Our result also gives conditions for the existence of a sequence of statistical maps that asymptotically (with vanishing error) convert a single copy of each input distribution to the corresponding output distribution with the help of a catalyst that is returned unchanged. Allowing for transformation with arbitrarily small error, we find conditions that are both necessary and sufficient for such catalytic matrix majorization. We derive our results by building on a general algebraic theory of preordered semirings recently developed by one of the authors. This also allows us to recover various existing results on majorization in large samples and in the catalytic regime as well as relative majorization in a unified manner.
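The monotonicity that makes Rényi divergences useful here is the classical data-processing inequality: applying any stochastic matrix cannot increase $D_\alpha(p \| q)$. A hedged numerical check of this single-pair special case (the paper works with multivariate extensions) is sketched below with randomly drawn distributions.

```python
import numpy as np

def renyi_div(p, q, alpha):
    """Renyi divergence D_alpha(p || q) for alpha != 1, q strictly positive."""
    return np.log(np.sum(p**alpha * q**(1 - alpha))) / (alpha - 1)

rng = np.random.default_rng(2)
d_in, d_out = 5, 4
p = rng.dirichlet(np.ones(d_in))
q = rng.dirichlet(np.ones(d_in))
# Column-stochastic matrix T: each column is a distribution over outputs.
T = rng.dirichlet(np.ones(d_out), size=d_in).T
Tp, Tq = T @ p, T @ q
```

The inequality $D_\alpha(Tp \| Tq) \le D_\alpha(p \| q)$ holds for every order $\alpha \ge 0$ and every stochastic $T$, which is why such quantities can serve as monotones for matrix majorization.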

It is known that standard stochastic Galerkin methods encounter challenges when solving partial differential equations with high-dimensional random inputs, which are typically caused by the large number of stochastic basis functions required. It becomes crucial to properly choose effective basis functions, such that the dimension of the stochastic approximation space can be reduced. In this work, we focus on the stochastic Galerkin approximation associated with generalized polynomial chaos (gPC), and explore the gPC expansion based on the analysis of variance (ANOVA) decomposition. A concise form of the gPC expansion is presented for each component function of the ANOVA expansion, and an adaptive ANOVA procedure is proposed to construct the overall stochastic Galerkin system. Numerical results demonstrate the efficiency of our proposed adaptive ANOVA stochastic Galerkin method for both diffusion and Helmholtz problems.
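As a hedged toy sketch of the adaptivity criterion (not the paper's full procedure), the code below computes first-order ANOVA component variances of an additive test function by Gauss-Legendre quadrature on $U[-1,1]^4$ and keeps only the dimensions whose variance contribution exceeds a tolerance; the function and threshold are illustrative.

```python
import numpy as np

def f(x):
    """Test function: one strong, one moderate and two negligible directions."""
    return x[0] + 0.5 * x[1]**2 + 0.01 * x[2] + 0.001 * x[3]

# Gauss-Legendre rule on [-1, 1], normalized to the uniform measure.
nodes, weights = np.polynomial.legendre.leggauss(20)
weights = weights / 2.0

def first_order_variance(j, dim=4):
    """Variance of the 1D component in direction j (f is additive, so
    freezing the other coordinates at 0 isolates that component)."""
    vals = np.array([f([nodes[k] if i == j else 0.0 for i in range(dim)])
                     for k in range(len(nodes))])
    mean = np.sum(weights * vals)
    return np.sum(weights * (vals - mean)**2)

variances = [first_order_variance(j) for j in range(4)]
active = [j for j, v in enumerate(variances) if v > 1e-2]
```

Retaining gPC basis functions only for the selected component functions is what shrinks the stochastic Galerkin system in the adaptive ANOVA approach.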

We establish optimal error bounds on time-splitting methods for the nonlinear Schr\"odinger equation with low regularity potential and typical power-type nonlinearity $ f(\rho) = \rho^\sigma $, where $ \rho:=|\psi|^2 $ is the density with $ \psi $ the wave function and $ \sigma > 0 $ the exponent of the nonlinearity. For the first-order Lie-Trotter time-splitting method, optimal $ L^2 $-norm error bound is proved for $L^\infty$-potential and $ \sigma > 0 $, and optimal $H^1$-norm error bound is obtained for $ W^{1, 4} $-potential and $ \sigma \geq 1/2 $. For the second-order Strang time-splitting method, optimal $ L^2 $-norm error bound is established for $H^2$-potential and $ \sigma \geq 1 $, and optimal $H^1$-norm error bound is proved for $H^3$-potential and $ \sigma \geq 3/2 $ (or $\sigma = 1$). Compared to those error estimates of time-splitting methods in the literature, our optimal error bounds either improve the convergence rates under the same regularity assumptions or significantly relax the regularity requirements on potential and nonlinearity for optimal convergence orders. A key ingredient in our proof is to adopt a new technique called \textit{regularity compensation oscillation} (RCO), where low frequency modes are analyzed by phase cancellation, and high frequency modes are estimated by regularity of the solution. Extensive numerical results are reported to confirm our error estimates and to demonstrate that they are sharp.
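A hedged sketch of one first-order Lie-Trotter step for the cubic NLS ($\sigma = 1$) on a periodic interval: the nonlinear substep is an exact pointwise phase rotation (since $|\psi|$ is unchanged by it) and the linear substep is exact in Fourier space. Both substeps are unitary, so the discrete mass is conserved to roundoff; the check below verifies only this structural property, not the error bounds of the paper.

```python
import numpy as np

def lie_trotter_step(psi, tau, k2):
    """One Lie-Trotter step for i psi_t = -psi_xx + |psi|^2 psi (periodic)."""
    psi = psi * np.exp(-1j * tau * np.abs(psi)**2)                # nonlinear flow, exact
    psi = np.fft.ifft(np.exp(-1j * tau * k2) * np.fft.fft(psi))   # linear flow, exact
    return psi

n, L = 256, 2 * np.pi
x = np.arange(n) * L / n
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # integer wavenumbers
k2 = k**2

psi = np.exp(1j * np.sin(x)) * (1.0 + 0.5 * np.cos(x))
mass0 = np.sum(np.abs(psi)**2) * L / n
for _ in range(100):
    psi = lie_trotter_step(psi, tau=1e-3, k2=k2)
mass = np.sum(np.abs(psi)**2) * L / n
```

A Strang step would symmetrize this by splitting the nonlinear substep into two half-steps around the linear flow.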

By using the stochastic particle method, the truncated Euler-Maruyama (TEM) method is proposed for numerically solving McKean-Vlasov stochastic differential equations (MV-SDEs), possibly with both drift and diffusion coefficients having super-linear growth in the state variable. Firstly, the result of the propagation of chaos in the $L^q$ ($q\geq 2$) sense is obtained under general assumptions. Then, the standard $1/2$-order strong convergence rate in the $L^q$ sense of the proposed method corresponding to the particle system is derived by utilizing the stopping time analysis technique. Furthermore, long-time dynamical properties of MV-SDEs, including the moment boundedness, stability, and the existence and uniqueness of the invariant probability measure, can be numerically realized by the TEM method. Additionally, it is proven that the numerical invariant measure converges to the underlying one of MV-SDEs in the $L^2$-Wasserstein metric. Finally, the conclusions obtained in this paper are verified through examples and numerical simulations.
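The truncation in the TEM scheme exists to tame super-linear coefficients; as a hedged sketch of just the particle-method ingredient, the code below applies plain Euler-Maruyama (no truncation needed for this linear example) to the mean-field SDE $dX = (E[X] - X)\,dt + \sigma\,dW$, replacing $E[X]$ by the empirical particle mean. The model and all values are illustrative, not the paper's test problems.

```python
import numpy as np

def euler_particles(x0, sigma, dt, steps, rng=None):
    """Euler particle method for dX = (E[X] - X) dt + sigma dW,
    with E[X] approximated by the mean over the interacting particles."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        m = x.mean()                         # empirical mean-field term
        noise = 0.0 if rng is None else rng.standard_normal(x.size)
        x = x + (m - x) * dt + sigma * np.sqrt(dt) * noise
    return x

x0 = np.linspace(-2.0, 2.0, 1000)
x_end = euler_particles(x0, sigma=0.0, dt=0.01, steps=500)
```

In the noiseless case the particle mean is preserved exactly at every step while the spread contracts geometrically, mirroring the stability and invariant-measure behavior studied in the paper.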

This paper proposes several approaches as baselines to compute a shared active subspace for multivariate vector-valued functions. The goal is to minimize the deviation between the function evaluations on the original space and those on the reconstructed one. This is done either by manipulating the gradients or the symmetric positive (semi-)definite (SPD) matrices computed from the gradients of each component function so as to get a single structure common to all component functions. These approaches can be applied to any data irrespective of the underlying distribution unlike the existing vector-valued approach that is constrained to a normal distribution. We test the effectiveness of these methods on five optimization problems. The experiments show that, in general, the SPD-level methods are superior to the gradient-level ones, and are close to the vector-valued approach in the case of a normal distribution. Interestingly, in most cases it suffices to take the sum of the SPD matrices to identify the best shared active subspace.
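A hedged sketch of the simplest SPD-level construction mentioned at the end: sum the per-component matrices $C_j = E[\nabla f_j \nabla f_j^\top]$ and take leading eigenvectors of the sum as the shared active subspace. The test functions below vary only along one common direction, so every gradient is parallel to it; names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_samples = 6, 500
w = np.array([3.0, 1.0, 0.0, 0.0, 0.0, 0.0])
w /= np.linalg.norm(w)

# Two component functions varying only along w:
# f1(x) = sin(w.x) and f2(x) = (w.x)^2, so grad f_j is parallel to w.
X = rng.standard_normal((n_samples, d))
t = X @ w
grads1 = np.cos(t)[:, None] * w          # grad f1 = cos(w.x) w
grads2 = (2 * t)[:, None] * w            # grad f2 = 2 (w.x) w

# Sum of the per-function SPD matrices C_j = E[grad f_j grad f_j^T].
C = grads1.T @ grads1 / n_samples + grads2.T @ grads2 / n_samples
eigvals, eigvecs = np.linalg.eigh(C)
top = eigvecs[:, -1]                     # leading eigenvector
```

When the component functions share a low-dimensional active subspace, the summed matrix is (numerically) low rank and its leading eigenvectors recover that subspace.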

We study least-squares trace regression when the parameter is the sum of an $r$-low-rank matrix and an $s$-sparse matrix and a fraction $\epsilon$ of the labels is corrupted. For subgaussian distributions and feature-dependent noise, we highlight three needed design properties, each one derived from a different process inequality: a "product process inequality", "Chevet's inequality" and a "multiplier process inequality". These properties handle, simultaneously, additive decomposition, label contamination and design-noise interaction. They imply the near-optimality of a tractable estimator with respect to the effective dimensions $d_{eff,r}$ and $d_{eff,s}$ of the low-rank and sparse components, $\epsilon$ and the failure probability $\delta$. The near-optimal rate is $\mathsf{r}(n,d_{eff,r}) + \mathsf{r}(n,d_{eff,s}) + \sqrt{(1+\log(1/\delta))/n} + \epsilon\log(1/\epsilon)$, where $\mathsf{r}(n,d_{eff,r})+\mathsf{r}(n,d_{eff,s})$ is the optimal rate in average with no contamination. Our estimator is adaptive to $(s,r,\epsilon,\delta)$ and, for fixed absolute constant $c>0$, it attains the mentioned rate with probability $1-\delta$ uniformly over all $\delta\ge\exp(-cn)$. Without matrix decomposition, our analysis also entails optimal bounds for a robust estimator adapted to the noise variance. Our estimators are based on "sorted" versions of Huber's loss. We present simulations matching the theory. In particular, they reveal the superiority of "sorted" Huber's losses over the classical Huber's loss.
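The paper's estimators use "sorted" variants of Huber's loss (with rank-dependent thresholds); as a hedged sketch of only the plain building block, the function below implements the classical Huber loss, quadratic on small residuals and linear (hence with bounded gradient, which is the source of robustness to label contamination) beyond the threshold.

```python
import numpy as np

def huber(r, delta):
    """Classical Huber loss: 0.5 r^2 for |r| <= delta,
    delta (|r| - 0.5 delta) beyond, so the gradient is clipped at delta."""
    r = np.abs(r)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

def huber_grad(r, delta):
    """Gradient of the Huber loss: the clipped residual."""
    return np.clip(r, -delta, delta)
```

In the sorted versions, larger residuals (likely the corrupted labels) are matched with more aggressive thresholds, which is what yields adaptivity to the contamination level $\epsilon$.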
