We consider the problem of testing between two Gibbs probabilities $\mu_0$ and $\mu_1$ defined for a dynamical system $(\Omega,T)$. Since full orbits are in general neither observable nor computable, one needs to restrict to subclasses of tests defined by a finite time series $h(x_0), h(x_1)=h(T(x_0)),\ldots, h(x_n)=h(T^n(x_0))$, $x_0\in \Omega$, $n\ge 0$, where $h:\Omega\to\mathbb R$ denotes a suitable measurable function. We determine in each class the Neyman-Pearson tests, the minimax tests, and the Bayes solutions, and show the asymptotic decay of their risk functions as $n\to\infty$. When $\Omega$ is a symbolic space, these optimal tests rely, for each $n\in \mathbb{N}$, only on the measures of cylinder sets of length $n$.
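
As a concrete illustration of the cylinder-set structure of these optimal tests, the following sketch (a toy example, assuming two Markov measures as stand-ins for Gibbs measures, with hypothetical transition matrices `P0`, `P1` and stationary vectors `pi0`, `pi1`) computes the Neyman-Pearson log-likelihood ratio of an observed word of length $n+1$:

```python
import numpy as np

def log_cylinder_prob(word, P, pi):
    """Log-probability of the cylinder set [word] under a Markov
    measure with transition matrix P and initial vector pi."""
    ll = np.log(pi[word[0]])
    for a, b in zip(word[:-1], word[1:]):
        ll += np.log(P[a, b])
    return ll

def np_statistic(word, P0, pi0, P1, pi1):
    """Neyman-Pearson log-likelihood ratio for H0: mu_0 vs H1: mu_1;
    the test rejects H0 when this exceeds a level-dependent threshold."""
    return log_cylinder_prob(word, P1, pi1) - log_cylinder_prob(word, P0, pi0)

# Hypothetical two-symbol example.
P0 = np.array([[0.9, 0.1], [0.2, 0.8]]); pi0 = np.array([2/3, 1/3])
P1 = np.array([[0.5, 0.5], [0.5, 0.5]]); pi1 = np.array([0.5, 0.5])
print(np_statistic([0, 0, 1, 0], P0, pi0, P1, pi1))
```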

Related content

For $R\triangleq Mat_{m}(\mathbb{F})$, the ring of all $m\times m$ matrices over the finite field $\mathbb{F}$ with $|\mathbb{F}|=q$, and the left $R$-module $A\triangleq Mat_{m,k}(\mathbb{F})$ with $m+1\leqslant k$, Dyshko has proved in \cite{3,4}, by deriving the minimal length of solutions of the related isometry equation, that the minimal code length $n$ for which $A^{n}$ fails to satisfy the MacWilliams extension property with respect to the Hamming weight is equal to $\prod_{i=1}^{m}(q^{i}+1)$. In this paper, using M\"{o}bius functions, we derive the minimal length of nontrivial solutions of the isometry equation with respect to a finite lattice. For the finite vector space $\mathbf{H}\triangleq\prod_{i\in\Omega}\mathbb{F}^{k_{i}}$, a poset $\mathbf{P}=(\Omega,\preccurlyeq_{\mathbf{P}})$ and a map $\omega:\Omega\longrightarrow\mathbb{R}^{+}$ give rise to the $(\mathbf{P},\omega)$-weight on $\mathbf{H}$, which was proposed by Hyun, Kim and Park in \cite{18}. For such a weight, we study the relations between the MacWilliams extension property and other properties, including admitting a MacWilliams identity, Fourier-reflexivity of the involved partitions, and the Unique Decomposition Property defined for $(\mathbf{P},\omega)$. We give necessary and sufficient conditions for $\mathbf{H}$ to satisfy the MacWilliams extension property under the additional assumption that either $\mathbf{P}$ is hierarchical or $\omega$ is identically $1$, i.e., the $(\mathbf{P},\omega)$-weight coincides with the $\mathbf{P}$-weight; these conditions further allow us to partially answer a conjecture proposed by Machado and Firer in \cite{22}.
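
As a small illustration of the weight the paper works with, here is a minimal sketch of the $(\mathbf{P},\omega)$-weight, which we take to be the $\omega$-mass of the order ideal of $\mathbf{P}$ generated by the support of a vector (our reading of the definition in \cite{18}); the chain poset and the map `omega` below are hypothetical choices:

```python
def p_omega_weight(support, leq, omega):
    """(P, omega)-weight of a vector: the omega-mass of the order
    ideal of P generated by the vector's support (our reading of the
    definition of Hyun, Kim and Park)."""
    ideal = {a for b in support for a in omega if leq(a, b)}
    return sum(omega[a] for a in ideal)

# Hypothetical example: the chain 0 < 1 < 2 with omega identically 1,
# in which case the (P, omega)-weight reduces to the usual P-weight.
omega = {0: 1.0, 1: 1.0, 2: 1.0}
leq = lambda a, b: a <= b
print(p_omega_weight({2}, leq, omega))   # ideal {0, 1, 2} -> 3.0
```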

Classical two-sample permutation tests for equality of distributions have exact size in finite samples, but they fail to control size when testing equality of parameters that summarize each distribution. This paper proposes permutation tests for equality of parameters that are estimated at root-$n$ or slower rates. Our general framework applies to both parametric and nonparametric models, with two samples or one sample split into two subsamples. Our tests have correct size asymptotically while preserving exact size in finite samples when the distributions are equal. They have no loss in local asymptotic power compared to tests that use asymptotic critical values. We propose confidence sets with correct coverage in large samples that also have exact coverage in finite samples if the distributions are equal up to a transformation. We apply our theory to four commonly used hypothesis tests of nonparametric functions evaluated at a point. Lastly, simulations show good finite-sample properties, and two empirical examples illustrate our tests in practice.
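
For intuition, here is a minimal sketch of the underlying device in its simplest instance, testing equality of means: pooled observations are permuted while the statistic is studentized, which is what permits asymptotic size control when only the parameters, not the full distributions, agree. This illustrates the idea, not the paper's general construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def studentized_perm_test(x, y, n_perm=999):
    """Permutation test for equality of means with a studentized
    statistic; returns the permutation p-value."""
    def stat(u, v):
        return (u.mean() - v.mean()) / np.sqrt(
            u.var(ddof=1) / len(u) + v.var(ddof=1) / len(v))
    t_obs = abs(stat(x, y))
    z = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(z)                        # permute the pooled sample
        count += abs(stat(z[:len(x)], z[len(x):])) >= t_obs
    return (1 + count) / (1 + n_perm)

# Hypothetical example: equal means, very different distributions.
x = rng.standard_normal(100)
y = rng.exponential(1.0, size=150) - 1.0
print(studentized_perm_test(x, y))
```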

In this work, we study a least squares estimator based on random orthogonal projection for the stable solution of a multivariate nonparametric regression (MNPR) problem. More precisely, given an integer $d\geq 1$ corresponding to the dimension of the MNPR problem, a positive integer $N\geq 1$ and a real parameter $\alpha\geq -\frac{1}{2},$ we show that a fairly large class of $d$-variate regression functions is well and stably approximated by its random projection over the orthonormal set of tensor-product $d$-variate Jacobi polynomials with parameters $(\alpha,\alpha).$ The associated univariate Jacobi polynomials have degree at most $N$, and their tensor products are orthonormal over $\mathcal U=[0,1]^d$ with respect to the associated multivariate Jacobi weight. In particular, if we consider $n$ random sampling points $\mathbf X_i$ following the $d$-variate Beta distribution with parameters $(\alpha+1,\alpha+1),$ then we give a relation among $n, N, \alpha$ that ensures that the resulting $(N+1)^d\times (N+1)^d$ random projection matrix is well conditioned. Moreover, we provide the integrated squared error as well as the $L^2$-risk of this estimator. Precise estimates of these errors are given in the case where the regression function belongs to an isotropic Sobolev space $H^s(\mathcal U),$ with $s> \frac{d}{2}.$ Also, to handle the general and practical case of an unknown distribution of the $\mathbf X_i,$ we use Shepard's scattered interpolation scheme to generate fairly precise approximations of the observed data at $n$ i.i.d. sampling points following a $d$-variate Beta distribution. Finally, we illustrate the performance of our proposed multivariate nonparametric estimator through numerical simulations with synthetic as well as real data.
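
A minimal sketch of the tensor-product Jacobi least squares fit, assuming `scipy` and, for brevity, skipping the normalization that would make the basis orthonormal with respect to the multivariate Jacobi weight; the target function and parameter choices are hypothetical:

```python
import numpy as np
from scipy.special import eval_jacobi
from itertools import product

def design_matrix(X, N, alpha):
    """Tensor-product Jacobi design matrix on [0,1]^d (polynomials
    are not renormalized here, unlike in the paper's analysis)."""
    Z = 2.0 * X - 1.0                       # map [0,1]^d -> [-1,1]^d
    d = X.shape[1]
    cols = []
    for degs in product(range(N + 1), repeat=d):
        col = np.ones(len(X))
        for j, n in enumerate(degs):
            col *= eval_jacobi(n, alpha, alpha, Z[:, j])
        cols.append(col)
    return np.column_stack(cols)

# Sample X_i ~ Beta(alpha+1, alpha+1) coordinatewise, then solve the
# least squares problem for the projection coefficients.
rng = np.random.default_rng(1)
n, d, N, alpha = 500, 2, 5, 0.0
X = rng.beta(alpha + 1, alpha + 1, size=(n, d))
y = np.cos(2 * np.pi * X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(n)
A = design_matrix(X, N, alpha)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```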

We further the research on the phenomenon of accelerated optimization on Riemannian manifolds by introducing accelerated global first-order methods for the optimization of $L$-smooth and geodesically convex (g-convex) or $\mu$-strongly g-convex functions defined on the hyperbolic space or a subset of the sphere. For a manifold other than the Euclidean space, these are the first methods to \emph{globally} achieve the same rates as accelerated gradient descent in the Euclidean space with respect to $L$ and $\varepsilon$ (and $\mu$ if it applies), up to log factors. Previous results with these accelerated rates only worked, given strong g-convexity, in a generally small neighborhood (initial distance $R$ to a minimizer satisfying $R = O((\mu/L)^{3/4})$). Our rates have a polynomial factor in $1/\cos(R)$ (spherical case) or $\cosh(R)$ (hyperbolic case). Thus, we completely match the Euclidean case for a constant initial distance, while for larger $R$ we incur greater constants due to the geometry. As a proxy for our solution, we solve a constrained non-convex Euclidean problem, under a condition between convexity and \textit{quasar-convexity}, of independent interest. Additionally, for any Riemannian manifold of bounded sectional curvature, we provide reductions from optimization methods for smooth and g-convex functions to methods for smooth and strongly g-convex functions, and vice versa.
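
As background for the geometric setting, the sketch below implements plain (unaccelerated) Riemannian gradient descent on the unit sphere via the exponential map; the accelerated methods in question replace this step with a momentum-type construction. The gradient oracle `egrad`, smoothness constant, and example are hypothetical:

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: move from x along tangent v."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def riemannian_gd(x, egrad, L, n_iters=100):
    """Riemannian gradient descent with step 1/L: project the
    Euclidean gradient onto the tangent space, then take a geodesic step."""
    for _ in range(n_iters):
        g = egrad(x)
        rgrad = g - (g @ x) * x          # tangent-space projection
        x = sphere_exp(x, -rgrad / L)
    return x

# Hypothetical example: minimize f(x) = -<a, x> on the sphere,
# whose minimizer is a / ||a||.
a = np.array([1.0, 2.0, 2.0])
x0 = np.array([1.0, 0.0, 0.0])
print(riemannian_gd(x0, lambda x: -a, L=3.0, n_iters=200))
```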

We establish a new perturbation theory for orthogonal polynomials using a Riemann--Hilbert approach and consider applications in numerical linear algebra and random matrix theory. This new approach shows that the orthogonal polynomials with respect to two measures can be effectively compared using the difference of their Stieltjes transforms on a suitably chosen contour. Moreover, when two measures are close and satisfy some regularity conditions, we use the theta functions of a hyperelliptic Riemann surface to derive explicit and accurate expansion formulae for the perturbed orthogonal polynomials. In contrast to other approaches, a key strength of the methodology is that estimates can remain valid as the degree of the polynomial grows. The results are applied to analyze several numerical algorithms from linear algebra, including the Lanczos tridiagonalization procedure, the Cholesky factorization and the conjugate gradient algorithm. As a case study, we investigate these algorithms applied to a general spiked sample covariance matrix model by considering the eigenvector empirical spectral distribution and its limits. For the first time, we give precise estimates on the output of the algorithms, applied to this wide class of random matrices, as the number of iterations diverges. In this setting, beyond the first order expansion, we also derive a new mesoscopic central limit theorem for the associated orthogonal polynomials and other quantities relevant to numerical algorithms.
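
The connection to numerical linear algebra can be made concrete: the Lanczos coefficients computed below are exactly the three-term recurrence coefficients of the orthogonal polynomials for the spectral measure of $A$ weighted by the starting vector, which is the object such a perturbation analysis tracks. A minimal sketch, with no reorthogonalization; the random matrix at the end is a hypothetical test case:

```python
import numpy as np

def lanczos(A, q0, k):
    """Lanczos tridiagonalization (sketch, no reorthogonalization).
    Returns the recurrence coefficients (alphas, betas) of the
    orthogonal polynomials for the spectral measure of A at q0."""
    n = len(q0)
    Q = np.zeros((n, k + 1))
    Q[:, 0] = q0 / np.linalg.norm(q0)
    alphas, betas = [], []
    for j in range(k):
        w = A @ Q[:, j]
        if j > 0:
            w -= betas[-1] * Q[:, j - 1]
        alpha = Q[:, j] @ w
        w -= alpha * Q[:, j]
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:
            break
        betas.append(beta)
        Q[:, j + 1] = w / beta
    return np.array(alphas), np.array(betas)

# Hypothetical use on a random symmetric matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
alphas, betas = lanczos((M + M.T) / np.sqrt(800),
                        rng.standard_normal(200), 30)
```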

We study frequentist risk properties of predictive density estimators for mean mixtures of multivariate normal distributions, involving an unknown location parameter $\theta \in \mathbb{R}^d$, and which include multivariate skew normal distributions. We provide explicit representations for Bayesian posterior and predictive densities, including the benchmark minimum risk equivariant (MRE) density, which is minimax and generalized Bayes with respect to an improper uniform density for $\theta$. For four dimensions or more, we obtain Bayesian densities that improve uniformly on the MRE density under Kullback-Leibler loss. We also provide plug-in type improvements, investigate implications for certain types of parametric restrictions on $\theta$, and illustrate and comment on the findings based on numerical evaluations.
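
To make Kullback-Leibler risk concrete, the following sketch estimates it by Monte Carlo in a one-dimensional normal location toy model (a stand-in, not the paper's mean-mixture setting): it compares the plug-in predictive density $N(X,1)$ with the variance-inflated density $N(X,2)$, which is the MRE choice in this simple model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def kl_risk(pred_scale, theta, n_mc=100_000):
    """Monte Carlo KL risk of the predictive density N(X, pred_scale^2)
    for a future Y ~ N(theta, 1), based on observing X ~ N(theta, 1)."""
    x = theta + rng.standard_normal(n_mc)
    y = theta + rng.standard_normal(n_mc)
    return np.mean(norm.logpdf(y, theta, 1) - norm.logpdf(y, x, pred_scale))

print(kl_risk(1.0, 0.0))            # plug-in N(X, 1): risk approx 0.5
print(kl_risk(np.sqrt(2), 0.0))     # MRE N(X, 2): risk approx 0.5*log(2)
```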

Many problems in computational science and engineering can be described in terms of approximating a smooth function of $d$ variables, defined over an unknown domain of interest $\Omega\subset \mathbb{R}^d$, from sample data. Here both the curse of dimensionality ($d\gg 1$) and the lack of domain knowledge, with $\Omega$ potentially irregular and/or disconnected, are confounding factors for sampling-based methods. Na\"{i}ve approaches often lead to wasted samples and inefficient approximation schemes. For example, uniform sampling can result in upwards of 20\% wasted samples in some problems. In surrogate model construction in computational uncertainty quantification (UQ), the high cost of computing samples calls for a more efficient sampling procedure. In recent years, methods for computing such approximations from sample data have been studied in the case of irregular domains, and the advantages of computing sampling measures depending on an approximation space $P$ of dimension $\dim(P)=N$ have been shown. In particular, such methods confer advantages such as stability and well-conditioning, with sample complexity $\mathcal{O}(N\log(N))$. The recently proposed adaptive sampling for general domains (ASGD) strategy is one method to construct these sampling measures. The main contribution of this paper is to improve ASGD by adaptively updating the sampling measures over unknown domains. We achieve this by first introducing a general domain adaptivity strategy (GDAS), which approximates the function and domain of interest from sample points. Second, we propose adaptive sampling for unknown domains (ASUD), which generates sampling measures over a domain that may not be known in advance. We then derive least squares techniques for polynomial approximation on unknown domains. Numerical results show that the ASUD approach can reduce the computational cost by as much as 50\% when compared with uniform sampling.
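
A minimal sketch of the kind of measure such methods build on: sampling proportionally to the leverage scores (equivalently, the inverse Christoffel function) of a polynomial basis restricted to an irregular domain known only through indicator queries. The domain, basis, and target function below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

def leverage_score_sampling(V, n_samples):
    """Draw rows of V with probability proportional to their leverage
    scores; this density yields the O(N log N) sample complexity."""
    Q, _ = np.linalg.qr(V)
    scores = (Q ** 2).sum(axis=1)
    p = scores / scores.sum()
    return rng.choice(len(V), size=n_samples, replace=True, p=p), p

# Hypothetical irregular domain inside [-1,1]^2, known only through
# point queries of its indicator function.
cand = rng.uniform(-1, 1, size=(20_000, 2))
pts = cand[(cand ** 2).sum(axis=1) > 0.25]
# Total-degree-N polynomial basis (plain monomials, for brevity).
N = 7
V = np.column_stack([pts[:, 0]**i * pts[:, 1]**j
                     for i in range(N + 1) for j in range(N + 1 - i)])
idx, p = leverage_score_sampling(V, 4 * V.shape[1])
w = 1.0 / np.sqrt(len(pts) * p[idx])           # importance weights
f = lambda x: np.exp(-x[:, 0]**2 - 2 * x[:, 1]**2)
coef, *_ = np.linalg.lstsq(w[:, None] * V[idx], w * f(pts[idx]), rcond=None)
```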

In distributional reinforcement learning, not only the expected return but the complete return distribution of a policy is taken into account. The return distribution for a fixed policy is given as the fixed point of an associated distributional Bellman operator. In this note we consider general distributional Bellman operators and study the existence and uniqueness of their fixed points as well as the tail properties of these fixed points. We give necessary and sufficient conditions for the existence and uniqueness of return distributions and identify cases of regular variation. We link distributional Bellman equations to multivariate distributional equations of the form $\textbf{X} =_d \textbf{A}\textbf{X} + \textbf{B}$, where $\textbf{X}$ and $\textbf{B}$ are $d$-dimensional random vectors, $\textbf{A}$ is a random $d\times d$ matrix, and $\textbf{X}$ and $(\textbf{A},\textbf{B})$ are independent. We show that any fixed point of a distributional Bellman operator can be obtained as the vector of marginal laws of a solution to such a multivariate distributional equation. This makes the general theory of such equations applicable to the distributional reinforcement learning setting.
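
For intuition, the scalar ($d=1$) case of the fixed-point equation can be approximated numerically by iterating the map on a particle cloud; the discount value $0.9$ and normal rewards below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(4)

def iterate_affine_fixed_point(sample_AB, n_particles=10_000, n_iters=200):
    """Approximate the law of the fixed point X =_d A*X + B by iterating
    the map on a particle cloud; for |A| < 1 the iteration contracts
    and the particles approach the return distribution."""
    X = np.zeros(n_particles)
    for _ in range(n_iters):
        A, B = sample_AB(n_particles)
        # Permute X so particles are independent of the fresh (A, B).
        X = A * X[rng.permutation(n_particles)] + B
    return X

# Example: fixed discount A = 0.9 and standard normal rewards B,
# so X is a discounted sum of i.i.d. normals.
X = iterate_affine_fixed_point(
    lambda n: (np.full(n, 0.9), rng.standard_normal(n)))
print(X.mean(), X.std())   # approx 0 and 1/sqrt(1 - 0.81) approx 2.29
```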

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
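
A minimal single-machine sketch of the smoothing device behind DRS, assuming Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x+\gamma e)]$ with $e \sim N(0,I)$: the gradient of the smoothed function admits the finite-difference estimator below (the direction count and test function are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

def smoothed_gradient(f, x, gamma, n_dirs=50):
    """Unbiased estimator of the gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * e)], e ~ N(0, I), via random
    finite differences along n_dirs directions."""
    d = len(x)
    g = np.zeros(d)
    for _ in range(n_dirs):
        e = rng.standard_normal(d)
        g += (f(x + gamma * e) - f(x)) / gamma * e
    return g / n_dirs

# Hypothetical non-smooth example: f(x) = ||x||_1.
print(smoothed_gradient(lambda x: np.abs(x).sum(),
                        np.array([1.0, -2.0, 0.0]), gamma=0.1))
```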

In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m}f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
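
A toy instance of the dual approach for quadratic local functions $f_i(x) = \frac{1}{2}(x-a_i)^2$ under the consensus constraint $W\mathbf{x}=0$, with $W$ a graph Laplacian: each dual gradient evaluation is one multiplication by $W$, i.e., one round of local communication. The graph and step-size choices are hypothetical:

```python
import numpy as np

def dual_accelerated_consensus(W, a, n_iters=100):
    """Nesterov's accelerated gradient on the dual of
    min sum_i 0.5*(x_i - a_i)^2  s.t.  W x = 0 (consensus on a
    connected graph with Laplacian W).  The dual gradient at y is
    W @ x(y) with x(y) = a - W @ y, so each iteration costs one
    multiplication by W, i.e. one communication round."""
    n = len(a)
    lam = np.zeros(n)
    prev = np.zeros(n)
    L = np.linalg.eigvalsh(W).max() ** 2   # Lipschitz const. of dual gradient
    for t in range(1, n_iters + 1):
        y = lam + (t - 1) / (t + 2) * (lam - prev)   # momentum step
        x = a - W @ y                                # local primal updates
        prev, lam = lam, y + (W @ x) / L             # dual ascent step
    return a - W @ lam                               # approximately consensual

# Path graph on 5 nodes: all coordinates converge to mean(a) = 2.
n = 5
W = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
W[0, 0] = W[-1, -1] = 1
print(dual_accelerated_consensus(W, np.arange(n, dtype=float)))
```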
