
Although there is an extensive literature on the maxima of Gaussian processes, there are relatively few non-asymptotic bounds on their lower-tail probabilities. The aim of this paper is to develop such a bound, while also allowing for many types of dependence. Let $(\xi_1,\dots,\xi_N)$ be a centered Gaussian vector with standardized entries, whose correlation matrix $R$ satisfies $\max_{i\neq j} R_{ij}\leq \rho_0$ for some constant $\rho_0\in (0,1)$. Then, for any $\epsilon_0\in(0,\sqrt{1-\rho_0})$, we establish an upper bound on the probability $\mathbb{P}(\max_{1\leq j\leq N} \xi_j\leq \epsilon_0\sqrt{2\log(N)})$ in terms of $(\rho_0,\epsilon_0,N)$. The bound is also sharp, in the sense that it is attained up to a constant, independent of $N$. Next, we apply this result in the context of high-dimensional statistics, where we simplify and weaken conditions that have recently been used to establish near-parametric rates of bootstrap approximation. Lastly, an interesting aspect of this application is that it makes use of recent refinements of Bourgain and Tzafriri's "restricted invertibility principle".
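
As a quick numerical illustration of the quantity being bounded (not an analysis from the paper), the sketch below estimates $\mathbb{P}(\max_{1\leq j\leq N} \xi_j\leq \epsilon_0\sqrt{2\log(N)})$ by Monte Carlo for an equicorrelated vector whose off-diagonal correlation equals $\rho_0$; the values of $N$, $\rho_0$, and $\epsilon_0$ are arbitrary choices satisfying $\epsilon_0<\sqrt{1-\rho_0}$.

```python
import numpy as np

# Monte Carlo estimate of the lower-tail probability
# P(max_j xi_j <= eps0 * sqrt(2 log N)) for an equicorrelated
# standardized Gaussian vector with off-diagonal correlation rho0.
# Illustrative values only; they are not taken from the paper.
rng = np.random.default_rng(0)
N, rho0, eps0 = 500, 0.3, 0.4
n_rep = 20000

# Equicorrelated construction: xi_j = sqrt(rho0)*Z + sqrt(1-rho0)*W_j
Z = rng.standard_normal((n_rep, 1))
W = rng.standard_normal((n_rep, N))
xi = np.sqrt(rho0) * Z + np.sqrt(1.0 - rho0) * W

threshold = eps0 * np.sqrt(2.0 * np.log(N))
prob = np.mean(xi.max(axis=1) <= threshold)
print(f"estimated lower-tail probability: {prob:.4f}")
```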

Related Content

We consider the approximation of the inverse square root of regularly accretive operators in Hilbert spaces. The approximation is of rational type and comes from the use of the Gauss-Legendre rule applied to a special integral formulation of the problem. We derive sharp error estimates, based on the use of the numerical range, and provide some numerical experiments. For practical purposes, the finite dimensional case is also considered. In this setting, the convergence is shown to be of exponential type.
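
For a concrete finite-dimensional picture, the following sketch applies Gauss-Legendre quadrature to the standard integral representation $A^{-1/2} = \frac{2}{\pi}\int_0^\infty (t^2 I + A)^{-1}\,dt$ for a symmetric positive definite matrix $A$ (a simple stand-in for a regularly accretive operator). The change of variables and node count are illustrative choices, not the paper's specific formulation.

```python
import numpy as np

def inv_sqrt_gauss_legendre(A, n_nodes=20):
    """Rational approximation of A^{-1/2} from Gauss-Legendre quadrature
    applied to A^{-1/2} = (2/pi) * int_0^inf (t^2 I + A)^{-1} dt,
    with t = (1+x)/(1-x) mapping [-1, 1) onto [0, inf)."""
    n = A.shape[0]
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    approx = np.zeros_like(A)
    for xi, wi in zip(x, w):
        t = (1.0 + xi) / (1.0 - xi)
        jac = 2.0 / (1.0 - xi) ** 2          # dt/dx
        approx += wi * jac * np.linalg.inv(t * t * np.eye(n) + A)
    return (2.0 / np.pi) * approx

A = np.diag([1.0, 4.0, 9.0])                  # exact A^{-1/2} = diag(1, 1/2, 1/3)
print(np.round(inv_sqrt_gauss_legendre(A, 40), 4))
```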

We provide a new theory for nodewise regression when the residuals from a fitted factor model are used. We apply our results to the analysis of the consistency of Sharpe ratio estimators when there are many assets in a portfolio. We allow both the number of assets and the number of time observations of the portfolio to increase. Since nodewise regression is not feasible due to the unknown nature of the idiosyncratic errors, we provide a feasible, residual-based nodewise regression to estimate the precision matrix of the errors, which is consistent even when the number of assets, p, exceeds the time span of the portfolio, n. In another new development, we also show that the precision matrix of returns can be estimated consistently, even with an increasing number of factors and p>n. We show that: (1) with p>n, the Sharpe ratio estimators are consistent in global minimum-variance and mean-variance portfolios; (2) with p>n, the maximum Sharpe ratio estimator is consistent when the portfolio weights sum to one; and (3) with p<<n, the maximum-out-of-sample Sharpe ratio estimator is consistent.
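
The sketch below shows the generic nodewise-regression construction of a precision-matrix estimate from a residual matrix, in the Meinshausen-Bühlmann/van de Geer style. It assumes a residual matrix `U` is already available from a fitted factor model, uses a single hypothetical penalty level for every node, and omits the feasibility corrections developed in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_precision(U, lam=0.1):
    """Generic nodewise-regression estimate of the precision matrix of the
    rows of U (n x p). Simplified sketch: one penalty level lam for every
    node; the paper's feasible-residual refinements are omitted."""
    n, p = U.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=lam, fit_intercept=False).fit(U[:, others], U[:, j])
        gamma = fit.coef_
        resid = U[:, j] - U[:, others] @ gamma
        tau2 = resid @ resid / n + lam * np.abs(gamma).sum()
        Theta[j, j] = 1.0 / tau2
        Theta[j, others] = -gamma / tau2
    return Theta

# toy usage with synthetic residuals
rng = np.random.default_rng(1)
U = rng.standard_normal((200, 50))
Theta_hat = nodewise_precision(U, lam=0.05)
```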

Adaptive spectral (AS) decompositions associated with a piecewise constant function $u$ yield small subspaces where the characteristic functions comprising $u$ are well approximated. When combined with Newton-like optimization methods for the solution of inverse medium problems, AS decompositions have proved remarkably efficient in providing at each nonlinear iteration a low-dimensional search space. Here, we derive $L^2$-error estimates for the AS decomposition of $u$, truncated after $K$ terms, when $u$ is piecewise constant and consists of $K$ characteristic functions over Lipschitz domains and a background. Our estimates apply both to the continuous and the discrete Galerkin finite element setting. Numerical examples illustrate the accuracy of the AS decomposition for media that either do, or do not, satisfy the assumptions of the theory.

Many functions have approximately known upper and/or lower bounds, and this knowledge can potentially aid in modeling such functions. In this paper, we introduce Gaussian process models for functions where such bounds are (approximately) known. More specifically, we propose the first use of such bounds to improve Gaussian process (GP) posterior sampling and Bayesian optimization (BO). That is, we transform a GP model so that it satisfies the given bounds, and then sample and weight functions from its posterior. To further exploit these bounds in BO settings, we present bounded entropy search (BES), which selects the point that gains the most information about the underlying function, as estimated by the GP samples, while satisfying the output constraints. We characterize the sample variance bounds and show that the decisions made by BES are explainable. Our proposed approach is conceptually straightforward and can be used as a plug-in extension to existing methods for GP posterior sampling and Bayesian optimization.
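
As a simplified illustration of the "sample and weight" step (this is not the paper's transformed-GP construction), the sketch below draws functions from an ordinary GP posterior and weights them by whether they respect assumed output bounds; the data, kernel, and bound values are all hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy setup: noiseless observations of a function assumed to lie in [-1, 1].
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(8, 1))
y_train = np.sin(6 * X_train[:, 0])
lower, upper = -1.0, 1.0              # assumed known output bounds

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X_train, y_train)
X_test = np.linspace(0, 1, 100).reshape(-1, 1)
samples = gp.sample_y(X_test, n_samples=200, random_state=0)   # shape (100, 200)

# Indicator weights: keep only sampled paths satisfying the bounds everywhere.
weights = ((samples >= lower) & (samples <= upper)).all(axis=0).astype(float)
weights /= max(weights.sum(), 1.0)
mean_bounded = samples @ weights      # bound-respecting posterior mean estimate
```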

We consider the problem of minimizing regret in an $N$-agent heterogeneous stochastic linear bandits framework, where the agents (users) are similar but not all identical. We model user heterogeneity using two ideas that are popular in practice: (i) a clustering framework, where users are partitioned into groups such that users within a group are identical to each other but differ across groups, and (ii) a personalization framework, where no two users are necessarily identical but each user's parameters are close to the population average. In the clustered setup, we propose a novel algorithm based on successive refinement of cluster identities and regret minimization. We show that, for any agent, the regret scales as $\mathcal{O}(\sqrt{T/N})$ if the agent is in a `well separated' cluster, and as $\mathcal{O}(T^{\frac{1}{2} + \varepsilon}/(N)^{\frac{1}{2} -\varepsilon})$ if its cluster is not well separated, where $\varepsilon$ is positive and arbitrarily close to $0$. Our algorithm is adaptive to the cluster separation and is parameter free: it does not need to know the number of clusters, the separation, or the cluster sizes, yet the regret guarantee adapts to the inherent complexity. In the personalization framework, we introduce a natural algorithm in which each personal bandit instance is initialized with an estimate of the global average model. We show that an agent $i$ whose parameter deviates from the population average by $\epsilon_i$ attains a regret scaling of $\widetilde{O}(\epsilon_i\sqrt{T})$. This demonstrates that if the user representations are close (small $\epsilon_i$), the resulting regret is low, and vice versa. The results are empirically validated, and we observe superior performance of our adaptive algorithms over non-adaptive baselines.
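
The following sketch illustrates only the personalization idea in isolation, not the paper's bandit algorithm: an agent's parameter is estimated by ridge regression shrunk toward an estimate of the population average, so agents whose parameters are close to the average need little individual data. The function and argument names are hypothetical.

```python
import numpy as np

def personalized_estimate(X_i, y_i, theta_global, lam=1.0):
    """Solve min_theta ||y_i - X_i theta||^2 + lam * ||theta - theta_global||^2,
    i.e. a per-agent ridge estimate shrunk toward the population-average model."""
    d = X_i.shape[1]
    A = X_i.T @ X_i + lam * np.eye(d)
    b = X_i.T @ y_i + lam * theta_global
    return np.linalg.solve(A, b)
```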

Analysing statistical properties of neural networks is a central topic in statistics and machine learning. However, most results in the literature focus on the properties of the neural network minimizing the training error. The goal of this paper is to consider aggregated neural networks using a Gaussian prior. The departure point of our approach is an arbitrary aggregate satisfying the PAC-Bayesian inequality. The main contribution is a precise nonasymptotic assessment of the estimation error appearing in the PAC-Bayes bound. We also review available bounds on the error of approximating a function by a neural network. Combining bounds on estimation and approximation errors, we establish risk bounds that are sharp enough to lead to minimax rates of estimation over Sobolev smoothness classes.
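
For context, one standard (McAllester-type) form of the PAC-Bayes inequality for losses bounded in $[0,1]$ is shown below; the paper sharpens bounds of this general shape rather than using this exact form. With probability at least $1-\delta$ over the sample,

$$\mathbb{E}_{\theta\sim\hat{\rho}}\,R(\theta) \;\le\; \mathbb{E}_{\theta\sim\hat{\rho}}\,\widehat{R}_n(\theta) \;+\; \sqrt{\frac{\mathrm{KL}(\hat{\rho}\,\|\,\pi) + \log\bigl(2\sqrt{n}/\delta\bigr)}{2n}},$$

where $\pi$ is the (Gaussian) prior, $\hat{\rho}$ is the aggregation distribution, and $R$ and $\widehat{R}_n$ denote the population and empirical risks.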

We present a general family of subcell limiting strategies to construct robust high-order accurate nodal discontinuous Galerkin (DG) schemes. The main strategy is to construct compatible low-order finite volume (FV) type discretizations that allow for convex blending with the high-order variant with the goal of guaranteeing additional properties, such as bounds on physical quantities and/or guaranteed entropy dissipation. For an implementation of this main strategy, four main ingredients are identified that may be combined in a flexible manner: (i) a nodal high-order DG method on Legendre-Gauss-Lobatto nodes, (ii) a compatible robust subcell FV scheme, (iii) a convex combination strategy for the two schemes, which can be element-wise or subcell-wise, and (iv) a strategy to compute the convex blending factors, which can be based either on heuristic troubled-cell indicators or on ideas from flux-corrected transport methods. By carefully designing the metric terms of the subcell FV method, the resulting methods can be used on unstructured curvilinear meshes, are locally conservative, and can handle strong shocks efficiently while directly guaranteeing physical bounds on quantities such as density, pressure, or entropy. We further show that it is possible to choose the four ingredients to recover existing methods such as a provably entropy dissipative subcell shock-capturing approach or a sparse invariant domain preserving approach. We test the versatility of the presented strategies and mix and match the four ingredients to solve challenging simulation setups, such as the KPP problem (a hyperbolic conservation law with non-convex flux function), turbulent and hypersonic Euler simulations, and MHD problems featuring shocks and turbulence.
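
A minimal sketch of ingredient (iii), the convex combination of the two candidate updates (illustrative only; metric terms, flux computations, and the paper's actual indicators and flux-corrected transport machinery are omitted, and the indicator below is a hypothetical heuristic):

```python
import numpy as np

def blend(u_dg, u_fv, alpha):
    """Element-wise convex blending of a high-order DG update with a
    compatible low-order FV update.
    u_dg, u_fv: (n_elements, n_nodes) candidate updates; alpha: (n_elements,)."""
    alpha = np.clip(alpha, 0.0, 1.0)[:, None]
    return (1.0 - alpha) * u_dg + alpha * u_fv

def heuristic_alpha(u_dg, scale=1.0):
    """Toy troubled-cell indicator: use more FV where the in-element
    solution variation is large."""
    jumps = np.max(u_dg, axis=1) - np.min(u_dg, axis=1)
    return np.tanh(scale * jumps)
```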

Normalizing Flows (NFs) are universal density estimators based on Neural Networks. However, this universality is limited: the density's support needs to be diffeomorphic to a Euclidean space. In this paper, we propose a novel method to overcome this limitation without sacrificing universality. The proposed method inflates the data manifold by adding noise in the normal space, trains an NF on this inflated manifold, and, finally, deflates the learned density. Our main result provides sufficient conditions on the manifold and the specific choice of noise under which the corresponding estimator is exact. Our method has the same computational complexity as NFs and does not require computing an inverse flow. We also show that, if the embedding dimension is much larger than the manifold dimension, noise in the normal space can be well approximated by Gaussian noise. This allows using our method for approximating arbitrary densities on unknown manifolds provided that the manifold dimension is known.

We propose confidence regions with asymptotically correct uniform coverage probability for parameters whose Fisher information matrix can be singular at important points of the parameter set. Our work is motivated by the need for reliable inference on scale parameters close to or equal to zero in mixed models, which is obtained as a special case. The confidence regions are constructed by inverting a continuous extension of the score test statistic standardized by expected information, which we show exists at points of singular information under regularity conditions. Similar results have previously only been obtained for scalar parameters, under conditions stronger than ours, and applications to mixed models have not been considered. In simulations our confidence regions have near-nominal coverage with as few as $n = 20$ independent observations, regardless of how close to the boundary the true parameter is. It is a corollary of our main results that the proposed test statistic has an asymptotic chi-square distribution with degrees of freedom equal to the number of tested parameters, even if they are on the boundary of the parameter set.
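
Concretely, in the regular (nonsingular) case the construction amounts to inverting the familiar score statistic standardized by expected information: with score vector $S_n(\theta)$ and expected information $I_n(\theta)$ for $k$ tested parameters, the nominal $1-\alpha$ region is

$$\mathcal{C}_{1-\alpha}=\bigl\{\theta : S_n(\theta)^{\top} I_n(\theta)^{-1} S_n(\theta)\leq \chi^2_{k,\,1-\alpha}\bigr\},$$

and the paper's contribution is a continuous extension of this statistic that remains well defined when $I_n(\theta)$ is singular.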

Robust estimation is much more challenging in high dimensions than it is in one dimension: most techniques either lead to intractable optimization problems or yield estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial-time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
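
As background, a bare-bones version of the "filtering" idea common in this line of work (not the paper's refined algorithm) repeatedly removes the points with the most extreme projections onto the top eigenvector of the empirical covariance and returns the mean of what remains; the thresholds below are assumed, illustrative values.

```python
import numpy as np

def filter_mean(X, eps=0.1, spectral_threshold=1.5, max_iter=20):
    """Crude filter-style robust mean estimate for data X of shape (n, d).
    While the empirical covariance has an unusually large top eigenvalue,
    drop the points with the most extreme projections onto the
    corresponding eigenvector, then report the mean of what remains."""
    X = X.copy()
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        eigvals, eigvecs = np.linalg.eigh(cov)
        if eigvals[-1] <= spectral_threshold:        # covariance looks benign
            break
        proj = np.abs((X - mu) @ eigvecs[:, -1])
        keep = proj <= np.quantile(proj, 1.0 - eps)  # drop the most extreme tail
        X = X[keep]
    return X.mean(axis=0)
```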
