
In this work, we develop a numerical method to study the error estimates of the $\alpha$-stable central limit theorem under sublinear expectation with $\alpha \in(0,2)$, whose limit distribution can be characterized by a fully nonlinear partial integro-differential equation (PIDE). Based on a sequence of independent random variables, we propose a discrete approximation scheme for the fully nonlinear PIDE. With the help of nonlinear stochastic analysis techniques and numerical analysis tools, we establish error bounds for the discrete approximation scheme, which in turn provide a general error bound for the robust $\alpha$-stable central limit theorem, covering both the integrable case $\alpha \in(1,2)$ and the non-integrable case $\alpha \in(0,1]$. Finally, we provide some concrete examples to illustrate our main results and derive the precise convergence rates.
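For orientation, the generator of such a limit is a supremum of $\alpha$-stable Lévy operators over an uncertainty set. A schematic form of the fully nonlinear PIDE, written in our own notation rather than necessarily the paper's (with $\mathcal{L}$ a set of $\alpha$-stable Lévy measures, and with the small-jump compensator convention depending on whether $\alpha$ lies above or below 1), reads:

$$
\partial_t u(t,x) - \sup_{F_\mu \in \mathcal{L}} \int_{\mathbb{R}\setminus\{0\}} \Big( u(t,x+z) - u(t,x) - D_x u(t,x)\, z\, \mathbf{1}_{\{|z| \le 1\}} \Big)\, F_\mu(\mathrm{d}z) = 0, \qquad u(0,x) = \phi(x),
$$

where $\phi$ is the test function appearing in the central limit theorem.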

Related content

We introduce a new Langevin dynamics based algorithm, called e-TH$\varepsilon$O POULA, to solve optimization problems with discontinuous stochastic gradients which naturally appear in real-world applications such as quantile estimation, vector quantization, CVaR minimization, and regularized optimization problems involving ReLU neural networks. We demonstrate both theoretically and numerically the applicability of the e-TH$\varepsilon$O POULA algorithm. More precisely, under the conditions that the stochastic gradient is locally Lipschitz in average and satisfies a certain convexity at infinity condition, we establish non-asymptotic error bounds for e-TH$\varepsilon$O POULA in Wasserstein distances and provide a non-asymptotic estimate for the expected excess risk, which can be controlled to be arbitrarily small. Three key applications in finance and insurance are provided, namely, multi-period portfolio optimization, transfer learning in multi-period portfolio optimization, and insurance claim prediction, which involve neural networks with (Leaky)-ReLU activation functions. Numerical experiments conducted using real-world datasets illustrate the superior empirical performance of e-TH$\varepsilon$O POULA compared to SGLD, TUSLA, ADAM, and AMSGrad in terms of model accuracy.
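As a rough illustration of the family of algorithms involved, the following is a minimal Python sketch of a tamed stochastic gradient Langevin update. The taming function `tame` and the step-size/temperature parameters below are generic placeholders, not the actual e-TH$\varepsilon$O POULA update, whose componentwise taming and boosting functions are defined in the paper.

```python
import numpy as np

def tame(g, lam):
    """Generic taming: rescale the stochastic gradient so a single step
    stays bounded even when g grows superlinearly (placeholder, not the
    e-THeO POULA taming function)."""
    return g / (1.0 + np.sqrt(lam) * np.linalg.norm(g))

def langevin_step(theta, stoch_grad, lam=1e-2, beta=1e8,
                  rng=np.random.default_rng()):
    """One tamed SGLD-type step: a gradient-descent move plus Gaussian
    exploration noise scaled by the inverse temperature beta."""
    g = stoch_grad(theta)                      # noisy gradient estimate
    noise = rng.normal(size=theta.shape)
    return theta - lam * tame(g, lam) + np.sqrt(2.0 * lam / beta) * noise
```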

In this article, we consider the problem of approximating a finite set of data (usually huge in applications) by invariant subspaces generated by a small set of smooth functions. The invariance is either under translations by a full-rank lattice or under the action of a crystallographic group. Smoothness is ensured by stipulating that the generators belong to a Paley-Wiener space, which is selected in an optimal way based on the characteristics of the given data. To complete our investigation, we analyze the fundamental role played by the lattice in the approximation process.

Partial differential equations with highly oscillatory input terms are hardly ever solvable analytically, and their numerical treatment is difficult. Modulated Fourier expansion, used as an {\it ansatz}, is a well-known and extensively investigated tool in the asymptotic numerical approach to this kind of problem. Although the efficiency of this approach has been recognised, its error analysis has not been investigated rigorously for general forms of linear PDEs. In this paper, we initiate such an investigation for a general form of linear PDE with an input term characterised by a single high frequency. More precisely, we derive an analytical form of such an expansion and provide a formula for the error of its truncation. The theoretical investigations are illustrated by computational simulations.
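To fix ideas, a modulated Fourier expansion for a problem forced at a single high frequency $\omega$ takes a form along the following lines, with coefficient functions $p_m$ that vary slowly relative to the oscillation (the exact indexing and truncation used in the paper may differ):

$$
u(x,t) \approx \sum_{m=-M}^{M} p_m(x,t)\, \mathrm{e}^{\mathrm{i} m \omega t}.
$$

Substituting this ansatz into the PDE and matching terms carrying equal oscillatory factors yields a hierarchy of non-oscillatory equations for the $p_m$, which is what makes the approach attractive numerically.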

In this paper, we consider the numerical solution of a nonlinear Schrödinger equation with a spatial random potential. A randomly shifted quasi-Monte Carlo (QMC) lattice rule combined with a time-splitting pseudospectral discretization is applied and analyzed. The nonlinearity in the equation makes it difficult to estimate the regularity of the solution in the random space. Using the technique of weighted Sobolev spaces, we identify the admissible weights and show the existence of QMC rules that converge optimally at an almost-linear rate, with no dependence on the dimension. The full error estimate of the scheme is established. We present numerical results to verify the accuracy and to investigate the wave propagation.
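For concreteness, here is a minimal sketch of how randomly shifted rank-1 lattice points are generated. The generating vector `z` below is an arbitrary placeholder; in practice it would be constructed (e.g., by a component-by-component algorithm) for the weighted space identified in the analysis.

```python
import numpy as np

def shifted_lattice_points(N, z, rng=np.random.default_rng()):
    """Randomly shifted rank-1 lattice rule: x_i = frac(i*z/N + shift).

    N : number of points; z : integer generating vector of length d.
    Returns an N x d array of points in [0,1)^d.
    """
    shift = rng.random(len(z))          # one uniform random shift per dimension
    i = np.arange(N).reshape(-1, 1)
    return np.mod(i * np.asarray(z) / N + shift, 1.0)

# Example: 64 points in 3 dimensions with a placeholder generating vector
pts = shifted_lattice_points(64, z=[1, 19, 27])
```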

We prove that the discrete Laplace operator has a bounded $H^\infty$-calculus, independent of the spatial mesh size. As an application, we obtain the discrete stochastic maximal $L^p$-regularity estimate for a spatial semidiscretization of a stochastic parabolic equation. In addition, we derive some (nearly) sharp error estimates for this spatial semidiscretization.
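As a quick illustration of the operator in question, the sketch below assembles the standard 1D finite-difference Laplacian and checks that its spectrum lies on the negative real axis for every mesh size, which is the kind of uniform sectoriality that underlies a mesh-independent functional calculus. This is only a sanity check, not the paper's proof.

```python
import numpy as np

def discrete_laplacian(n):
    """Second-order finite-difference Laplacian on (0,1) with homogeneous
    Dirichlet boundary conditions and mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

for n in [10, 100, 1000]:
    eigs = np.linalg.eigvalsh(discrete_laplacian(n))
    # Known closed form: -(4/h^2) sin^2(k*pi*h/2), all strictly negative
    print(n, eigs.max() < 0)
```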

We prove and collect numerous explicit and computable results for the fractional Laplacian $(-\Delta)^s f(x)$ with $s>0$ as well as its whole space inverse, the Riesz potential, $(-\Delta)^{-s}f(x)$ with $s\in\left(0,\frac{1}{2}\right)$. Choices of $f(x)$ include weighted classical orthogonal polynomials such as the Legendre, Chebyshev, Jacobi, Laguerre and Hermite polynomials, or first and second kind Bessel functions with or without sinusoid weights. Some higher dimensional fractional Laplacians and Riesz potentials of generalized Zernike polynomials on the unit ball and its complement as well as whole space generalized Laguerre polynomials are also discussed. The aim of this paper is to aid in the continued development of numerical methods for problems involving the fractional Laplacian or the Riesz potential in bounded and unbounded domains -- both directly by providing useful basis or frame functions for spectral method approaches and indirectly by providing accessible ways to construct computable toy problems on which to test new numerical methods.
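As a baseline example of the kind of closed form the paper collects (though far simpler than its weighted-polynomial results), the Fourier-multiplier definition $\widehat{(-\Delta)^s f}(\xi) = |\xi|^{2s} \hat f(\xi)$ immediately gives

$$
(-\Delta)^s\, \mathrm{e}^{\mathrm{i}\xi x} = |\xi|^{2s}\, \mathrm{e}^{\mathrm{i}\xi x}, \qquad s > 0,
$$

so plane waves, and hence sines and cosines, are eigenfunctions of the fractional Laplacian. Explicit formulas of this type are precisely what make spectral-method test problems computable.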

Given an integer partition of $n$, we consider the impartial combinatorial game LCTR, in which moves consist of removing either the left column or the top row of its Young diagram. We show that, for both normal and mis\`ere play, the optimal strategy can consist mostly of mirroring the opponent's moves. We also establish that both LCTR and Downright are domestic as well as returnable, but neither tame nor forced. For both games, these structural observations allow the Sprague-Grundy value of any position to be computed in $O(\log(n))$ time, assuming that reading an integer or performing a basic arithmetic operation takes one unit of time. This improves on the previously known bound of $O(n)$ due to Ili\'c (2019). We also cover some other complexity measures of both games, such as state-space complexity and the number of leaves and nodes in the corresponding game tree.
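For small instances, the Sprague-Grundy values of LCTR under normal play can be checked directly with the naive recursion below, which is far slower than the paper's $O(\log(n))$ method but useful as a reference. The move encoding, with a partition stored as a non-increasing tuple of row lengths, is our own choice.

```python
from functools import lru_cache

def mex(s):
    """Minimum excludant: smallest non-negative integer not in s."""
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(partition):
    """Sprague-Grundy value of LCTR on a Young diagram, where `partition`
    is a tuple of row lengths in non-increasing order; the empty tuple is
    the terminal position."""
    if not partition:
        return 0
    top_row_removed = partition[1:]                          # drop the top row
    left_col_removed = tuple(r - 1 for r in partition if r > 1)  # shave left column
    return mex({grundy(top_row_removed), grundy(left_col_removed)})

# Example: the partition (3, 2, 1) of n = 6
print(grundy((3, 2, 1)))
```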

We offer an alternative proof, using the Stein-Chen method, of Bollob\'{a}s' theorem concerning the distribution of the extreme degrees of a random graph. Our proof also provides a rate of convergence of the extreme degree to its asymptotic distribution. The same method applies in a more general setting, where the probability of each pair of vertices being connected by an edge depends on the number of vertices.

Recent extensive numerical experiments in large-scale machine learning have uncovered a quite counterintuitive phase transition, as a function of the ratio between the sample size and the number of parameters in the model. As the number of parameters $p$ approaches the sample size $n$, the generalisation error increases, but surprisingly, it starts decreasing again past the threshold $p=n$. This phenomenon, brought to the theoretical community's attention in \cite{belkin2019reconciling}, has been thoroughly investigated lately, more specifically for models simpler than deep neural networks, such as the linear model when the parameter is taken to be the minimum-norm solution to the least-squares problem, first in the asymptotic regime where $p$ and $n$ tend to infinity, see e.g. \cite{hastie2019surprises}, and recently in the finite-dimensional regime, more specifically for linear models \cite{bartlett2020benign}, \cite{tsigler2020benign}, \cite{lecue2022geometrical}. In the present paper, we propose a finite-sample analysis of non-linear models of \textit{ridge} type, where we investigate the \textit{overparametrised regime} of the double descent phenomenon for both the \textit{estimation problem} and the \textit{prediction} problem. Our results provide a precise analysis of the distance of the best estimator from the true parameter, as well as a generalisation bound which complements the recent works \cite{bartlett2020benign} and \cite{chinot2020benign}. Our analysis is based on tools closely related to the continuous Newton method \cite{neuberger2007continuous} and a refined quantitative analysis of the performance in prediction of the minimum $\ell_2$-norm solution.
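The baseline object in this literature, the minimum $\ell_2$-norm least-squares interpolator, is easy to experiment with. The following self-contained sketch (with arbitrary synthetic data, unrelated to the paper's setting) typically shows test error peaking near $p = n$ and falling again beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, p_max = 50, 500, 150
beta = rng.normal(size=p_max) / np.sqrt(p_max)   # true parameter
X_full = rng.normal(size=(n, p_max))
y = X_full @ beta + 0.5 * rng.normal(size=n)
X_test_full = rng.normal(size=(n_test, p_max))
y_test = X_test_full @ beta

for p in [10, 25, 45, 50, 55, 75, 150]:
    X, X_test = X_full[:, :p], X_test_full[:, :p]
    beta_hat = np.linalg.pinv(X) @ y             # minimum l2-norm least-squares solution
    err = np.mean((X_test @ beta_hat - y_test) ** 2)
    print(f"p = {p:3d}  test MSE = {err:.3f}")
```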

Learning distance functions between complex objects, such as the Wasserstein distance to compare point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions, e.g., permutations or rigid transformations. Therefore, continuous and symmetric product functions (such as distance functions) on such complex objects must also be invariant to the product of such group actions. We call these functions symmetric and factor-wise group invariant (SFGI functions for short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper combines this general architecture with a sketching idea to develop a specific and efficient neural network which can approximate the $p$-th Wasserstein distance between point sets. Very importantly, the required model complexity is independent of the sizes of the input point sets. On the theoretical front, to the best of our knowledge, this is the first result showing that there exists a neural network with the capacity to approximate the Wasserstein distance with bounded model complexity. Our work provides an interesting integration of sketching ideas for geometric problems with the universal approximation of symmetric functions. On the empirical front, we present a range of results showing that our newly proposed neural network architecture performs comparably to or better than other models (including a SOTA Siamese-autoencoder-based approach). In particular, our neural network generalizes significantly better and trains much faster than the SOTA Siamese AE. Finally, this line of investigation could be useful in exploring effective neural network design for solving a broad range of geometric optimization problems (e.g., $k$-means in a metric space).
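For context, the quantity such a network is trained to approximate can be computed exactly for small, equal-size point sets by optimal assignment. This sketch is a standard construction, not the paper's architecture, and assumes uniform weights on both point sets.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_p(A, B, p=2):
    """Exact p-Wasserstein distance between two equal-size point sets,
    viewed as uniform empirical measures, via optimal assignment."""
    C = cdist(A, B) ** p                     # pairwise transport costs
    rows, cols = linear_sum_assignment(C)    # optimal one-to-one matching
    return C[rows, cols].mean() ** (1.0 / p)

# Example: distance between two random 10-point sets in the plane
rng = np.random.default_rng(1)
A, B = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))
print(wasserstein_p(A, B, p=2))
```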
