
We establish estimates on the error made by the Deep Ritz Method for elliptic problems on the space $H^1(\Omega)$ with different boundary conditions. For Dirichlet boundary conditions, we estimate the error when the boundary values are approximately enforced through the boundary penalty method. Our results apply to arbitrary and in general nonlinear classes $V\subseteq H^1(\Omega)$ of ansatz functions and estimate the error in terms of the optimization accuracy, the approximation capabilities of the ansatz class and -- in the case of Dirichlet boundary values -- the penalization strength $\lambda$. For non-essential boundary conditions, the error of the Ritz method decays with the same rate as the approximation rate of the ansatz classes. For essential boundary conditions, given an approximation rate of $r$ in $H^1(\Omega)$ and an approximation rate of $s$ in $L^2(\partial\Omega)$ of the ansatz classes, the optimal decay rate of the estimated error is $\min(s/2, r)$ and is achieved by choosing $\lambda_n\sim n^{s}$. We discuss the implications for ansatz classes given through ReLU networks and the relation to existing estimates for finite element functions.
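
To make the boundary penalty method concrete, here is a minimal sketch of a penalized Deep Ritz loss in PyTorch. Everything in it is a placeholder assumption rather than the paper's setup: the domain $\Omega=(0,1)^2$, the source term `f`, the boundary data `g`, and the small ReLU network standing in for the ansatz class $V$.

```python
import torch

# Hypothetical ansatz class V: a small fully connected ReLU network.
u_theta = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)

def f(x):   # placeholder source term
    return torch.ones(x.shape[0], 1)

def g(x):   # placeholder Dirichlet boundary data
    return torch.zeros(x.shape[0], 1)

def sample_boundary(n):
    """Uniform samples on the boundary of the assumed domain (0,1)^2."""
    t = torch.rand(n)
    side = torch.randint(0, 4, (n,))
    x = torch.stack([t, torch.zeros(n)], dim=1)                        # bottom
    x[side == 1] = torch.stack([t, torch.ones(n)], dim=1)[side == 1]   # top
    x[side == 2] = torch.stack([torch.zeros(n), t], dim=1)[side == 2]  # left
    x[side == 3] = torch.stack([torch.ones(n), t], dim=1)[side == 3]   # right
    return x

def penalized_ritz_energy(lam, n_int=1024, n_bdry=256):
    # Monte Carlo estimate of the interior Ritz energy 0.5*|grad u|^2 - f*u
    # over the unit square (|Omega| = 1).
    x = torch.rand(n_int, 2, requires_grad=True)
    u = u_theta(x)
    grad_u = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    interior = (0.5 * (grad_u ** 2).sum(dim=1, keepdim=True) - f(x) * u).mean()
    # Boundary penalty lam * ||u - g||^2_{L^2(boundary)}; perimeter = 4.
    xb = sample_boundary(n_bdry)
    boundary = ((u_theta(xb) - g(xb)) ** 2).mean() * 4.0
    return interior + lam * boundary

opt = torch.optim.Adam(u_theta.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = penalized_ritz_energy(lam=100.0)
    loss.backward()
    opt.step()
```

Increasing `lam` enforces the Dirichlet data more strictly but, as the estimates above quantify, must be balanced against optimization and approximation accuracy; the scaling $\lambda_n\sim n^{s}$ reflects exactly this trade-off.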

Related Content

This work deals with a posteriori error estimates for the Darcy-Forchheimer problem. We introduce the corresponding variational formulation and discretize it using the finite element method. An a posteriori error estimate with two types of computable error indicators is shown: the first is linked to the linearization and the second to the discretization. Finally, numerical computations are performed to show the effectiveness of the error indicators.

Many descent methods for multiobjective optimization problems have been developed in recent years. In 2000, the steepest descent method was proposed for differentiable multiobjective optimization problems. Afterward, the proximal gradient method, which can solve composite problems, was also considered. However, accelerated versions have not been sufficiently studied. In this paper, we propose a multiobjective accelerated proximal gradient algorithm, in which we solve subproblems with terms that appear only in the multiobjective case. We also show the proposed method's global convergence rate of $O(1/k^2)$ under reasonable assumptions, using a merit function to measure the complexity. Moreover, we present an efficient way to solve the subproblem via its dual, and we confirm the validity of the proposed method through preliminary numerical experiments.
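
For context, the sketch below implements the classical single-objective accelerated proximal gradient method (FISTA), which the paper generalizes; the multiobjective subproblem and its dual, which are the paper's contribution, are not reproduced here.

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, L, x0, iters=500):
    """Classical single-objective FISTA with step size 1/L.

    Minimizes f(x) + g(x) for f smooth (grad_f has L-Lipschitz gradient)
    and g with an easy proximal operator prox_g(v, step).
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)   # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: LASSO, f(x) = 0.5*||Ax - b||^2, g(x) = mu*||x||_1.
rng = np.random.default_rng(0)
A, b, mu = rng.standard_normal((50, 100)), rng.standard_normal(50), 0.1
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - mu * s, 0.0)
x_hat = accelerated_proximal_gradient(grad_f, prox_g, L, np.zeros(100))
```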

In this paper, we study the sample complexity of {\em noisy Bayesian quadrature} (BQ), in which we seek to approximate an integral based on noisy black-box queries to the underlying function. We consider functions in a {\em Reproducing Kernel Hilbert Space} (RKHS) with the Mat\'ern-$\nu$ kernel, focusing on combinations of the parameter $\nu$ and dimension $d$ such that the RKHS is equivalent to a Sobolev class. In this setting, we provide near-matching upper and lower bounds on the best possible average error. Specifically, we find that when the black-box queries are subject to Gaussian noise having variance $\sigma^2$, any algorithm making at most $T$ queries (even with adaptive sampling) must incur a mean absolute error of $\Omega(T^{-\frac{\nu}{d}-1} + \sigma T^{-\frac{1}{2}})$, and there exists a non-adaptive algorithm attaining an error of at most $O(T^{-\frac{\nu}{d}-1} + \sigma T^{-\frac{1}{2}})$. Hence, the bounds are order-optimal, and establish that there is no adaptivity gap in terms of scaling laws.
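
As a hedged illustration of the setting (not the paper's algorithm or rates), the following sketch performs non-adaptive noisy Bayesian quadrature on $[0,1]^d$ with a Matérn kernel; the length scale, sample sizes, and test function are arbitrary choices, and the kernel mean embedding is approximated by plain Monte Carlo.

```python
import numpy as np
from sklearn.gaussian_process.kernels import Matern

def noisy_bq_estimate(f_noisy, nu=1.5, d=2, T=200, sigma=0.1, seed=0):
    """Minimal non-adaptive Bayesian quadrature on [0,1]^d.

    Posterior mean of the integral: z^T (K + sigma^2 I)^{-1} y,
    where z_i = int k(x, x_i) dx is approximated by Monte Carlo.
    """
    rng = np.random.default_rng(seed)
    X = rng.random((T, d))                 # non-adaptive query points
    y = np.array([f_noisy(x) for x in X])  # noisy black-box observations
    k = Matern(length_scale=0.2, nu=nu)
    K = k(X) + sigma**2 * np.eye(T)
    U = rng.random((5000, d))              # Monte Carlo kernel mean embedding
    z = k(U, X).mean(axis=0)
    return z @ np.linalg.solve(K, y)

# Example: integrate f(x) = sin(2*pi*x1)*x2 + 0.5 under Gaussian query noise.
noise_rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x[0]) * x[1] + 0.5
f_noisy = lambda x: f(x) + 0.1 * noise_rng.standard_normal()
print(noisy_bq_estimate(f_noisy))   # true integral is 0.5
```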

In this paper, we consider Strassen's version of optimal transport (OT) problem, which concerns minimizing the excess-cost probability (i.e., the probability that the cost is larger than a given value) over all couplings of two given distributions. We derive large deviation, moderate deviation, and central limit theorems for this problem. Our proof is based on Strassen's dual formulation of the OT problem, Sanov's theorem on the large deviation principle (LDP) of empirical measures, as well as the moderate deviation principle (MDP) and central limit theorems (CLT) of empirical measures. In order to apply the LDP, MDP, and CLT to Strassen's OT problem, nested formulas for Strassen's OT problem are derived. Based on these nested formulas and using a splitting technique, we construct asymptotically optimal solutions to Strassen's OT problem and its dual formulation.
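
For concreteness, in (hypothetical) notation chosen here, the excess-cost probability minimized in Strassen's version of the OT problem, for distributions $P$ and $Q$, cost function $c$, and threshold $\lambda$, is

$$\inf_{\pi\in\Pi(P,Q)}\ \pi\big(\{(x,y)\,:\,c(x,y)>\lambda\}\big),$$

where $\Pi(P,Q)$ denotes the set of all couplings of $P$ and $Q$.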

This paper studies the approximation capacity of ReLU neural networks with norm constraints on the weights. We prove upper and lower bounds on the approximation error of these networks for smooth function classes. The lower bound is derived through the Rademacher complexity of neural networks, which may be of independent interest. We apply these approximation bounds to analyze the convergence of regression using norm-constrained neural networks and distribution estimation by GANs. In particular, we obtain convergence rates for over-parameterized neural networks. It is also shown that GANs can achieve the optimal rate of learning probability distributions when the discriminator is a properly chosen norm-constrained neural network.
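
Below is a minimal sketch of training under a weight-norm constraint, enforced by projection after each gradient step; the per-layer Frobenius norm used here is a stand-in, and the specific norm constrained in the paper may differ.

```python
import torch

def project_weight_norms(model, max_norm=1.0):
    """Project each linear layer's weight matrix so its Frobenius norm
    is at most max_norm (a simple proxy for a weight-norm constraint)."""
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, torch.nn.Linear):
                norm = m.weight.norm()
                if norm > max_norm:
                    m.weight.mul_(max_norm / norm)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

# Regression on noisy samples of a smooth target (illustrative data).
x = torch.linspace(-1, 1, 256).unsqueeze(1)
y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)
for step in range(1000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    project_weight_norms(net, max_norm=2.0)   # projected gradient descent
```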

This paper is devoted to the numerical analysis of a piecewise constant discontinuous Galerkin method for time-fractional subdiffusion problems. The regularity of the weak solution is first established using a variational approach and the Mittag-Leffler function. Several optimal error estimates are then derived for low-regularity data. Finally, numerical experiments are conducted to verify the theoretical results.
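
Since such analyses lean on the Mittag-Leffler function $E_{\alpha,\beta}(z)=\sum_{k\ge 0} z^k/\Gamma(\alpha k+\beta)$, here is a minimal truncated-series evaluation; it is adequate only for moderate $|z|$, and dedicated algorithms are needed for large arguments.

```python
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, alpha, beta=1.0, n_terms=100):
    """Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta)."""
    k = np.arange(n_terms)
    return np.sum(z ** k / gamma(alpha * k + beta))

# Sanity check: E_{1,1}(z) = exp(z).
print(mittag_leffler(1.0, alpha=1.0), np.exp(1.0))   # both ~2.71828
# Subdiffusion kernel E_alpha(-t^alpha): exponential decay at alpha = 1,
# algebraic (power-law) decay for 0 < alpha < 1.
print(mittag_leffler(-0.5, alpha=0.5))
```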

The scope of this paper is the analysis and approximation of an optimal control problem related to the Allen-Cahn equation. A tracking functional is minimized subject to the Allen-Cahn equation using distributed controls that satisfy pointwise control constraints. First- and second-order necessary and sufficient conditions are proved. The lowest-order discontinuous Galerkin (in time) scheme is considered for the approximation of the control-to-state and adjoint-state mappings. Under a suitable restriction on the maximal sizes of the temporal and spatial discretization parameters $k$ and $h$, respectively, in terms of the parameter $\epsilon$ that describes the thickness of the interface layer, a priori estimates are proved with constants depending polynomially on $1/\epsilon$. Unlike previous works on the uncontrolled Allen-Cahn problem, our approach does not rely on a construction of an approximation of the spectral estimate, and as a consequence our estimates are valid under the low regularity assumptions imposed by the optimal control setting. These estimates are also valid in cases where the solution and its discrete approximation do not satisfy uniform space-time bounds independent of $\epsilon$. These estimates, combined with a suitable localization technique via the second-order condition (see \cite{Arada-Casas-Troltzsch_2002,Casas-Mateos-Troltzsch_2005,Casas-Raymond_2006,Casas-Mateos-Raymond_2007}), allow us to prove error estimates for the difference between local optimal controls and their discrete approximations, as well as between the associated state and adjoint state variables and their discrete approximations.
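
For orientation, a minimal explicit finite-difference solver for the uncontrolled Allen-Cahn equation $u_t = u_{xx} - \epsilon^{-2}(u^3-u)$ in one dimension is sketched below; the parameters are illustrative, and the paper's discontinuous Galerkin time discretization and control term are not reproduced.

```python
import numpy as np

# Explicit Euler in time, centered differences in space, on (0,1) with
# homogeneous Neumann boundary conditions. Time step obeys k <= h^2/2.
eps, h, k = 0.05, 1.0 / 200, 1e-5        # interface width, mesh sizes
x = np.linspace(0.0, 1.0, 201)
u = np.tanh((x - 0.5) / (np.sqrt(2.0) * eps))   # initial interface profile

for n in range(20000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    lap[0], lap[-1] = lap[1], lap[-2]    # crude Neumann boundary treatment
    u = u + k * (lap - (u**3 - u) / eps**2)
```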

The statistical finite element method (StatFEM) is an emerging probabilistic method that allows observations of a physical system to be synthesised with the numerical solution of a PDE intended to describe it, in a coherent statistical framework, to compensate for model error. This work presents a new theoretical analysis of the statistical finite element method demonstrating that it has similar convergence properties to the finite element method on which it is based. Our results constitute a bound on the Wasserstein-2 distance between the ideal prior and posterior and their StatFEM approximations, and show that this distance converges at the same mesh-dependent rate as finite element solutions converge to the true solution. Several numerical examples are presented to demonstrate our theory, including an example which tests the robustness of StatFEM when extended to nonlinear quantities of interest.
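
The Wasserstein-2 distance between Gaussian measures, the metric in which the bound above is stated, has a closed form; the sketch below evaluates it for finite-dimensional Gaussians (the covariances are illustrative values, not taken from the paper).

```python
import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_gaussian(m1, S1, m2, S2):
    """Closed-form Wasserstein-2 distance between N(m1, S1) and N(m2, S2):
    W2^2 = |m1 - m2|^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})."""
    r2 = sqrtm(S2)
    cross = sqrtm(r2 @ S1 @ r2)
    return np.sqrt(np.sum((m1 - m2) ** 2)
                   + np.trace(S1 + S2 - 2.0 * np.real(cross)))

# Example with small illustrative covariance matrices.
m1, m2 = np.zeros(2), np.array([1.0, 0.0])
S1, S2 = np.eye(2), np.diag([2.0, 0.5])
print(wasserstein2_gaussian(m1, S1, m2, S2))
```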

Ensemble methods based on subsampling, such as random forests, are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic to quantify its uncertainty. However, these methods focus on a small subsampling size of each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics, which allows the tree size to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate confidence interval coverage without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.
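
To fix ideas, the sketch below builds a generic incomplete U-statistic by averaging a kernel over randomly drawn size-$k$ subsets; it illustrates the object being analyzed, not the paper's unbiased variance estimator or the double U-statistic formulation.

```python
import numpy as np

def incomplete_u_statistic(data, kernel, k, N, seed=0):
    """Average a kernel of order k over N randomly chosen size-k subsets
    (a complete U-statistic would use all C(n, k) subsets)."""
    rng = np.random.default_rng(seed)
    n = len(data)
    vals = np.array([kernel(data[rng.choice(n, size=k, replace=False)])
                     for _ in range(N)])
    return vals.mean(), vals

# Example: variance as a U-statistic of order 2 with kernel (x - y)^2 / 2.
x = np.random.default_rng(1).standard_normal(1000)
kernel = lambda pair: 0.5 * (pair[0] - pair[1]) ** 2
est, vals = incomplete_u_statistic(x, kernel, k=2, N=5000)
# Caution: vals.var()/N captures only the Monte Carlo part of the variance;
# an honest estimator must also account for overlap between subsets.
print(est)   # close to 1 for standard normal data
```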

Implicit probabilistic models are models defined naturally in terms of a sampling procedure; they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
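
The abstract does not spell out the estimator, so as context the sketch below applies a standard likelihood-free baseline, the method of simulated moments, to a toy implicit model; it is explicitly not the method proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def simulator(theta, n, seed):
    """Toy implicit model: easy to sample from, but its induced
    likelihood has no convenient closed form."""
    rng = np.random.default_rng(seed)
    return theta[0] + theta[1] * rng.standard_normal(n) ** 3

data = simulator(np.array([1.0, 2.0]), 5000, seed=0)  # synthetic "observations"

def stats(z):
    return np.array([z.mean(), z.std(), np.mean(np.abs(z))])

def moment_gap(theta):
    # Fixed seed = common random numbers, making the objective deterministic.
    sim = simulator(theta, 5000, seed=1)
    return np.sum((stats(sim) - stats(data)) ** 2)

theta_hat = minimize(moment_gap, x0=np.array([0.0, 1.0]),
                     method="Nelder-Mead").x
print(theta_hat)   # should land near (1.0, 2.0)
```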
