
We propose a new iterative scheme to compute the numerical solution to an over-determined boundary value problem for a general quasilinear elliptic PDE. The main idea is to repeatedly solve its linearization by using the quasi-reversibility method with a suitable Carleman weight function. The presence of the Carleman weight function allows us to employ a Carleman estimate to prove that the sequence generated by the iterative scheme converges to the desired solution. The iteration converges exponentially fast, without the need for a good initial guess. We apply this method to compute solutions to some general quasilinear elliptic equations and to a large class of first-order Hamilton-Jacobi equations. Numerical results are presented.
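As a minimal illustration of the "repeatedly solve the linearization" idea (without the Carleman weight, the over-determined data, or the convergence machinery of the paper), the sketch below solves the 1D quasilinear two-point boundary value problem $u'' = u^3$ by freezing the nonlinearity at the previous iterate and solving each resulting linear problem with finite differences. The model problem and all names are illustrative assumptions.

```python
import numpy as np

def solve_quasilinear_bvp(f, a, b, ua, ub, n=100, iters=50, tol=1e-10):
    """Picard-type iteration for u'' = f(x, u), u(a) = ua, u(b) = ub.

    Each step freezes the nonlinearity at the previous iterate and solves
    the resulting *linear* two-point BVP by central finite differences,
    mimicking the 'repeatedly solve the linearization' idea.
    """
    x = np.linspace(a, b, n + 1)
    h = x[1] - x[0]
    # second-difference matrix acting on interior nodes
    main = -2.0 * np.ones(n - 1)
    off = np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    u = np.linspace(ua, ub, n + 1)          # initial guess: linear profile
    for _ in range(iters):
        rhs = f(x[1:-1], u[1:-1]).astype(float)
        rhs[0] -= ua / h**2                  # fold boundary values into rhs
        rhs[-1] -= ub / h**2
        u_new = u.copy()
        u_new[1:-1] = np.linalg.solve(A, rhs)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return x, u

x, u = solve_quasilinear_bvp(lambda x, u: u**3, 0.0, 1.0, 0.0, 0.5)
```

Because the nonlinearity here is mild, the fixed-point map is a contraction and the iterates converge quickly; the paper's Carleman-weighted scheme is designed precisely to obtain such convergence for much more general equations and data.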


In this paper, we propose a novel sequential data-driven method for dealing with equilibrium-based chemical simulations, which can be seen as a specific machine-learning approach called active learning. The underlying idea is to model the function to estimate as a sample of a Gaussian process, which allows us to compute a global uncertainty for the function estimate. Thanks to this uncertainty estimate, and with almost no parameters to tune, the proposed method sequentially chooses the most relevant input points at which the function has to be evaluated to build a surrogate model. Hence, the number of evaluations of the function is dramatically reduced. Our active learning method is validated through numerical experiments and applied to a complex chemical system commonly used in geoscience.
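The loop described above — fit a Gaussian process to the evaluations so far, then query the input where the posterior uncertainty is largest — can be sketched in a few lines of NumPy. The RBF kernel, its length-scale, and the toy target function below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel on 1D inputs, prior variance 1."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_tr, y_tr, x_te, ell=0.3, jitter=1e-6):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = rbf(x_tr, x_tr, ell) + jitter * np.eye(x_tr.size)
    Ks = rbf(x_tr, x_te, ell)
    L = np.linalg.cholesky(K)
    mean = Ks.T @ np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v * v, axis=0)      # prior variance is 1 for this kernel
    return mean, np.maximum(var, 0.0)

f = lambda x: np.sin(3.0 * x)              # stand-in for the expensive simulation
grid = np.linspace(0.0, 1.0, 200)          # candidate input points
x_tr, y_tr = np.array([0.0, 1.0]), f(np.array([0.0, 1.0]))
for _ in range(8):                          # active-learning loop
    _, var = gp_posterior(x_tr, y_tr, grid)
    x_next = grid[np.argmax(var)]           # evaluate where uncertainty peaks
    x_tr = np.append(x_tr, x_next)
    y_tr = np.append(y_tr, f(x_next))
mean, var = gp_posterior(x_tr, y_tr, grid)
```

After only ten evaluations the surrogate mean tracks the target closely on the whole interval, which is the sense in which the number of calls to the expensive model is limited.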

We develop a weak-form sparse identification method for interacting particle systems (IPS) with the primary goals of reducing computational complexity for large particle number $N$ and offering robustness to either intrinsic or extrinsic noise. In particular, we use concepts from mean-field theory of IPS in combination with the weak-form sparse identification of nonlinear dynamics algorithm (WSINDy) to provide a fast and reliable system identification scheme for recovering the governing stochastic differential equations for an IPS when the number of particles per experiment $N$ is on the order of several thousand and the number of experiments $M$ is less than 100. This is in contrast to existing work showing that system identification for $N$ less than 100 and $M$ on the order of several thousand is feasible using strong-form methods. We prove that under some standard regularity assumptions the scheme converges with rate $\mathcal{O}(N^{-1/2})$ in the ordinary least squares setting and we demonstrate the convergence rate numerically on several systems in one and two spatial dimensions. Our examples include a canonical problem from homogenization theory (as a first step towards learning coarse-grained models), the dynamics of an attractive-repulsive swarm, and the IPS description of the parabolic-elliptic Keller-Segel model for chemotaxis.
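A toy version of the weak-form idea, for a single scalar ODE rather than an interacting particle system: multiply the dynamics by compactly supported test functions, integrate by parts so that noisy data is never differentiated, and apply sequentially thresholded least squares to a candidate library. The library, test functions, and threshold below are illustrative choices, not the WSINDy defaults.

```python
import numpy as np

# Recover x' = -2x from a noisy trajectory via the weak form.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 401)
dt = t[1] - t[0]
x = np.exp(-2.0 * t) + 1e-3 * rng.standard_normal(t.size)  # noisy data

# candidate library Theta(x) = [1, x, x^2]
Theta = np.column_stack([np.ones_like(x), x, x**2])

# compactly supported C^1 test functions phi(t) = (1 - s^2)^2, s = (t - c)/r
centers = np.linspace(0.15, 0.85, 15)
r = 0.12
G_rows, b = [], []
for c in centers:
    s = (t - c) / r
    inside = np.abs(s) < 1.0
    phi = np.where(inside, (1.0 - s**2) ** 2, 0.0)
    dphi = np.where(inside, -4.0 * s * (1.0 - s**2) / r, 0.0)
    # integration by parts: int x' phi dt = -int x phi' dt (phi has compact support)
    b.append(-np.sum(x * dphi) * dt)
    G_rows.append(np.sum(Theta * phi[:, None], axis=0) * dt)
G, b = np.array(G_rows), np.array(b)

# sequentially thresholded least squares on G w = b
coef = np.linalg.lstsq(G, b, rcond=None)[0]
for _ in range(5):
    coef[np.abs(coef) < 0.2] = 0.0
    keep = coef != 0.0
    if keep.any():
        coef[keep] = np.linalg.lstsq(G[:, keep], b, rcond=None)[0]
```

The recovered coefficient vector is sparse, with only the linear term active, even though the derivative of the noisy signal was never formed — the robustness the weak form is designed to provide.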

We revisit the classical problem of nonparametric density estimation, but impose local differential privacy constraints. Under such constraints, the original multivariate data $X_1,\ldots,X_n \in \mathbb{R}^d$ cannot be directly observed, and all estimators are functions of the randomised output of a suitable privacy mechanism. The statistician is free to choose the form of the privacy mechanism, and in this work we propose to add Laplace distributed noise to a discretisation of the location of a vector $X_i$. Based on these randomised data, we design a novel estimator of the density function, which can be viewed as a privatised version of the well-studied histogram density estimator. Our theoretical results include universal pointwise consistency and strong universal $L_1$-consistency. In addition, a convergence rate over classes of Lipschitz functions is derived, which is complemented by a matching minimax lower bound. We illustrate the trade-off between data utility and privacy by means of a small simulation study.
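A minimal sketch of the general recipe — discretise each observation, release a Laplace-perturbed membership vector, and aggregate — under simplifying assumptions (data on $[0,1]^d$, equal bins); it illustrates the idea but is not the paper's exact mechanism or tuning.

```python
import numpy as np

def privatised_histogram(X, m, eps, rng):
    """Laplace-perturbed histogram density estimate on [0,1]^d.

    Each sample's bin-membership (one-hot) vector has L1 sensitivity 2,
    so adding Laplace noise of scale 2/eps to every entry gives an
    eps-locally-differentially-private release; only these noisy
    vectors are aggregated by the statistician.
    """
    n, d = X.shape
    nbins = m ** d
    idx = np.clip((X * m).astype(int), 0, m - 1)
    flat = np.ravel_multi_index(idx.T, (m,) * d)
    onehot = np.zeros((n, nbins))
    onehot[np.arange(n), flat] = 1.0
    noisy = onehot + rng.laplace(scale=2.0 / eps, size=onehot.shape)
    # average the randomised responses; 1/bin-volume rescales to a density
    return noisy.mean(axis=0) * m ** d

rng = np.random.default_rng(1)
X = rng.uniform(size=(20000, 1))           # true density: uniform on [0,1]
dens = privatised_histogram(X, m=4, eps=2.0, rng=rng)
```

The per-sample noise is large, but it averages out across $n$ responses, which is the utility-versus-privacy trade-off the simulation study in the paper quantifies.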

The time domain linear sampling method (TD-LSM) solves inverse scattering problems using time domain data by creating an indicator function for the support of the unknown scatterer. It involves only solving a linear integral equation, called the near-field equation, for each sampling point probing the domain where the scatterer is located. To date, the method has been applied to the acoustic wave equation and has been tested for several different types of scatterers, e.g. sound-hard, impedance, and penetrable, as well as for waveguides. In this paper, we extend the TD-LSM to the time-dependent Maxwell's system with impedance boundary conditions; a similar analysis handles the case of a perfectly electrically conducting (PEC) body. We provide an analysis that supports the use of the TD-LSM for this problem, together with preliminary numerical tests of the algorithm. Our analysis relies on the Laplace transform approach previously used for the acoustic wave equation. This is the first application of the TD-LSM to electromagnetism.

We extend and analyze the energy-based discontinuous Galerkin method for second order wave equations on staggered and structured meshes. By combining spatial staggering with local time-stepping near boundaries, the method overcomes the typical numerical stiffness associated with high order piecewise polynomial approximations. In one space dimension with periodic boundary conditions and suitably chosen numerical fluxes, we prove bounds on the spatial operators that establish stability for CFL numbers $c \frac {\Delta t}{h} < C$ independent of order when stability-enhanced explicit time-stepping schemes of matching order are used. For problems on bounded domains and in higher dimensions we demonstrate numerically that one can march explicitly with large time steps at high order temporal and spatial accuracy.
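The CFL-type stability condition $c\,\Delta t / h < C$ for explicit time stepping can be seen in the simplest possible setting. The sketch below uses a standard second-order leapfrog finite-difference scheme for the 1D wave equation — not the paper's energy-based DG method — purely to illustrate what happens on either side of the stability limit.

```python
import numpy as np

def leapfrog_wave(cfl, n=200, steps=400):
    """Leapfrog FD scheme for u_tt = u_xx on [0,1] with periodic BCs.

    Returns max |u| after `steps` time steps of size dt = cfl * h,
    starting from u(x,0) = sin(2 pi x), u_t(x,0) = 0.  The scheme is
    stable for cfl <= 1 and blows up (roundoff-seeded high modes are
    amplified) for cfl > 1.
    """
    h = 1.0 / n
    x = np.arange(n) * h
    u_prev = np.sin(2.0 * np.pi * x)
    # Taylor start: u^1 = u^0 + (dt^2/2) u_xx since u_t(x,0) = 0
    lap = np.roll(u_prev, -1) - 2.0 * u_prev + np.roll(u_prev, 1)
    u = u_prev + 0.5 * cfl**2 * lap
    for _ in range(steps):
        lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
        u_next = 2.0 * u - u_prev + cfl**2 * lap
        u_prev, u = u, u_next
    return np.max(np.abs(u))

stable = leapfrog_wave(0.9)      # amplitude stays O(1)
unstable = leapfrog_wave(1.05)   # highest modes grow geometrically
```

The paper's point is that spatial staggering plus local time-stepping near boundaries lets high-order DG schemes march at CFL numbers bounded independently of the polynomial order, rather than suffering the order-dependent shrinkage typical of high-order explicit methods.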

In this paper, we study the statistical limits of deep learning techniques for solving elliptic partial differential equations (PDEs) from random samples, using the Deep Ritz Method (DRM) and Physics-Informed Neural Networks (PINNs). To simplify the problem, we focus on a prototype elliptic PDE: the Schr\"odinger equation on a hypercube with zero Dirichlet boundary condition, which has wide application in quantum-mechanical systems. We establish upper and lower bounds for both methods, which improve upon concurrently developed upper bounds for this problem via a fast-rate generalization bound. We discover that the current Deep Ritz Method is sub-optimal and propose a modified version of it. We also prove that PINNs and the modified version of the DRM can achieve minimax-optimal bounds over Sobolev spaces. Empirically, following recent work showing that deep-model accuracy improves with growing training sets according to a power law, we supply computational experiments demonstrating a similar dimension-dependent power law for deep PDE solvers.
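The statistical question arises because the DRM loss is a Monte-Carlo estimate of a variational energy from random samples. A sketch of that estimate in one dimension, for the Poisson problem $-u'' = f$ with zero Dirichlet data (a special case of the Schrödinger setting, with a fixed candidate function in place of a neural network — an illustrative simplification):

```python
import numpy as np

def ritz_energy(u, du, f, n=200000, rng=None):
    """Monte-Carlo estimate of the Ritz energy
        E(u) = int_0^1 ( 0.5 * u'(x)^2 - f(x) * u(x) ) dx,
    whose minimiser over H^1_0(0,1) solves -u'' = f with zero Dirichlet
    data.  In the DRM, u is a neural network and this sampled loss is
    minimised by stochastic gradient descent.
    """
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(size=n)
    return np.mean(0.5 * du(x) ** 2 - f(x) * u(x))

# u = sin(pi x) is the exact minimiser for f = pi^2 sin(pi x),
# with exact energy E = -pi^2 / 4
u = lambda x: np.sin(np.pi * x)
du = lambda x: np.pi * np.cos(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
E = ritz_energy(u, du, f)
```

The sampling error in this energy estimate is exactly the quantity the paper's generalization bounds control, and its interaction with network approximation error is what makes the unmodified DRM rate sub-optimal.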

This paper deals with the kernel-based approximation of a multivariate periodic function by interpolation at the points of an integration lattice -- a setting that, as pointed out by Zeng, Leung, Hickernell (MCQMC2004, 2006) and Zeng, Kritzer, Hickernell (Constr. Approx., 2009), allows fast evaluation by fast Fourier transform, so avoiding the need for a linear solver. The main contribution of the paper is the application to the approximation problem for uncertainty quantification of elliptic partial differential equations, with the diffusion coefficient given by a random field that is periodic in the stochastic variables, in the model proposed recently by Kaarnioja, Kuo, Sloan (SIAM J. Numer. Anal., 2020). The paper gives a full error analysis, and full details of the construction of lattices needed to ensure a good (but inevitably not optimal) rate of convergence and an error bound independent of dimension. Numerical experiments support the theory.
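The reason lattice-point interpolation avoids a linear solver can already be seen in one dimension, where the kernel matrix on an equispaced grid (the 1D case of a rank-1 lattice) is circulant and hence diagonalized by the FFT. The kernel below is an illustrative positive-definite periodic kernel, not the reproducing kernel used in the paper.

```python
import numpy as np

def periodic_kernel_interpolant(f_vals, kappa_vals):
    """Solve the interpolation system K c = f, K_ij = kappa(x_i - x_j),
    on an equispaced periodic grid.  K is circulant, so the solve is a
    single FFT division instead of a dense O(n^3) factorization."""
    c_hat = np.fft.fft(f_vals) / np.fft.fft(kappa_vals)
    return np.real(np.fft.ifft(c_hat))

n = 64
x = np.arange(n) / n
# illustrative periodic kernel with positive Fourier coefficients
freqs = np.fft.fftfreq(n, d=1.0 / n)
kappa_hat = 1.0 / (1.0 + freqs**2)
kappa = np.real(np.fft.ifft(kappa_hat)) * n   # kernel values on grid offsets
f = np.sin(2 * np.pi * x) + 0.5 * np.cos(4 * np.pi * x)
c = periodic_kernel_interpolant(f, kappa)

# check: evaluating s(x_i) = sum_j c_j kappa(x_i - x_j) reproduces f
K = kappa[(np.arange(n)[:, None] - np.arange(n)[None, :]) % n]
s = K @ c
```

In higher dimensions the same diagonalization holds on a rank-1 lattice, which is what makes kernel interpolation scalable for the parametric PDE application.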

The abundance of data has given machine learning considerable momentum in natural sciences and engineering. However, the modeling of simulated physical processes remains difficult. A key problem is the correct handling of geometric boundaries. While triangularized geometric boundaries are very common in engineering applications, they are notoriously difficult to model by machine learning approaches due to their heterogeneity with respect to size and orientation. In this work, we introduce Boundary Graph Neural Networks (BGNNs), which dynamically modify graph structures to address boundary conditions. Boundary graph structures are constructed by modifying edges, augmenting node features, and dynamically inserting virtual nodes. The new BGNNs are tested on complex 3D granular flow processes in hoppers and rotating drums, which are standard components of industrial machinery. Using precise simulations obtained by an expensive and complex discrete element method, BGNNs are evaluated in terms of computational efficiency as well as prediction accuracy of particle flows and mixing entropies. Even when complex boundaries are present, BGNNs are able to accurately reproduce 3D granular flows within simulation uncertainties over hundreds of thousands of simulation timesteps; most notably, particles remain entirely within the geometric objects without the use of handcrafted conditions or restrictions.
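The virtual-node construction can be illustrated in a stripped-down 2D setting with a single flat wall: particles close to the boundary get a flagged virtual node at their projection onto the wall, plus an edge to it. This is a sketch of the idea only; the paper's construction for triangularized 3D boundaries is considerably more involved.

```python
import numpy as np

def boundary_graph(pos, wall_y=0.0, cutoff=0.5):
    """BGNN-style augmentation for a planar wall y = wall_y.

    For every particle closer than `cutoff` to the wall, insert a
    virtual node at its orthogonal projection onto the wall and add an
    edge (particle -> virtual node).  A node flag marks virtual
    boundary nodes so the message-passing network can treat them
    differently from particles.
    """
    dist = pos[:, 1] - wall_y
    near = np.where(dist < cutoff)[0]
    virt = pos[near].copy()
    virt[:, 1] = wall_y                      # projection onto the wall
    nodes = np.vstack([pos, virt])
    flags = np.concatenate([np.zeros(len(pos)), np.ones(len(virt))])
    edges = np.stack([near, len(pos) + np.arange(len(near))], axis=1)
    return nodes, flags, edges

pos = np.array([[0.0, 0.2], [1.0, 1.5], [2.0, 0.4]])
nodes, flags, edges = boundary_graph(pos)
```

Because the virtual nodes are recomputed at every step from the current particle positions, the graph adapts dynamically as the flow evolves, which is the "dynamically modify graph structures" aspect of the method.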

We develop an algorithm that computes strongly continuous semigroups on infinite-dimensional Hilbert spaces with explicit error control. Given a generator $A$, a time $t>0$, an arbitrary initial vector $u_0$ and an error tolerance $\epsilon>0$, the algorithm computes $\exp(tA)u_0$ with error bounded by $\epsilon$. The algorithm is based on a combination of a regularized functional calculus, suitable contour quadrature rules, and the adaptive computation of resolvents in infinite dimensions. As a particular case, we show that it is possible, even when only allowing pointwise evaluation of coefficients, to compute, with error control, semigroups on the unbounded domain $L^2(\mathbb{R}^d)$ that are generated by partial differential operators with polynomially bounded coefficients of locally bounded total variation. For analytic semigroups (and more general Laplace transform inversion), we provide a quadrature rule whose error decreases like $\exp(-cN/\log(N))$ for $N$ quadrature points, that remains stable as $N\rightarrow\infty$, and which is also suitable for infinite-dimensional operators. Numerical examples are given, including: Schr\"odinger and wave equations on the aperiodic Ammann--Beenker tiling, complex perturbed fractional diffusion equations on $L^2(\mathbb{R})$, and damped Euler--Bernoulli beam equations.
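For the analytic-semigroup case, the contour-quadrature idea reduces to inverting a Laplace transform: $\exp(tA)u_0 = \frac{1}{2\pi i}\oint e^{zt}(zI-A)^{-1}u_0\,dz$. The sketch below applies the cotangent (Talbot-type) contour of Trefethen and Weideman with the midpoint rule, using a small diagonal matrix as a stand-in for the operator; the paper's adaptive, error-controlled infinite-dimensional resolvent computations are far more general.

```python
import numpy as np

def semigroup_talbot(A, u0, t, N=32):
    """Approximate exp(t*A) u0 for a sectorial generator A via
       u(t) = (1/(2*pi*i)) ∮ e^{z t} (z I - A)^{-1} u0 dz
    on the Trefethen-Weideman cotangent contour, midpoint rule in theta."""
    theta = -np.pi + (np.arange(N) + 0.5) * (2.0 * np.pi / N)
    z = (N / t) * (-0.6122 + 0.5017 * theta / np.tan(0.6407 * theta)
                   + 0.2645j * theta)
    dz = (N / t) * (0.5017 / np.tan(0.6407 * theta)
                    - 0.5017 * 0.6407 * theta / np.sin(0.6407 * theta) ** 2
                    + 0.2645j)
    I = np.eye(len(u0))
    acc = np.zeros(len(u0), dtype=complex)
    for zk, dzk in zip(z, dz):
        acc += np.exp(zk * t) * np.linalg.solve(zk * I - A,
                                                u0.astype(complex)) * dzk
    # midpoint rule with h = 2*pi/N, then divide by 2*pi*i
    return np.real(acc / (1j * N))

A = np.diag([-1.0, -2.0])                    # toy sectorial generator
u = semigroup_talbot(A, np.array([1.0, 1.0]), t=1.0)  # ~ [exp(-1), exp(-2)]
```

Each quadrature node costs one resolvent solve, and the nodes are independent, so the scheme parallelizes trivially; the $\exp(-cN/\log N)$ rule in the paper is a refinement of this kind of quadrature with provable stability as $N \to \infty$.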

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can be trained by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
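The forward pass of such a model is just an ODE solve of the parameterized vector field. A minimal sketch with a fixed-step RK4 solver, using a linear vector field in place of a learned network so the answer is checkable (a rotation field, whose flow over time $\pi/2$ maps $(1,0)$ to $(0,1)$); in practice the weights would be trained by backpropagating through — or via the adjoint of — the solver.

```python
import numpy as np

def odeint_rk4(f, h0, t0, t1, steps=100):
    """Fixed-step RK4 'black-box' solver used as the forward pass of a
    continuous-depth model: the hidden state evolves by dh/dt = f(h)."""
    h, dt = h0.astype(float), (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(h)
        k2 = f(h + 0.5 * dt * k1)
        k3 = f(h + 0.5 * dt * k2)
        k4 = f(h + dt * k3)
        h = h + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return h

# a one-layer linear 'neural' vector field; weights would be learned
W = np.array([[0.0, -1.0], [1.0, 0.0]])
f = lambda h: W @ h
h1 = odeint_rk4(f, np.array([1.0, 0.0]), 0.0, np.pi / 2)
```

Only the current state is stored during the solve, which is the constant-memory property; swapping in an adaptive solver changes accuracy and cost without changing the model.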
