
Solving PDEs with machine learning techniques has become a popular alternative to conventional methods. In this context, neural networks (NNs) are among the most commonly used machine learning tools, and in such models the choice of an appropriate loss function is critical. In general, the main goal is to guarantee that minimizing the loss during training translates to minimizing the error in the solution at the same rate. In this work, we focus on the time-harmonic Maxwell's equations, whose weak formulation takes H(curl) as the space of test functions. We propose an NN in which the loss function is a computable approximation of the dual norm of the weak-form PDE residual. To that end, we employ the Helmholtz decomposition of the space H(curl) and construct an orthonormal basis for this space in two and three spatial dimensions. Here, we use the Discrete Sine/Cosine Transform to accurately and efficiently compute the discrete version of our proposed loss function. Moreover, the numerical examples show a high correlation between the proposed loss function and the H(curl)-norm of the error, even in problems with low-regularity solutions.
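The abstract above works in H(curl) in two and three dimensions; as a much simpler illustration of the underlying idea, the sketch below uses the discrete sine transform to approximate an H^{-1}-type dual norm of a residual on an interval. The function name `dual_norm_loss` and all parameter choices are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.fft import dst

def dual_norm_loss(residual, length=1.0):
    """Illustrative 1-D analogue: approximate the H^{-1}(0, L) dual norm of a
    residual sampled at the interior nodes of a uniform grid.  The sine modes
    phi_k(x) = sin(k*pi*x/L) are L2- and H1-orthogonal, so the dual norm
    decouples mode by mode."""
    n = residual.size
    h = length / (n + 1)
    # DST-I returns 2 * sum_j r(x_j) sin(k*pi*x_j/L); the factor h/2 turns the
    # sum into an approximation of the L2 inner product <r, phi_k>.
    inner = 0.5 * h * dst(residual, type=1)
    k = np.arange(1, n + 1)
    h1_norm_sq = (1.0 + (k * np.pi / length) ** 2) * (length / 2.0)  # ||phi_k||_{H^1}^2
    return float(np.sum(inner ** 2 / h1_norm_sq))

# Synthetic residual on (0, 1): the loss weights low-frequency content more heavily.
x = np.linspace(0.0, 1.0, 130)[1:-1]
r = np.sin(np.pi * x) + 0.3 * np.sin(5 * np.pi * x)
print(dual_norm_loss(r))
```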

Related content

In AI, a loss function is also called a distance function or metric function. The "distance" here is abstract and represents the discrepancy between the true data and the predictions. A loss function measures the degree of inconsistency between a model's prediction f(x) and the true value Y. It is a non-negative real-valued function, usually written L(Y, f(x)); the smaller the loss, the more robust the model. The loss function is the core part of the empirical risk functional and an important component of the structural risk functional.
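As a minimal concrete instance of L(Y, f(x)), the snippet below implements the mean squared error; the function name `mse_loss` is purely illustrative.

```python
import numpy as np

def mse_loss(y_true, y_pred):
    """L(Y, f(x)) as the mean squared error: a non-negative value that is
    zero exactly when the predictions match the observations."""
    return float(np.mean((y_true - y_pred) ** 2))

y = np.array([1.0, 2.0, 3.0])
print(mse_loss(y, np.array([1.1, 1.9, 3.2])))   # small discrepancy -> small loss
print(mse_loss(y, np.array([3.0, 0.0, 5.0])))   # large discrepancy -> large loss
```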

Conductivity reconstruction in an inverse eddy current problem is considered in the present paper. Using electric field measurements on part of the domain boundary, we formulate the reconstruction problem as a constrained optimization problem with total variation regularization. Existence and stability are proved for the solution of the optimization problem. The finite element method is employed to discretize the optimization problem, and the gradient Lipschitz property of the objective functional is established for the discrete problem. We propose the alternating direction method of multipliers to solve the discrete problem. Based on the gradient Lipschitz property, we prove convergence by extending the admissible set to the whole finite element space. Finally, we present numerical experiments to illustrate the efficiency of the proposed methods.
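The abstract combines total variation regularization with the alternating direction method of multipliers; the sketch below shows a generic ADMM iteration for a TV-regularized least squares problem in one dimension, not the paper's discretized eddy-current functional. The helper names `admm_tv` and `soft_threshold` and all parameter values are assumptions for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_tv(A, b, lam=0.1, rho=1.0, iters=200):
    """Generic ADMM for min_x 0.5*||A x - b||^2 + lam*||D x||_1, with D the
    1-D forward-difference operator (an anisotropic total-variation penalty)."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)              # (n-1) x n difference matrix
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    M = A.T @ A + rho * D.T @ D                 # normal-equation matrix for the x-update
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + rho * D.T @ (z - u))
        z = soft_threshold(D @ x + u, lam / rho)
        u = u + D @ x - z
    return x

# Recover a piecewise-constant profile from noisy indirect measurements.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(30), np.ones(40), 0.5 * np.ones(30)])
A = rng.standard_normal((80, 100))
b = A @ x_true + 0.05 * rng.standard_normal(80)
x_rec = admm_tv(A, b, lam=0.5)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```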

We consider the problem of inference for projection parameters in linear regression with increasing dimensions. This problem has been studied under a variety of assumptions in the literature. The classical asymptotic normality result for the least squares estimator of the projection parameter only holds when the dimension $d$ of the covariates is of smaller order than $n^{1/2}$, where $n$ is the sample size. Traditional sandwich estimator-based Wald intervals are asymptotically valid in this regime. In this work, we propose a bias correction for the least squares estimator and prove the asymptotic normality of the resulting debiased estimator as long as $d = o(n^{2/3})$, with an explicit bound on the rate of convergence to normality. We leverage recent methods of statistical inference that do not require an estimator of the variance to perform asymptotically valid statistical inference. We provide a discussion of how our techniques can be generalized to increase the allowable range of $d$ even further.
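For context, the sketch below computes the classical sandwich-based Wald interval for a single projection parameter, which the abstract describes as valid only when d grows slower than n^{1/2}; it does not implement the paper's bias correction or its variance-free inference. The helper `sandwich_wald_ci` is a hypothetical name.

```python
import numpy as np
from scipy import stats

def sandwich_wald_ci(X, y, j=0, level=0.95):
    """Heteroscedasticity-robust (sandwich) Wald interval for the j-th
    projection parameter of the least squares fit."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta
    meat = X.T @ (X * (resid ** 2)[:, None])    # sum_i e_i^2 x_i x_i^T
    cov = XtX_inv @ meat @ XtX_inv              # sandwich covariance estimate
    se = np.sqrt(cov[j, j])
    z = stats.norm.ppf(0.5 + level / 2.0)
    return beta[j] - z * se, beta[j] + z * se

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
y = X @ (np.arange(20, dtype=float) / 20.0) + rng.standard_normal(500)
print(sandwich_wald_ci(X, y, j=3))
```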

Profile likelihoods are rarely used in geostatistical models due to the computational burden imposed by repeated decompositions of large variance matrices. Accounting for uncertainty in covariance parameters can be highly consequential in geostatistical models because some covariance parameters are poorly identified; the problem is severe enough that the differentiability parameter of the Mat\'ern correlation function is typically treated as fixed. The problem is compounded in anisotropic spatial models, which introduce two additional parameters. In this paper, we make the following contributions: (1) we develop a methodology for profile likelihoods in Gaussian spatial models with the Mat\'ern family of correlation functions, including anisotropic models; it adopts a novel reparametrization for generating representative points and uses GPUs for parallel computation of profile likelihoods in the software implementation; (2) we show that the profile likelihood of the Mat\'ern shape parameter is often quite flat but still identifiable, and it can usually rule out very small values; (3) simulation studies and applications to real data show that profile-based confidence intervals for covariance and regression parameters have superior coverage to traditional Wald-type confidence intervals.
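A minimal single-CPU sketch of profiling over the Mat\'ern shape parameter, assuming scikit-learn is available: for each fixed nu the remaining kernel hyperparameters are re-optimized and the maximized log marginal likelihood is recorded. This only illustrates the profiling idea; it is not the paper's reparametrization or its GPU-parallel implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, ConstantKernel, WhiteKernel

def profile_loglik_nu(coords, obs, nu_grid):
    """Profile log-likelihood of the Matern shape parameter: for each fixed nu,
    the variance, length scale and nugget are re-optimized and the maximized
    log marginal likelihood is recorded."""
    profile = []
    for nu in nu_grid:
        kernel = ConstantKernel(1.0) * Matern(length_scale=1.0, nu=nu) + WhiteKernel(0.1)
        gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=2)
        gpr.fit(coords, obs)
        profile.append(gpr.log_marginal_likelihood_value_)
    return np.array(profile)

rng = np.random.default_rng(2)
coords = rng.uniform(0, 1, size=(80, 2))             # spatial locations
obs = np.sin(3 * coords[:, 0]) + 0.1 * rng.standard_normal(80)
nu_grid = [0.5, 1.0, 1.5, 2.5]
print(dict(zip(nu_grid, profile_loglik_nu(coords, obs, nu_grid))))
```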

We study lower bounds for the problem of approximating a one dimensional distribution given (noisy) measurements of its moments. We show that there are distributions on $[-1,1]$ that cannot be approximated to accuracy $\epsilon$ in Wasserstein-1 distance even if we know \emph{all} of their moments to multiplicative accuracy $(1\pm2^{-\Omega(1/\epsilon)})$; this result matches an upper bound of Kong and Valiant [Annals of Statistics, 2017]. To obtain our result, we provide a hard instance involving distributions induced by the eigenvalue spectra of carefully constructed graph adjacency matrices. Efficiently approximating such spectra in Wasserstein-1 distance is a well-studied algorithmic problem, and a recent result of Cohen-Steiner et al. [KDD 2018] gives a method based on accurately approximating spectral moments using $2^{O(1/\epsilon)}$ random walks initiated at uniformly random nodes in the graph. As a strengthening of our main result, we show that improving the dependence on $1/\epsilon$ in this result would require a new algorithmic approach. Specifically, no algorithm can compute an $\epsilon$-accurate approximation to the spectrum of a normalized graph adjacency matrix with constant probability, even when given the transcript of $2^{\Omega(1/\epsilon)}$ random walks of length $2^{\Omega(1/\epsilon)}$ started at random nodes.
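To illustrate the random-walk approach referenced above, the sketch below estimates spectral moments of the random-walk matrix from the return frequencies of short walks started at uniformly random nodes; the function name and all parameters are illustrative, and this is not the full algorithm of Cohen-Steiner et al.

```python
import numpy as np

def spectral_moments_random_walk(adj, max_k=6, n_walks=20000, seed=0):
    """Estimate the spectral moments (1/n) * tr(P^k) of the random-walk matrix
    P = D^{-1} A from the fraction of length-k walks that return to their start;
    P is similar to the symmetric normalized adjacency, so the moments agree."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    neighbors = [np.flatnonzero(adj[i]) for i in range(n)]
    returns = np.zeros(max_k)
    for _ in range(n_walks):
        start = rng.integers(n)
        node = start
        for k in range(1, max_k + 1):
            node = rng.choice(neighbors[node])
            if node == start:
                returns[k - 1] += 1
    return returns / n_walks

# Ring graph: its normalized-adjacency spectrum is cos(2*pi*j/n).
n = 50
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
print(spectral_moments_random_walk(adj))
```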

We propose an accurate and energy-stable parametric finite element method for solving the sharp-interface continuum model of solid-state dewetting in three-dimensional space. The model describes the motion of the film/vapor interface with contact line migration and is governed by the surface diffusion equation with proper boundary conditions at the contact line. We present a new weak formulation for the problem, in which the interface and its contact line evolve simultaneously. By using piecewise linear elements in space and the backward Euler method in time, we then discretize the weak formulation to obtain a fully discrete parametric finite element approximation. The resulting numerical method is shown to be well-posed and unconditionally energy-stable. Furthermore, the numerical method is extended to the sharp-interface model of solid-state dewetting with anisotropic surface energies in Riemannian metric form. Numerical results are reported to show the convergence and efficiency of the proposed numerical method as well as the anisotropic effects on the morphological evolution of thin films in solid-state dewetting.

To integrate large systems of nonlinear differential equations in time, we consider a variant of nonlinear waveform relaxation (also known as dynamic iteration or Picard-Lindel\"of iteration), where at each iteration a linear inhomogeneous system of differential equations has to be solved. This is done by the exponential block Krylov subspace (EBK) method. Thus, we have an inner-outer iterative method, where iterative approximations are determined over a certain time interval, with no time stepping involved. This approach has recently been shown to be efficient as a time-parallel integrator within the PARAEXP framework. In this paper, the convergence behavior of this method is assessed theoretically and practically. We examine the efficiency of the method by testing it on the nonlinear Burgers and Liouville-Bratu-Gelfand equations and comparing its performance with that of conventional time-stepping integrators.
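A minimal sketch of the outer waveform relaxation iteration, assuming a dense matrix exponential in place of the EBK solver used in the paper: each sweep solves the linear inhomogeneous system obtained by freezing the nonlinearity at the previous iterate. The function name, test problem, and all parameters are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def waveform_relaxation(A, g, u0, T=1.0, n_steps=50, n_sweeps=8):
    """Picard-type waveform relaxation for u' = A u + g(u): each sweep solves
    the *linear* inhomogeneous system u' = A u + g(u_prev(t)) over the whole
    time interval via the variation-of-constants formula."""
    n = A.shape[0]
    h = T / n_steps
    U = np.tile(u0, (n_steps + 1, 1))           # initial waveform: constant in time
    for _ in range(n_sweeps):
        U_new = np.empty_like(U)
        U_new[0] = u0
        for k in range(n_steps):
            b = g(U[k])                          # source frozen from the previous sweep
            B = np.zeros((n + 1, n + 1))
            B[:n, :n] = A
            B[:n, n] = b
            E = expm(h * B)                      # exact step for u' = A u + b (b constant)
            U_new[k + 1] = E[:n, :n] @ U_new[k] + E[:n, n]
        U = U_new
    return U

# Small Burgers-like test: a discrete diffusion operator plus a quadratic nonlinearity.
n = 20
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
g = lambda u: -0.5 * u ** 2
U = waveform_relaxation(A, g, u0=np.ones(n))
print(U[-1][:5])
```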

Recently, many studies have shed light on the high adaptivity of deep neural network methods in nonparametric regression models, and their superior performance has been established for various function classes. Motivated by this development, we study a deep neural network method to estimate the drift coefficient of a multi-dimensional diffusion process from discrete observations. We derive generalization error bounds for least squares estimates based on deep neural networks and show that they achieve the minimax rate of convergence up to a logarithmic factor when the drift function has a compositional structure.
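A toy one-dimensional version of the estimation procedure, assuming PyTorch is available: the drift targets are difference quotients of the discretely observed path, and a small network is fit by least squares. The architecture and training choices are illustrative assumptions, and the paper itself treats multi-dimensional diffusions.

```python
import numpy as np
import torch
from torch import nn

# Simulate a 1-D diffusion dX = b(X) dt + dW with Euler-Maruyama.
rng = np.random.default_rng(3)
dt, n = 0.01, 20000
drift_true = lambda x: -x + np.sin(2 * x)
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] + drift_true(x[i]) * dt + np.sqrt(dt) * rng.standard_normal()

# Least squares targets: difference quotients approximate the drift at X_t.
X = torch.tensor(x[:-1, None], dtype=torch.float32)
Y = torch.tensor(((x[1:] - x[:-1]) / dt)[:, None], dtype=torch.float32)

net = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), Y)    # empirical least squares risk
    loss.backward()
    opt.step()

grid = torch.linspace(-2, 2, 5).unsqueeze(1)
print(net(grid).detach().numpy().ravel())        # estimated drift vs. -x + sin(2x)
```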

This article proposes an efficient numerical method for solving nonlinear partial differential equations (PDEs) based on sparse Gaussian processes (SGPs). Gaussian processes (GPs) have been extensively studied for solving PDEs by formulating the problem of finding a reproducing kernel Hilbert space (RKHS) to approximate a PDE solution. The approximated solution lies in the span of base functions generated by evaluating derivatives of different orders of kernels at sample points. However, the RKHS specified by GPs can impose a heavy computational burden due to the cubic cost of the matrix inversion. Therefore, we conjecture that a solution exists on a ``condensed" subspace that can achieve similar approximation performance, and we propose an SGP-based method to reformulate the optimization problem in the ``condensed" subspace. This significantly reduces the computational burden while retaining desirable accuracy. The paper rigorously formulates this problem and provides error analysis and numerical experiments to demonstrate the effectiveness of this method. The numerical experiments show that the SGP method uses fewer than half the uniform samples as inducing points and achieves accuracy comparable to the GP method using the same number of uniform samples, resulting in a significant reduction in computational cost. Our contributions include formulating the nonlinear PDE problem as an optimization problem on a ``condensed" subspace of the RKHS using SGPs, as well as providing an existence proof and rigorous error analysis. Furthermore, our method can be viewed as an extension of the GP method to general positive semi-definite kernels.
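The cost reduction from inducing points can be illustrated at the level of plain regression: the subset-of-regressors (Nystr\"om-type) predictor below solves only m x m systems instead of the n x n system of a full GP. This is a generic sparse-GP sketch, not the paper's PDE formulation; the kernel, function names, and parameters are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=0.2):
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ell ** 2)

def sor_predict(X, y, Z, Xstar, noise=1e-2):
    """Subset-of-regressors prediction with inducing points Z: only m x m
    systems are solved, avoiding the cubic cost in the number of samples n."""
    Kzz = rbf(Z, Z)
    Kzx = rbf(Z, X)
    A = Kzx @ Kzx.T + noise * Kzz + 1e-10 * np.eye(len(Z))
    w = np.linalg.solve(A, Kzx @ y)
    return rbf(Xstar, Z) @ w

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (500, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(500)
Z = np.linspace(0, 1, 20)[:, None]               # far fewer inducing points than samples
Xstar = np.linspace(0, 1, 5)[:, None]
print(sor_predict(X, y, Z, Xstar))
print(np.sin(2 * np.pi * Xstar[:, 0]))           # reference values
```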

The Sparse Identification of Nonlinear Dynamics (SINDy) algorithm can be applied to stochastic differential equations to estimate the drift and diffusion functions using data from a realization of the SDE. The SINDy algorithm requires sample data from each of these functions, which are typically estimated numerically from the observed states. We analyze the performance of the previously proposed estimates for the drift and diffusion functions to give bounds on the error for finite data. However, since this algorithm only converges as both the sampling frequency and the trajectory length go to infinity, obtaining approximations within a certain tolerance may be infeasible. To combat this, we develop estimates with higher orders of accuracy for use in the SINDy framework. For a given sampling frequency, these estimates give more accurate approximations of the drift and diffusion functions, making SINDy a far more feasible system identification method.
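A sketch of the standard first-order estimates combined with the SINDy sparse-regression step, assuming a simple one-dimensional SDE: it uses basic difference-quotient drift samples rather than the higher-order estimates developed in the paper, and the candidate library and threshold are illustrative choices.

```python
import numpy as np

# Simulate dX = (x - x^3) dt + 0.5 dW and recover the drift with a SINDy-style
# sparse regression on pointwise difference-quotient drift estimates.
rng = np.random.default_rng(5)
dt, n = 0.005, 100000
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] + (x[i] - x[i] ** 3) * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

drift_samples = (x[1:] - x[:-1]) / dt            # noisy first-order drift estimates
library = np.column_stack([x[:-1] ** p for p in range(4)])   # candidates 1, x, x^2, x^3

# Sequential thresholded least squares (the core SINDy step).
coef = np.linalg.lstsq(library, drift_samples, rcond=None)[0]
for _ in range(10):
    small = np.abs(coef) < 0.2
    coef[small] = 0.0
    big = ~small
    coef[big] = np.linalg.lstsq(library[:, big], drift_samples, rcond=None)[0]
print(coef)                                       # expect approximately [0, 1, 0, -1]
```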

In this paper, practically computable low-order approximations of potentially high-dimensional differential equations driven by geometric rough paths are proposed and investigated. In particular, we study equations that cover the linear setting but also allow for a certain type of dissipative nonlinearity in the drift. In a first step, a linear subspace is found that contains the solution space of the underlying rough differential equation (RDE). This subspace is associated with covariances of linear Ito stochastic differential equations, a link that is established by exploiting a Gronwall lemma for matrix differential equations. Orthogonal projection onto the identified subspace leads to a first exact reduced-order system. Secondly, a linear map of the RDE solution (the quantity of interest) is analyzed for redundant information, meaning that state variables are identified that do not contribute to the quantity of interest. Once more, a link to Ito stochastic differential equations is used. Removing such unnecessary information from the RDE provides a further dimension reduction without causing an error. Finally, we discretize a linear parabolic rough partial differential equation in space. The resulting large-order RDE is subsequently tackled with the exact reduction techniques studied in this paper. We illustrate the enormous complexity-reduction potential in the corresponding numerical experiments.
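A linear Ito analogue of the first reduction step can be sketched with a Lyapunov equation: the steady-state covariance identifies a subspace containing the states the noise can actually reach, and projecting onto it reduces the dimension exactly. The helper name and the test system below are assumptions for illustration, not the paper's rough-path construction.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, block_diag

def covariance_subspace_reduction(A, B, tol=1e-8):
    """The steady-state covariance P of dX = A X dt + B dW solves
    A P + P A^T + B B^T = 0; its range spans the reachable states, and
    projecting onto an orthonormal basis of that range gives a reduced system."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    eigval, eigvec = np.linalg.eigh(P)
    keep = eigval > tol * eigval.max()
    V = eigvec[:, keep]                          # orthonormal basis of range(P)
    return V.T @ A @ V, V.T @ B, V               # reduced drift, reduced noise, basis

# Stable block system in which the noise only excites the first 10 coordinates.
rng = np.random.default_rng(6)
A = block_diag(-np.eye(10) + 0.1 * rng.standard_normal((10, 10)),
               -np.eye(40) + 0.1 * rng.standard_normal((40, 40)))
B = np.zeros((50, 3))
B[:10] = rng.standard_normal((10, 3))
A_red, B_red, V = covariance_subspace_reduction(A, B)
print(A.shape, "->", A_red.shape)                # 50 x 50 reduced to about 10 x 10
```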
