
It is well known that Newton's method, especially when applied to large problems such as discretizations of nonlinear partial differential equations (PDEs), can have trouble converging if the initial guess is too far from the solution. This work focuses on accelerating that convergence in the context of discretized nonlinear elliptic PDEs. We first give a quick review of existing methods and justify our choice of learning an initial guess with a Fourier neural operator (FNO). This choice is motivated by the mesh-independence of such operators, whose training and evaluation can be performed on grids of different resolutions. The FNO is trained by minimizing, over generated data, loss functions based on the PDE discretization. Numerical results in one and two dimensions show that the proposed initial guess accelerates the convergence of Newton's method by a large margin compared to a naive initial guess, especially for highly nonlinear or anisotropic problems.
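The sensitivity to the initial guess can be seen already on a single nonlinear equation. The sketch below is a toy illustration (not the paper's FNO-based method): Newton's method on f(u) = u³ + u − 10 = 0, whose root is u = 2, started once from a naive guess and once from a guess near the solution.

```python
# Toy illustration (not the paper's method): Newton's method on a single
# nonlinear equation f(u) = u^3 + u - 10 = 0, whose root is u = 2.
# The iteration count depends strongly on the initial guess, which is the
# effect a learned (e.g. FNO-based) initial guess aims to exploit.

def newton(f, df, u0, tol=1e-12, maxit=100):
    """Newton iteration; returns (root, number of iterations used)."""
    u = u0
    for k in range(1, maxit + 1):
        u = u - f(u) / df(u)
        if abs(f(u)) < tol:
            return u, k
    return u, maxit

f = lambda u: u**3 + u - 10.0
df = lambda u: 3.0 * u**2 + 1.0

root_naive, it_naive = newton(f, df, 0.0)   # naive initial guess
root_good, it_good = newton(f, df, 1.9)     # guess close to the solution

print(it_naive, it_good)  # the closer guess needs far fewer iterations
```

For a discretized PDE, u becomes a vector and df a Jacobian matrix, but the same dependence on the starting point persists.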


This article introduces a new numerical method for the constrained minimization of a discrete energy modeling multicomponent rotating Bose-Einstein condensates in the regime of strong confinement and with rotation. We consider both segregation and coexistence regimes between the components. The method combines a discretization of a continuous energy in space dimension 2 with a gradient algorithm with adaptive time step and projection for the minimization. It is well known that, depending on the regime, the minimizers may display different structures, sometimes with vorticity (from singly quantized vortices to vortex sheets and giant holes). In order to study the structures of the minimizers numerically, we introduce an algorithm for computing the indices of the vortices, as well as an algorithm for computing the indices of vortex sheets. Several computations are carried out to illustrate the efficiency of the method, to cover different physical cases, to validate recent theoretical results, and to support conjectures. We also compare this method with an alternative method from the literature.
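The core loop of such a scheme, gradient step with adaptive step size followed by projection back onto the constraint set, can be sketched on a toy problem. Below, a quadratic energy stands in for the paper's 2D multicomponent energy and the unit sphere stands in for the mass constraint; coefficients and tolerances are illustrative choices, not taken from the article.

```python
import math

# Minimal sketch (toy stand-in for the paper's 2D multicomponent energy):
# minimize E(u) = 0.5 * sum_i a_i u_i^2 over the unit sphere ||u|| = 1,
# using a gradient step with adaptive step size followed by projection
# (renormalization). The minimizer is the coordinate direction with the
# smallest coefficient a_i, here index 1, with energy 0.5.

A = [3.0, 1.0, 2.0]

def energy(u):
    return 0.5 * sum(a * x * x for a, x in zip(A, u))

def project(u):  # projection onto the unit sphere (mass constraint)
    n = math.sqrt(sum(x * x for x in u))
    return [x / n for x in u]

u = project([1.0, 1.0, 1.0])
dt = 0.1
for _ in range(300):
    g = [a * x for a, x in zip(A, u)]
    e0 = energy(u)
    for _ in range(30):                      # adaptive step: backtrack until
        v = project([x - dt * gx for x, gx in zip(u, g)])
        if energy(v) <= e0:                  # the energy does not increase
            break
        dt *= 0.5
    u = v
    dt = min(dt * 1.1, 0.4)                  # cautiously grow the step

print(energy(u))  # converges to the minimal energy 0.5
```

The backtracking guarantees monotone energy decrease, while the growth factor lets the step size recover after overly cautious phases.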

Riemannian optimization is concerned with problems where the independent variable lies on a smooth manifold. A number of problems from numerical linear algebra fall into this category, the manifold usually being specified by special matrix structures such as orthogonality or definiteness. Following this line of research, we investigate tools for Riemannian optimization on the symplectic Stiefel manifold. We complement the existing set of numerical optimization algorithms with a Riemannian trust region method tailored to the symplectic Stiefel manifold. To this end, we derive a matrix formula for the Riemannian Hessian under a right-invariant metric. Moreover, we propose a novel retraction for approximating the Riemannian geodesics. Finally, we conduct a comparative study juxtaposing the performance of the Riemannian variants of steepest descent, conjugate gradients, and trust region methods on selected matrix optimization problems featuring symplectic constraints.
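The defining constraint of this manifold is MᵀJM = J, where J is the standard symplectic matrix. As a minimal numeric illustration of the constraint only (not of the trust-region method itself), the 2×2 case can be checked directly, where a matrix is symplectic exactly when its determinant is 1:

```python
# Quick illustration of the symplectic constraint M^T J M = J that defines
# the (square) symplectic group; the symplectic Stiefel manifold generalizes
# this to rectangular matrices. In the 2x2 case, symplectic <=> det(M) = 1.

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose2(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

J = [[0.0, 1.0], [-1.0, 0.0]]
M = [[2.0, 3.0], [1.0, 2.0]]          # det = 4 - 3 = 1, hence symplectic

MTJM = matmul2(transpose2(M), matmul2(J, M))
print(MTJM)  # recovers J: [[0.0, 1.0], [-1.0, 0.0]]
```

Optimization algorithms on the manifold must produce iterates that satisfy this identity, which is what retractions are designed to preserve approximately.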

The iterated Arnoldi-Tikhonov (iAT) method is a regularization technique particularly suited for solving large-scale ill-posed linear inverse problems. Indeed, it reduces the computational complexity through the projection of the discretized problem into a lower-dimensional Krylov subspace, where the problem is then solved. This paper studies iAT under an additional hypothesis on the discretized operator. It presents a theoretical analysis of the approximation errors, leading to an a posteriori rule for choosing the regularization parameter. Our proposed rule results in more accurate computed approximate solutions compared to the a posteriori rule recently proposed in arXiv:2311.11823. The numerical results confirm the theoretical analysis, providing accurate computed solutions even when the new assumption is not satisfied.
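A single Arnoldi-Tikhonov step (which iAT repeats with updated residuals) can be sketched compactly. In the toy below, the 8×8 Hilbert matrix stands in for an ill-posed discretized operator, and the regularization parameter is simply fixed by hand rather than chosen by the article's a posteriori rule.

```python
import numpy as np

# Minimal sketch of one Arnoldi-Tikhonov step. Toy ill-posed operator:
# the 8x8 Hilbert matrix, with noisy data b. The regularization parameter
# lam is fixed by hand here; choosing it a posteriori is the article's topic.

n, k, lam = 8, 4, 1e-4
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
noise = 1e-4 * np.array([(-1.0) ** i for i in range(n)])
b = A @ x_true + noise

# Arnoldi: orthonormal basis V (n x (k+1)) and Hessenberg H ((k+1) x k)
# with A V[:, :k] = V H.
V = np.zeros((n, k + 1))
H = np.zeros((k + 1, k))
beta = np.linalg.norm(b)
V[:, 0] = b / beta
for j in range(k):
    w = A @ V[:, j]
    for i in range(j + 1):
        H[i, j] = V[:, i] @ w
        w = w - H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

# Projected Tikhonov problem: min_y ||H y - beta e1||^2 + lam ||y||^2
e1 = np.zeros(k + 1)
e1[0] = beta
y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ e1)
x_reg = V[:, :k] @ y

err_reg = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true) / np.linalg.norm(x_true)
print(err_reg, err_naive)  # regularization tames the noise amplification
```

The point of the projection is that the Tikhonov problem is solved on a small (k+1)×k system instead of the full discretization.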

The method of fundamental solutions (MFS), also known as the method of auxiliary sources (MAS), is a well-known computational method for the solution of boundary-value problems. The final solution ("MAS solution") is obtained once we have found the amplitudes of $N$ auxiliary "MAS sources." Past studies have demonstrated that it is possible for the MAS solution to converge to the true solution even when the $N$ auxiliary sources diverge and oscillate. The present paper extends the past studies by demonstrating this possibility within the context of Laplace's equation with Neumann boundary conditions. One can thus obtain the correct solution from sources that, when $N$ is large, must be considered unphysical. We carefully explain the underlying reasons for the unphysical results, distinguish them from other difficulties that might concurrently arise, and point to significant differences with time-dependent problems that were studied in the past.
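The basic MFS/MAS machinery is compact enough to sketch. The toy below solves an interior Laplace problem on the unit disk with Dirichlet data (the paper treats the Neumann case) taken from the exact harmonic function u*(x, y) = x² − y², with auxiliary sources placed on a circle outside the domain and amplitudes fitted by least squares; all sizes and radii are illustrative choices.

```python
import numpy as np

# Tiny MFS/MAS sketch: interior Laplace problem on the unit disk with
# Dirichlet data from the harmonic function u*(x, y) = x^2 - y^2.
# N auxiliary sources sit on a circle of radius R = 2 outside the domain;
# their amplitudes are fitted by least squares at M boundary points.

N, M, R = 16, 32, 2.0
ts = 2 * np.pi * np.arange(N) / N
src = R * np.column_stack([np.cos(ts), np.sin(ts)])      # auxiliary sources
tb = 2 * np.pi * np.arange(M) / M
bnd = np.column_stack([np.cos(tb), np.sin(tb)])          # collocation points

u_exact = lambda p: p[..., 0] ** 2 - p[..., 1] ** 2

# Fundamental solution of the 2D Laplacian, log|x - s| (constants absorbed
# into the amplitudes).
G = np.log(np.linalg.norm(bnd[:, None, :] - src[None, :, :], axis=2))
amps, *_ = np.linalg.lstsq(G, u_exact(bnd), rcond=None)

p = np.array([0.3, 0.2])                                 # interior test point
u_mfs = amps @ np.log(np.linalg.norm(p - src, axis=1))
print(u_mfs, u_exact(p))  # MFS value vs exact value 0.05
```

As the abstract notes, the interesting regime is large N, where the fitted amplitudes can grow and oscillate even while the reconstructed field remains accurate inside the domain.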

Sparse recovery principles play an important role in solving many nonlinear ill-posed inverse problems. We investigate a variational framework with a support oracle for compressed sensing sparse reconstructions, where the available measurements are nonlinear and possibly corrupted by noise. A graph neural network, named Oracle-Net, is proposed to predict the support from the nonlinear measurements and is integrated into a regularized recovery model to enforce sparsity. The derived nonsmooth optimization problem is then efficiently solved through a constrained proximal gradient method. Error bounds on the approximate solution of the proposed oracle-based optimization are provided in the context of the ill-posed electrical impedance tomography (EIT) problem. Numerical solutions of the EIT nonlinear inverse reconstruction problem confirm the potential of the proposed method, which improves the reconstruction quality from undersampled measurements under sparsity assumptions.
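The oracle-constrained recovery idea reduces to a projected gradient method once the support is known. In the sketch below, a hand-specified support stands in for Oracle-Net's prediction, and linear noiseless Gaussian measurements stand in for the paper's nonlinear EIT setting.

```python
import numpy as np

# Sketch of the oracle-constrained recovery idea: given a predicted support
# S (in the paper, from the Oracle-Net GNN), sparse recovery reduces to a
# projected gradient method: a gradient step on 0.5*||Ax - b||^2 followed
# by the proximal map of the indicator of {x : supp(x) ⊆ S}, i.e. zeroing
# entries outside S. Linear noiseless measurements stand in for the paper's
# nonlinear EIT setting.

rng = np.random.default_rng(0)
m, n = 30, 50
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = [3, 17, 40]                     # oracle-predicted support
x_true[support] = [2.0, -1.5, 3.0]
b = A @ x_true

mask = np.zeros(n)
mask[support] = 1.0
t = 1.0 / np.linalg.norm(A, 2) ** 2       # step size 1/L, L = ||A||_2^2

x = np.zeros(n)
for _ in range(2000):
    x = x - t * (A.T @ (A @ x - b))       # gradient step
    x = x * mask                          # projection onto supp(x) ⊆ S

print(np.linalg.norm(x - x_true))  # essentially zero: exact recovery
```

With the support fixed, the restricted problem is a well-conditioned least squares problem, which is why the iteration converges linearly to the true signal.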

The implication problem for conditional independence (CI) asks whether the fact that a probability distribution obeys a given finite set of CI relations implies that a further CI statement also holds in this distribution. This problem has a long and fascinating history, culminating in positive results about implications now known as the semigraphoid axioms as well as impossibility results about a general finite characterization of CI implications. Motivated by violation of faithfulness assumptions in causal discovery, we study the implication problem in the special setting where the CI relations are obtained from a directed acyclic graphical (DAG) model along with one additional CI statement. Focusing on the Gaussian case, we give a complete characterization of when such an implication is graphical by using algebraic techniques. Moreover, prompted by the relevance of strong faithfulness in statistical guarantees for causal discovery algorithms, we give a graphical solution for an approximate CI implication problem, in which we ask whether small values of one additional partial correlation entail small values for yet a further partial correlation.
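In the Gaussian case, a CI statement such as X ⊥ Y | Z is equivalent to the vanishing of the corresponding partial correlation, which is what makes the algebraic approach available. A minimal check on the chain DAG X → Z → Y, where the model imposes ρ_xy = ρ_xz ρ_zy:

```python
import math

# In the Gaussian case a CI statement X ⊥ Y | Z is equivalent to a vanishing
# partial correlation. For the chain DAG X -> Z -> Y, the model imposes
# rho_xy = rho_xz * rho_zy, which forces rho_{xy.z} = 0.

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation rho_{xy.z} from pairwise correlations."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

r_xz, r_zy = 0.6, 0.5                  # example correlations along the chain
r_xy = r_xz * r_zy                     # implied by the chain DAG
pc = partial_corr(r_xy, r_xz, r_zy)
print(pc)  # → 0.0, i.e. X ⊥ Y | Z holds
```

The approximate implication problem mentioned in the abstract asks the quantitative version: whether a small but nonzero value of one such partial correlation forces another to be small.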

A so-called grid-overlay finite difference method (GoFD) was proposed recently for the numerical solution of homogeneous Dirichlet boundary value problems of the fractional Laplacian on arbitrary bounded domains. It was shown to have advantages of both finite difference and finite element methods, including its efficient implementation through the fast Fourier transform and ability to work for complex domains and with mesh adaptation. The purpose of this work is to study GoFD in a meshfree setting, a key to which is to construct the data transfer matrix from a given point cloud to a uniform grid. Two approaches are proposed, one based on the moving least squares fitting and the other based on the Delaunay triangulation and piecewise linear interpolation. Numerical results obtained for examples with convex and concave domains and various types of point clouds are presented. They show that both approaches lead to comparable results. Moreover, the resulting meshfree GoFD converges at a similar order as GoFD with unstructured meshes and finite element approximation as the number of points in the cloud increases. Furthermore, numerical results show that the method is robust to random perturbations in the location of the points.
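The key object in the meshfree setting is the data transfer matrix from the point cloud to the uniform grid. The sketch below reduces the second proposed approach (Delaunay triangulation plus piecewise linear interpolation) to one dimension, where the triangulation degenerates to sorted bracketing pairs; the cloud and grid are illustrative.

```python
import bisect

# 1D sketch of the data transfer matrix from a point cloud to a uniform
# grid via piecewise linear interpolation (the 1D analogue of the Delaunay
# triangulation approach). Each grid node receives interpolation weights
# from its two bracketing cloud points; linear functions transfer exactly.

cloud = [0.0, 0.13, 0.34, 0.5, 0.72, 0.91, 1.0]   # sorted point cloud
grid = [i / 4 for i in range(5)]                  # uniform grid on [0, 1]

def transfer_matrix(cloud, grid):
    """Rows of interpolation weights: grid values = T @ cloud values."""
    T = []
    for g in grid:
        j = min(bisect.bisect_right(cloud, g), len(cloud) - 1) - 1
        h = cloud[j + 1] - cloud[j]
        row = [0.0] * len(cloud)
        row[j] = (cloud[j + 1] - g) / h
        row[j + 1] = (g - cloud[j]) / h
        T.append(row)
    return T

T = transfer_matrix(cloud, grid)
f = [2.0 * x + 1.0 for x in cloud]                # samples of a linear function
on_grid = [sum(w * v for w, v in zip(row, f)) for row in T]
print(on_grid)  # matches 2*g + 1 at the grid nodes
```

In 2D the bracketing pair becomes the Delaunay triangle containing the grid node and the two weights become three barycentric coordinates, but the transfer matrix plays the same role.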

Iterative parallel-in-time algorithms like Parareal can extend scaling beyond the saturation of purely spatial parallelization when solving initial value problems. However, they require the user to build coarse models to handle the inevitably serial transport of information in time. This is a time-consuming and difficult process, since there is still only limited theoretical insight into what constitutes a good and efficient coarse model. Novel approaches from machine learning to solve differential equations could provide a more generic way to find coarse level models for parallel-in-time algorithms. This paper demonstrates that a physics-informed Fourier Neural Operator (PINO) is an effective coarse model for the parallelization in time of the two-asset Black-Scholes equation using Parareal. We demonstrate that PINO-Parareal converges as fast as a bespoke numerical coarse model and that, in combination with spatial parallelization by domain decomposition, it provides better overall speedup than both purely spatial parallelization and space-time parallelization with a numerical coarse propagator.
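The Parareal correction structure itself is compact. The sketch below runs it on the scalar test problem u′ = −u (a toy stand-in for the two-asset Black-Scholes PDE), with a many-substep Euler fine propagator and a single-step Euler coarse propagator; in the paper's setting, a PINO would replace the coarse propagator.

```python
# Minimal Parareal sketch on the scalar test problem u' = -u, u(0) = 1.
# The fine propagator F takes many explicit Euler substeps per time slice;
# the coarse propagator G takes a single Euler step (in the paper's setting,
# a PINO would play the role of G). After k iterations the first k slices
# are exact, so with N slices Parareal reproduces the serial fine solution.

lam, T, N = -1.0, 1.0, 8
dt = T / N

def fine(u, steps=50):          # fine propagator over one time slice
    h = dt / steps
    for _ in range(steps):
        u = u + h * lam * u
    return u

def coarse(u):                  # coarse propagator: one Euler step
    return u + dt * lam * u

# Serial fine solution for reference.
serial = [1.0]
for _ in range(N):
    serial.append(fine(serial[-1]))

# Parareal: initialize with the coarse propagator, then correct.
U = [1.0]
for n in range(N):
    U.append(coarse(U[n]))
for _ in range(N):              # N iterations => exact reproduction
    Unew = [1.0]
    for n in range(N):
        Unew.append(fine(U[n]) + coarse(Unew[n]) - coarse(U[n]))
    U = Unew

print(max(abs(a - b) for a, b in zip(U, serial)))  # → 0.0 (up to round-off)
```

The speedup comes from evaluating the N fine propagations of one iteration in parallel, while only the cheap coarse sweep remains serial; the method pays off when it converges in far fewer than N iterations.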

We prove the full equivalence between Assembly Theory (AT) and Shannon Entropy via a method based upon the principles of statistical compression, renamed `assembly index', that belongs to the LZ family of popular compression algorithms (ZIP, GZIP, JPEG). Such popular algorithms have been shown to empirically reproduce the results of AT, results that have also been reported before in successful applications to separating organic from non-organic molecules and in the context of the study of selection and evolution. We show that the assembly index value is equivalent to the size of a minimal context-free grammar. The statistical compressibility of such a method is bounded by Shannon Entropy and other equivalent traditional LZ compression schemes, such as LZ77, LZ78, or LZW. In addition, we demonstrate that AT, and the algorithms supporting its pathway complexity, assembly index, and assembly number, define compression schemes and methods that are subsumed into the theory of algorithmic (Kolmogorov-Solomonoff-Chaitin) complexity. Due to AT's current lack of logical consistency in defining causality for non-stochastic processes, and the lack of empirical evidence that it outperforms other complexity measures found in the literature capable of explaining the same phenomena, we conclude that the assembly index and the assembly number do not lead to an explanation or quantification of biases in generative (physical or biological) processes, including those brought about by (abiotic or Darwinian) selection and evolution, that could not have been arrived at using Shannon Entropy, or that have not been reported before using classical information theory or algorithmic complexity.
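The LZ-family view the abstract invokes is easy to make concrete. The sketch below counts phrases in the LZ78 incremental parse, a simple stand-in for the compression-based reading of the assembly index: highly repetitive strings parse into few phrases, while strings with no repeated structure need one phrase per symbol.

```python
# LZ78 phrase count as a simple stand-in for the compression-based view of
# the assembly index: highly repetitive strings parse into few phrases,
# while strings with no repeated structure need one phrase per symbol.

def lz78_phrase_count(s):
    """Number of phrases in the LZ78 incremental parse of s."""
    seen, count, cur = set(), 0, ""
    for c in s:
        cur += c
        if cur not in seen:
            seen.add(cur)
            count += 1
            cur = ""
    if cur:                      # leftover partial phrase at the end
        count += 1
    return count

reps = lz78_phrase_count("a" * 16)            # highly repetitive
novel = lz78_phrase_count("abcdefghijklmnop")  # all-distinct symbols
print(reps, novel)  # → 6 16
```

Phrase counts of this kind are bounded in terms of the Shannon entropy rate, which is the sense in which such statistical compressors sit below algorithmic (Kolmogorov) complexity.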

Quantifying spatial and/or temporal associations in multivariate geolocated data of different types is achievable via spatial random effects in a Bayesian hierarchical model, but severe computational bottlenecks arise when spatial dependence is encoded as a latent Gaussian process (GP) in the increasingly common large-scale data settings on which we focus. The scenario worsens in non-Gaussian models because the reduced analytical tractability leads to additional hurdles to computational efficiency. In this article, we introduce Bayesian models of spatially referenced data in which the likelihood or the latent process (or both) are not Gaussian. First, we exploit the advantages of spatial processes built via directed acyclic graphs, in which case the spatial nodes enter the Bayesian hierarchy and lead to posterior sampling via routine Markov chain Monte Carlo (MCMC) methods. Second, motivated by the possible inefficiencies of popular gradient-based sampling approaches in the multivariate contexts on which we focus, we introduce the simplified manifold preconditioner adaptation (SiMPA) algorithm, which uses second order information about the target but avoids expensive matrix operations. We demonstrate the performance and efficiency improvements of our methods relative to alternatives in extensive synthetic and real-world remote sensing and community ecology applications with large-scale data at up to hundreds of thousands of spatial locations and up to tens of outcomes. Software for the proposed methods is part of R package 'meshed', available on CRAN.
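The "routine MCMC" that the DAG-based construction enables can be illustrated with the simplest member of the family. The sketch below is a baseline random-walk Metropolis sampler (not the SiMPA algorithm proposed in the article) targeting a standard normal posterior through its log-density; proposal scale and chain length are illustrative.

```python
import math
import random

# Minimal random-walk Metropolis sketch (a baseline MCMC sampler, not the
# SiMPA algorithm proposed in the article) targeting a standard normal
# posterior via its log-density.

random.seed(1)

def log_target(x):
    return -0.5 * x * x            # standard normal, up to a constant

x, samples = 0.0, []
lp = log_target(x)
for i in range(21000):
    prop = x + random.gauss(0.0, 2.5)      # symmetric random-walk proposal
    lp_prop = log_target(prop)
    if math.log(random.random()) < lp_prop - lp:
        x, lp = prop, lp_prop              # accept
    if i >= 1000:                          # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # close to the target's mean 0 and variance 1
```

Preconditioned samplers such as SiMPA aim at the same posterior but use second order information to propose moves adapted to the local geometry of the target, which matters when the outcomes are many and correlated.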
