
Interface problems describe many fundamental physical phenomena and are widely applied in engineering. However, it is challenging to develop efficient, fully decoupled numerical methods for degenerate interface problems, in which the PDE coefficient is discontinuous and only nonnegative (it may vanish) on the interface. The main goal of this paper is to construct fully decoupled numerical methods for nonlinear degenerate interface problems with ``double singularities". We propose an efficient, fully decoupled method that combines a deep neural network on the singular subdomain with a finite difference method on the regular subdomain. The key idea is to use deep learning to split the nonlinear degenerate PDE with an interface into two independent boundary value problems. The proposed scheme has two outstanding advantages: the convergence order on the whole domain is determined by the finite difference scheme on the regular subdomain, and it can handle very large jump ratios (such as $10^{12}:1$ or $1:10^{12}$) for both degenerate and non-degenerate interface problems. The expansion of the solution contains no undetermined parameters. In this way, two independent nonlinear systems are obtained on the two subdomains and can be solved in parallel. The flexibility, accuracy, and efficiency of the method are validated by various experiments in both 1D and 2D. In particular, the method is suitable for both the degenerate and the non-degenerate interface cases. Application examples with complicated multi-connected and sharp-edge interfaces, including degenerate and non-degenerate cases, are also presented.
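
To make the decoupling concrete, the sketch below solves only the regular-subdomain half of such a splitting with standard central finite differences on a linear, constant-coefficient 1D model problem; the singular-subdomain stage (handled by a deep neural network in the paper) is replaced by an assumed interface value u_gamma, and the equation, mesh, and right-hand side are illustrative placeholders rather than the authors' setup.

```python
# Minimal 1D sketch of the decoupled idea, regular subdomain only:
# solve -(beta * u')' = f on (0.5, 1) with u(0.5) = u_gamma and u(1) = 0
# by second-order central finite differences. In the full method, u_gamma
# would be supplied by the network stage on the singular subdomain;
# here it is an assumed placeholder value.
import numpy as np

def solve_regular_subdomain(beta=1.0, u_gamma=0.25, n=200):
    a, b = 0.5, 1.0
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    f = np.sin(np.pi * x)                     # illustrative right-hand side

    # Tridiagonal system for the interior nodes of -beta * u'' = f
    main = 2.0 * beta / h**2 * np.ones(n - 1)
    off = -beta / h**2 * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

    rhs = f[1:-1].copy()
    rhs[0] += beta / h**2 * u_gamma           # Dirichlet data at the interface
    rhs[-1] += beta / h**2 * 0.0              # homogeneous outer boundary

    u_inner = np.linalg.solve(A, rhs)
    return x, np.concatenate(([u_gamma], u_inner, [0.0]))

x, u = solve_regular_subdomain()
print(u[:5])
```

Because the two subdomain problems share data only through the interface, the analogous solve on the singular subdomain could run in parallel with this one, which is the decoupling the abstract describes.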

Related Content

Recent studies on face forgery detection achieve satisfactory performance on the domains covered by their training datasets, but fall short on unknown domains. This motivates many works to improve generalization, but forgery-irrelevant information, such as image background and identity, still exists in different domain features and causes unexpected clustering, limiting the generalization. In this paper, we propose a controllable guide-space (GS) method to enhance the discrimination of different forgery domains, so as to increase the forgery relevance of features and thereby improve the generalization. The well-designed guide-space can simultaneously achieve both the proper separation of forgery domains and the large distance between real-forgery domains in an explicit and controllable manner. Moreover, for better discrimination, we use a decoupling module to weaken the interference of forgery-irrelevant correlations between domains. Furthermore, we make adjustments to the decision boundary manifold according to the clustering degree of the same domain features within the neighborhood. Extensive experiments in multiple in-domain and cross-domain settings confirm that our method can achieve state-of-the-art generalization.

We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems. Any such parameter admits a doubly robust representation that depends on the solution to a dual linear inverse problem, where the dual solution can be thought of as a generalization of the inverse propensity function. We provide the first source-condition doubly robust inference method that ensures asymptotic normality around the parameter of interest as long as either the primal or the dual inverse problem is sufficiently well-posed, without knowledge of which inverse problem is the more well-posed one. Our result is enabled by novel guarantees for iterated Tikhonov regularized adversarial estimators for linear inverse problems, over general hypothesis spaces, which are developments of independent interest.
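
As background for the estimator class mentioned above, classical iterated Tikhonov regularization of a linear inverse problem $Ah = b$ with parameter $\alpha > 0$ takes the following standard Hilbert-space form (the paper's adversarial estimators over general hypothesis spaces refine this idea):
$$
h^{(0)} = 0, \qquad h^{(k)} = \arg\min_{h}\ \|A h - b\|^{2} + \alpha\,\|h - h^{(k-1)}\|^{2}, \qquad k = 1, 2, \dots,
$$
so that each iterate solves the linear system $(A^{*}A + \alpha I)\,h^{(k)} = A^{*}b + \alpha\,h^{(k-1)}$.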

We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to their nonsmooth objective functionals and the resulting high-dimensional and ill-conditioned systems after discretization. We focus on the application of a primal-dual method, with which different types of variables can be treated individually, so that its main computation at each iteration only requires solving two PDEs. Our goal is to accelerate the primal-dual method with either larger step sizes or operator learning techniques. For the accelerated primal-dual method with larger step sizes, convergence can still be proved rigorously, while it numerically accelerates the original primal-dual method in a simple and universal way. For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs. Once a neural operator is learned, solving a PDE requires only a forward pass of the neural network, and the computational cost is thus substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The acceleration effectiveness of these two techniques is promisingly validated by some preliminary numerical results.
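
For reference, a primal-dual method of the kind being accelerated has, for a model problem $\min_x g(x) + f(Kx)$, the prototypical (Chambolle-Pock-type) iteration
$$
x^{k+1} = \operatorname{prox}_{\tau g}\bigl(x^{k} - \tau K^{*} y^{k}\bigr), \qquad
y^{k+1} = \operatorname{prox}_{\sigma f^{*}}\bigl(y^{k} + \sigma K (2x^{k+1} - x^{k})\bigr),
$$
which converges when $\tau\sigma\|K\|^{2} < 1$; the larger-step-size acceleration mentioned above relaxes such step-size restrictions, while the exact saddle-point structure and operators for the PDE-constrained setting in the paper may differ from this generic form.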

A technique is described in this paper to avoid order reduction when integrating reaction-diffusion initial boundary value problems with explicit exponential Rosenbrock methods. The technique is valid for any Rosenbrock method, without having to impose any stiff order conditions, and for general time-dependent boundary values. A thorough global error analysis is performed, and numerical experiments are presented that corroborate the theoretical results and show a significant gain in efficiency with respect to applying the standard method of lines.
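
For orientation, the simplest member of this family, the exponential Rosenbrock-Euler method for $u' = F(u)$ with Jacobian $J_n = F'(u_n)$, reads
$$
u_{n+1} = u_{n} + h\,\varphi_{1}(h J_{n})\,F(u_{n}), \qquad \varphi_{1}(z) = \frac{e^{z} - 1}{z};
$$
the paper's technique addresses general exponential Rosenbrock methods and time-dependent boundary values, which this single-step illustration does not show.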

In this work, we propose and computationally investigate a monolithic space-time multirate scheme for coupled problems. The novelty lies in the monolithic formulation of the multirate approach, as this requires a careful design of the functional framework, corresponding discretization, and implementation. Our method of choice is a tensor-product Galerkin space-time discretization. The developments are carried out for both prototype interface- and volume-coupled problems, such as coupled wave-heat problems and a displacement equation coupled to Darcy flow in a poro-elastic medium. The latter is applied to the well-known Mandel benchmark. Detailed computational investigations and convergence analyses give evidence that our monolithic multirate framework performs well.

We present a meshless Schwarz-type non-overlapping domain decomposition method based on artificial neural networks for solving forward and inverse problems involving partial differential equations (PDEs). To ensure the consistency of solutions across neighboring subdomains, we adopt a generalized Robin-type interface condition, assigning unique Robin parameters to each subdomain. These subdomain-specific Robin parameters are learned to minimize the mismatch on the Robin interface condition, facilitating efficient information exchange during training. Our method is applicable to both the Laplace and Helmholtz equations. It represents the local solution on each subdomain by an independent neural network model, which is trained to minimize the loss on the governing PDE while strictly enforcing boundary and interface conditions through an augmented Lagrangian formalism. A key strength of our method lies in its ability to learn a Robin parameter for each subdomain, thereby enhancing information exchange with its neighboring subdomains. We observe that the learned Robin parameters adapt to the local behavior of the solution, domain partitioning and subdomain location relative to the overall domain. Extensive experiments on forward and inverse problems, including one-way and two-way decompositions with crosspoints, demonstrate the versatility and performance of our proposed approach.
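
A generalized Robin-type interface condition of the kind referred to above can be written, for non-overlapping subdomains $\Omega_i$ and $\Omega_j$ sharing an interface $\Gamma_{ij}$ with outward normals $n_i$ and $n_j$, in the standard transmission form
$$
\frac{\partial u_{i}}{\partial n_{i}} + \alpha_{i}\,u_{i} = \frac{\partial u_{j}}{\partial n_{i}} + \alpha_{i}\,u_{j}
\quad\text{and}\quad
\frac{\partial u_{j}}{\partial n_{j}} + \alpha_{j}\,u_{j} = \frac{\partial u_{i}}{\partial n_{j}} + \alpha_{j}\,u_{i}
\quad\text{on } \Gamma_{ij},
$$
where the subdomain-specific parameters $\alpha_{i}$, $\alpha_{j}$ are the quantities the method learns; the precise formulation and its augmented Lagrangian enforcement in the paper may differ in detail from this generic statement.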

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff-type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the value of the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established, and it achieves quadratic convergence with respect to the time step size on the local temporal interval. We have conducted several numerical experiments using the proposed algorithm for various test problems to validate its performance. The obtained numerical results are in accordance with the theoretical findings.
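
As a schematic illustration (not the exact scheme of the paper), a symmetric three-layer discretization with time step $\tau$ of a Kirchhoff-type equation $u_{tt} = \varphi\bigl(\|u_{x}\|^{2}\bigr)u_{xx} + f$, with the nonlinear coefficient frozen at the middle time level, reads
$$
\frac{u^{k+1} - 2u^{k} + u^{k-1}}{\tau^{2}} = \varphi\bigl(\|u_{x}^{k}\|^{2}\bigr)\,\frac{u_{xx}^{k+1} + u_{xx}^{k-1}}{2} + f^{k};
$$
evaluating $\varphi$ at the middle level is what makes each time step a linear problem for $u^{k+1}$, as described above.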

Understanding dynamics in complex systems is challenging because there are many degrees of freedom, and those that are most important for describing events of interest are often not obvious. The leading eigenfunctions of the transition operator are useful for visualization, and they can provide an efficient basis for computing statistics such as the likelihood and average time of events (predictions). Here we develop inexact iterative linear algebra methods for computing these eigenfunctions (spectral estimation) and making predictions from a data set of short trajectories sampled at finite intervals. We demonstrate the methods on a low-dimensional model that facilitates visualization and a high-dimensional model of a biomolecular system. Implications for the prediction problem in reinforcement learning are discussed.
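
As a point of comparison, a basic Galerkin (EDMD/VAC-style) estimate of the leading eigenfunctions of the transition operator from lagged trajectory pairs looks like the sketch below; the featurization, lag, and data are hypothetical placeholders, and the paper's inexact iterative methods go beyond this direct dense eigensolve.

```python
# Basic Galerkin (EDMD/VAC-style) estimate of transition-operator eigenfunctions
# from pairs (x_t, x_{t+lag}) drawn from short trajectories. Illustrative only.
import numpy as np

def leading_eigenfunctions(X0, X1, featurize, k=3):
    """X0, X1: (n_pairs, dim) arrays of states at times t and t + lag."""
    P0, P1 = featurize(X0), featurize(X1)                 # (n_pairs, n_basis)
    C0 = P0.T @ P0 / len(P0)                              # Gram (overlap) matrix
    Ct = P0.T @ P1 / len(P0)                              # time-lagged correlation
    vals, vecs = np.linalg.eig(np.linalg.solve(C0, Ct))   # Ct v = lambda C0 v
    order = np.argsort(-vals.real)
    return vals.real[order][:k], vecs.real[:, order][:, :k]

# Hypothetical 1D example with a Gaussian-bump basis and synthetic lagged data
rng = np.random.default_rng(0)
X0 = rng.normal(size=(5000, 1))
X1 = 0.9 * X0 + 0.3 * rng.normal(size=(5000, 1))          # stand-in lagged states
centers = np.linspace(-2.0, 2.0, 10)
featurize = lambda X: np.exp(-(X - centers) ** 2)          # (n, 10) features
vals, vecs = leading_eigenfunctions(X0, X1, featurize)
print(vals)
```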

High-dimensional data arises in numerous applications, and the rapidly developing field of geometric deep learning seeks to develop neural network architectures to analyze such data in non-Euclidean domains, such as graphs and manifolds. Recent work by Z. Wang, L. Ruiz, and A. Ribeiro has introduced a method for constructing manifold neural networks using the spectral decomposition of the Laplace-Beltrami operator. Moreover, in that work, the authors provide a numerical scheme for implementing such neural networks when the manifold is unknown and one only has access to finitely many sample points. The authors show that this scheme, which relies upon building a data-driven graph, converges to the continuum limit as the number of sample points tends to infinity. Here, we build upon this result by establishing a rate of convergence that depends on the intrinsic dimension of the manifold but is independent of the ambient dimension. We also discuss how the rate of convergence depends on the depth of the network and the number of filters used in each layer.
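
The data-driven graph construction alluded to above can be illustrated by the following minimal sketch: a Gaussian-kernel graph Laplacian built from sample points, whose low-lying eigenpairs approximate (up to normalization) those of the Laplace-Beltrami operator as the number of samples grows; the bandwidth, normalization, and sampled manifold here are illustrative choices rather than the construction analyzed in the paper.

```python
# Gaussian-kernel graph Laplacian from sample points on an unknown manifold;
# its low-lying eigenpairs approximate Laplace-Beltrami eigenpairs (up to scaling).
import numpy as np

def graph_laplacian_eigs(points, epsilon=0.1, k=10):
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / epsilon)            # Gaussian kernel weights
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W       # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L / epsilon)
    return vals[:k], vecs[:, :k]

# Example: points sampled from the unit circle, a 1-D manifold embedded in R^2
theta = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 500)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
vals, vecs = graph_laplacian_eigs(pts)
print(vals)
```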

Accurate error estimation is crucial in model order reduction, both to obtain small reduced-order models and to certify their accuracy when deployed in downstream applications such as digital twins. In existing a posteriori error estimation approaches, knowledge about the time integration scheme is mandatory, e.g., the residual-based error estimators proposed for the reduced basis method. This poses a challenge when automatic ordinary differential equation solver libraries are used to perform the time integration. To address this, we present a data-enhanced approach for a posteriori error estimation. Our new formulation enables residual-based error estimators to be independent of any time integration method. To achieve this, we introduce a corrected reduced-order model which takes into account a data-driven closure term for improved accuracy. The closure term, subject to mild assumptions, is related to the local truncation error of the corresponding time integration scheme. We propose efficient computational schemes for approximating the closure term, at the cost of a modest amount of training data. Furthermore, the new error estimator is incorporated within a greedy process to obtain parametric reduced-order models. Numerical results on three different systems show the accuracy of the proposed error estimation approach and its ability to produce ROMs that generalize well.
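
For context, the classical residual-based bound that such estimators build on reads, for a full-order model $\dot{x} = f(x)$ with Lipschitz constant $L$, a reduced approximation $x_r(t) = V\hat{x}(t)$, and residual $r(t) = V\dot{\hat{x}}(t) - f\bigl(V\hat{x}(t)\bigr)$,
$$
\|x(t) - V\hat{x}(t)\| \;\le\; e^{L t}\Bigl(\|x(0) - V\hat{x}(0)\| + \int_{0}^{t}\|r(s)\|\,\mathrm{d}s\Bigr);
$$
this continuous-time bound ignores the time-integration error, which is what the data-driven closure term described above accounts for.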
