This paper considers an approach to the numerical solution of the stationary nonlinear Navier-Stokes equations, in both rotation and convective forms, in a polygonal domain whose boundary contains one reentrant corner, that is, a corner with interior angle greater than $\pi$. The method achieves first-order convergence of the approximate solution to the exact one with respect to the mesh size $h$, regardless of the size of the reentrant corner.
In contrast to the overwhelming number of works on strong approximations, weak approximations of stochastic differential equations (SDEs), which are sometimes more relevant in applications, are less well studied in the literature. Most existing weak error analysis relies on a fundamental weak approximation theorem originally proposed by Milstein in 1986, which requires the coefficients of the SDEs to be globally Lipschitz continuous. However, SDEs arising in applications rarely obey such a restrictive condition, and the study of weak approximations in a non-globally Lipschitz setting turns out to be a challenging problem. This paper aims to carry out the weak error analysis of discrete-time approximations for SDEs with non-globally Lipschitz coefficients. Under certain broad assumptions on the analytical and numerical solutions of the SDEs, a general weak convergence theorem is formulated for one-step numerical approximations. Explicit conditions on the coefficients of the SDEs are also given that guarantee the aforementioned broad assumptions while allowing the coefficients to grow super-linearly. As applications of the obtained weak convergence theorems, we prove the expected weak convergence rates of two well-known schemes, the tamed Euler method and the backward Euler method, in the non-globally Lipschitz setting. Numerical examples are finally provided to confirm these findings.
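The taming underlying the first of these schemes admits a compact description: the Euler drift increment is divided by $1 + h|f(X_k)|$ so that it stays bounded even when the drift grows super-linearly. A minimal sketch for a scalar SDE (the double-well drift $x - x^3$ is a hypothetical test case, not taken from the paper):

```python
import numpy as np

def tamed_euler(f, g, x0, T, N, n_paths, rng):
    """Tamed Euler: X_{k+1} = X_k + h f(X_k) / (1 + h |f(X_k)|) + g(X_k) dW_k."""
    h = T / N
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(N):
        fx = f(x)
        dw = rng.normal(0.0, np.sqrt(h), size=n_paths)
        x = x + h * fx / (1.0 + h * np.abs(fx)) + g(x) * dw
    return x

rng = np.random.default_rng(0)
# Double-well SDE with super-linear drift x - x^3 and additive noise.
xT = tamed_euler(lambda x: x - x**3, lambda x: 0.5 * np.ones_like(x),
                 x0=1.0, T=1.0, N=2**10, n_paths=10**5, rng=rng)
# Weak error concerns functionals E[phi(X_T)]; estimate one by the sample mean.
print("E[X_T^2] ~", np.mean(xT**2))
```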
A high-order finite element method is proposed to solve the nonlinear convection-diffusion equation on a time-varying domain whose boundary is implicitly driven by the solution of the equation. The method is semi-implicit in the sense that the boundary is traced explicitly with a high-order surface-tracking algorithm, while the convection-diffusion equation is solved implicitly with high-order backward differentiation formulas and fictitious-domain finite element methods. Two numerical experiments on severely deforming domains show that optimal convergence orders are obtained in the energy norm for the third-order and fourth-order methods.
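For reference, the backward differentiation formulas used for the implicit part have fixed coefficients on uniform time grids. A minimal sketch on the linear model problem $u' = \lambda u$ (our stand-in for the spatially discretized convection-diffusion operator, not the paper's fictitious-domain solver):

```python
import numpy as np

def bdf_step(lam, u_hist, h, order):
    """One BDF step for u' = lam*u: a0*u_new + a1*u^n + a2*u^{n-1} + ... = h*lam*u_new."""
    coeffs = {2: (1.5, -2.0, 0.5), 3: (11/6, -3.0, 1.5, -1/3)}
    a = coeffs[order]
    rhs = -sum(aj * uj for aj, uj in zip(a[1:], u_hist))  # u_hist = [u^n, u^{n-1}, ...]
    return rhs / (a[0] - h * lam)

# March u' = -u with BDF2 from exact starting values.
lam, h = -1.0, 0.01
hist = [np.exp(-h), 1.0]                 # [u^1, u^0]
for _ in range(199):                     # reach t = 200*h = 2
    hist = [bdf_step(lam, hist, h, order=2), hist[0]]
print("u(2) ~", hist[0], " exact:", np.exp(-2.0))
```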
In this paper, we study deep neural networks (DNNs) for solving high-dimensional evolution equations with oscillatory solutions. Unlike deep least-squares methods that treat time and space variables simultaneously, we propose a deep adaptive basis Galerkin (DABG) method that employs a spectral-Galerkin discretization with a tensor-product basis for the time variable, suited to oscillatory solutions, and a deep neural network approximation for the high-dimensional space variables. The proposed method leads to a linear system of differential equations whose unknowns are DNNs that can be trained via a loss function. We establish a posteriori estimates of the solution error, bounded by the minimal loss function plus a term $O(N^{-m})$, where $N$ is the number of basis functions and $m$ characterizes the regularity of the equation, and we show that if the true solution is a Barron-type function, the error bound converges to zero as $M=O(N^p)$ tends to infinity, where $M$ is the width of the networks used and $p$ is a positive constant. Numerical examples, including high-dimensional linear parabolic and hyperbolic equations and the nonlinear Allen-Cahn equation, demonstrate that the proposed DABG method outperforms existing DNN-based solvers.
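A minimal sketch of the ansatz behind this splitting (names and network sizes are our assumptions): the solution is represented as $u(x,t)\approx\sum_{k=0}^{N-1} c_k(x;\theta)\,P_k(t)$, with spectral basis functions $P_k$ in time and DNN-valued coefficients in space.

```python
import torch, torch.nn as nn

def legendre(t, N):
    """Legendre polynomials P_0..P_{N-1} on [-1, 1] via the three-term recurrence."""
    P = [torch.ones_like(t), t]
    for k in range(1, N - 1):
        P.append(((2 * k + 1) * t * P[-1] - k * P[-2]) / (k + 1))
    return torch.stack(P[:N], dim=-1)                 # shape (..., N)

class DABGAnsatz(nn.Module):
    """u(x, t) ~ sum_k c_k(x; theta) * P_k(t): DNN coefficients, spectral time basis."""
    def __init__(self, dim_x, N, width=64):
        super().__init__()
        self.N = N
        self.net = nn.Sequential(nn.Linear(dim_x, width), nn.Tanh(),
                                 nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, N))
    def forward(self, x, t):                          # x: (B, d), t: (B,)
        return (self.net(x) * legendre(t, self.N)).sum(-1)

model = DABGAnsatz(dim_x=10, N=16)
x, t = torch.randn(32, 10), torch.rand(32) * 2 - 1
print(model(x, t).shape)                              # torch.Size([32])
```

Training would then minimize a residual-based loss over the parameters $\theta$.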
This paper deals with a special type of Lyapunov function, namely the solution of Zubov's equation. Such a function can be used to characterize the domain of attraction for systems of ordinary differential equations. We derive and prove an integral-form solution to Zubov's equation. For numerical computation, we develop two data-driven methods: one based on the integration of an augmented system of differential equations, and the other based on deep learning. The former is effective for systems with a relatively low state-space dimension, while the latter is developed for high-dimensional problems. The deep learning method is applied to a New England 10-generator power system model. We prove that a neural network approximation exists for the Lyapunov function of power systems such that the approximation error is a cubic polynomial in the number of generators. The error convergence rate as a function of $n$, the number of neurons, is also proved.
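For context, one classical integral form characterizes $V$ along trajectories: $V(x) = 1 - \exp\!\left(-\int_0^\infty h(\phi(t;x))\,dt\right)$ for a positive definite weight $h$, so $V$ can be evaluated by integrating the state equation augmented with the scalar $y' = h(x)$. A minimal sketch under these assumptions (the toy vector field is ours, not the power-system model):

```python
import numpy as np
from scipy.integrate import solve_ivp

def zubov_value(f, h, x0, T=50.0):
    """V(x0) = 1 - exp(-int_0^T h(x(t)) dt) via the augmented system (x' = f(x), y' = h(x))."""
    def rhs(t, z):
        x = z[:-1]
        return np.append(f(x), h(x))
    z0 = np.append(np.asarray(x0, dtype=float), 0.0)
    sol = solve_ivp(rhs, (0.0, T), z0, rtol=1e-8, atol=1e-10)
    return 1.0 - np.exp(-sol.y[-1, -1])

# Toy asymptotically stable system: a damped nonlinear oscillator.
f = lambda x: np.array([x[1], -x[0] - x[1] + 0.1 * x[0]**3])
h = lambda x: 0.5 * np.dot(x, x)           # positive definite weight
print(zubov_value(f, h, [0.5, 0.0]))       # V in (0, 1) inside the domain of attraction
```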
Analysis and use of stochastic models represented by discrete-time Markov chains require the evaluation of performance measures and the characterization of the stationary distribution. Analytical solutions are often unavailable when the system states are continuous or mixed. This paper presents a new method for computing the stationary distribution and performance measures of stochastic systems represented by continuous- or mixed-state Markov chains. We show asymptotic convergence and provide deterministic, non-asymptotic error bounds for our method under the supremum norm. Our finite approximation method is near-optimal among all discrete approximate distributions, including empirical distributions obtained from Markov chain Monte Carlo (MCMC). Numerical experiments validate the accuracy and efficiency of our method and show that it significantly outperforms MCMC-based approaches.
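As a generic illustration of the finite-approximation idea (not the paper's specific construction), a continuous-state chain with transition density $p(y\,|\,x)$ can be discretized into cells, after which the stationary vector of the finite transition matrix approximates the stationary distribution:

```python
import numpy as np

def discretized_stationary(p, lo, hi, n):
    """Approximate the stationary density of a continuous-state chain with
    transition density p(y | x) by an n-cell grid discretization."""
    edges = np.linspace(lo, hi, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    w = edges[1] - edges[0]
    P = p(mids[None, :], mids[:, None]) * w        # P[i, j] ~ Pr(cell j | cell i)
    P /= P.sum(axis=1, keepdims=True)              # renormalize the truncation error
    vals, vecs = np.linalg.eig(P.T)                # stationary pi: left eigenvector, eigenvalue 1
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return mids, pi / (pi.sum() * w)               # density values at cell midpoints

# AR(1) chain X' = 0.8 X + eps, eps ~ N(0, 1); stationary law is N(0, 1/0.36).
p = lambda y, x: np.exp(-0.5 * (y - 0.8 * x)**2) / np.sqrt(2 * np.pi)
mids, dens = discretized_stationary(p, -8, 8, 400)
print("stationary variance ~", np.sum(dens * mids**2) * (mids[1] - mids[0]))
```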
In this paper, we are interested in an inverse Cauchy problem governed by the Stokes equations, called the data completion problem. It consists in determining the unspecified fluid velocity, or one of its components, on a part of the boundary from measurements given on the remaining part. This problem is known to be severely ill-posed in the sense of Hadamard \cite{had}; it is therefore a challenging task to devise a numerical procedure for approximating its solution, especially in the case of noisy data. To solve this problem, we propose a regularizing approach based on a coupled complex boundary method, originally proposed in \cite{source} for solving an inverse source problem. We show the existence of solutions to the regularized optimization problem and prove the convergence of a subsequence of optimal solutions of the Tikhonov regularization formulations to the solution of the Cauchy problem. We then develop a numerical approximation of this problem using the adjoint gradient technique and a finite element method of $P_1$-bubble/$P_1$ type. Finally, we provide numerical results showing the accuracy, effectiveness, and robustness of the proposed approach.
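The Tikhonov template underlying this approach can be stated abstractly: minimize $\|Au - g\|^2 + \alpha\|u\|^2$ over the unknown boundary data $u$, where $A$ is the forward map from Cauchy data to measurements. A minimal discrete sketch of this template alone (a generic least-squares illustration, not the coupled complex boundary formulation):

```python
import numpy as np

def tikhonov(A, g, alpha):
    """Minimize ||A u - g||^2 + alpha ||u||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40)) @ np.diag(0.9 ** np.arange(40))  # ill-conditioned forward map
u_true = rng.normal(size=40)
g = A @ u_true + 1e-3 * rng.normal(size=60)                    # noisy measurements
for alpha in (1e-8, 1e-4, 1e-1):                               # too small / balanced / too large
    print(alpha, np.linalg.norm(tikhonov(A, g, alpha) - u_true))
```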
In this paper, we propose a machine learning approach via a model-operator-data network (MOD-Net) for solving PDEs. A MOD-Net is driven by a model: it solves PDEs through an operator representation, with regularization from data. For linear PDEs, we use a DNN to parameterize the Green's function and obtain a neural operator that approximates the solution according to the Green's method. To train the DNN, the empirical risk consists of the mean squared loss arising from the least-squares or variational formulation of the governing equation and boundary conditions. For complicated problems, the empirical risk also includes a few labels, which are computed on coarse grid points at low computational cost and significantly improve model accuracy. Intuitively, the labeled dataset acts as a regularization in addition to the model constraints. The MOD-Net solves a family of PDEs rather than a specific one and is much more efficient than the original neural operator approach because few expensive labels are required. We numerically show that MOD-Net is very efficient in solving the Poisson equation and the one-dimensional radiative transfer equation. For nonlinear PDEs, the nonlinear MOD-Net can be similarly used as an ansatz, as exemplified by solving several nonlinear problems, such as the Burgers equation.
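For the linear case, the operator representation is $u(x) = \int_\Omega G(x, y) f(y)\,dy$; once a DNN $G_\theta$ parameterizes the Green's function, evaluating the solution map is a quadrature over source samples. A minimal sketch of this evaluation step (network size and names are our assumptions; training is omitted):

```python
import torch, torch.nn as nn

class GreenNet(nn.Module):
    """DNN parameterization G_theta(x, y) of a Green's function on a 1D domain."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, width), nn.Tanh(),
                                 nn.Linear(width, width), nn.Tanh(),
                                 nn.Linear(width, 1))
    def forward(self, x, y):
        return self.net(torch.stack([x, y], dim=-1)).squeeze(-1)

def neural_operator(G, f, x, n_quad=256):
    """u(x) ~ (|Omega| / M) * sum_j G_theta(x, y_j) f(y_j), Monte Carlo quadrature on [0, 1]."""
    y = torch.rand(n_quad)
    Gxy = G(x[:, None].expand(-1, n_quad), y[None, :].expand(x.shape[0], -1))
    return (Gxy * f(y)[None, :]).mean(dim=1)       # |Omega| = 1

G = GreenNet()                                     # untrained here; the empirical risk in
f = lambda y: torch.sin(torch.pi * y)              #   the paper would fit G to the PDE
x = torch.linspace(0, 1, 11)
print(neural_operator(G, f, x).shape)              # torch.Size([11])
```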
The deep-learning-based least squares method has shown successful results in solving high-dimensional nonlinear partial differential equations (PDEs). However, this method usually converges slowly. To speed up its convergence, an active-learning-based sampling algorithm is proposed in this paper. This algorithm actively chooses the most informative training samples from a probability density function based on residual errors, to facilitate error reduction. In particular, points with larger residual errors have a greater chance of being selected for training. The algorithm imitates the human learning process: learners are likely to spend more time repeatedly studying their mistakes than tasks they have already finished correctly. A series of numerical results is presented to demonstrate the effectiveness of our active-learning-based sampling in high dimensions in speeding up the convergence of the deep-learning-based least squares method.
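The selection rule can be sketched concisely: draw a candidate pool, evaluate the PDE residual at each candidate, and resample training points with probability proportional to a power of the residual. A minimal sketch (the residual function is a placeholder):

```python
import numpy as np

def active_sample(residual, dim, n_pool, n_train, power=2.0, rng=None):
    """Pick n_train points from a uniform pool on [0, 1]^dim with probability
    proportional to |residual|^power, so high-error regions are revisited more."""
    rng = rng or np.random.default_rng()
    pool = rng.random((n_pool, dim))
    w = np.abs(residual(pool)) ** power
    idx = rng.choice(n_pool, size=n_train, replace=False, p=w / w.sum())
    return pool[idx]

# Placeholder residual that concentrates the error near the corner (1, ..., 1).
residual = lambda x: np.exp(5 * x.mean(axis=1))
pts = active_sample(residual, dim=10, n_pool=20000, n_train=512)
print(pts.shape, pts.mean())    # mean drifts above 0.5: sampling follows the error
```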
This paper focuses on the regularization of the backward time-fractional diffusion problem on an unbounded domain. This problem is well known to be ill-posed, whence the need for a regularization method in order to recover a stable approximate solution. For the problem under consideration, we present a unified framework of regularization which covers techniques such as Fourier regularization [19], mollification [12], and the approximate inverse [7]. We investigate a regularization technique with two major advantages: the simplicity of computing the regularized solution, and the avoidance of truncating high-frequency components (so as to prevent undesirable oscillations in the resulting approximate solution). Under classical Sobolev smoothness conditions, we derive order-optimal error estimates between the approximate and exact solutions in the case where both the data and the model are only approximately known. In addition, an order-optimal a posteriori parameter choice rule based on the Morozov principle is given. Finally, through numerical experiments in two-dimensional space, we illustrate the efficiency of our regularization approach and numerically confirm the theoretical convergence rates established in the paper.
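The computational core of such a framework is a frequency-domain filter: rather than hard truncation, the data's Fourier coefficients are multiplied by a smooth filter that damps the exponentially amplified modes. A one-dimensional sketch (the amplification factor and filter are generic stand-ins, not the paper's kernel for the time-fractional problem):

```python
import numpy as np

def filtered_reconstruction(g_noisy, amplification, filt):
    """Regularized inversion in frequency space: u_hat = filt * amplification * g_hat."""
    return np.real(np.fft.ifft(filt * amplification * np.fft.fft(g_noisy)))

n = 512
xi = np.fft.fftfreq(n, d=1.0 / n)                  # integer frequencies
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_true = np.sin(x) + 0.3 * np.sin(4 * x)
amplification = np.exp(0.05 * np.abs(xi))          # generic backward-problem growth
g = np.real(np.fft.ifft(np.fft.fft(u_true) / amplification))   # forward-smoothed data
g += 1e-3 * np.random.default_rng(2).normal(size=n)            # measurement noise
alpha = 1e-4
filt = 1.0 / (1.0 + alpha * amplification**2)      # smooth (Tikhonov-type) filter, no cut-off
u_rec = filtered_reconstruction(g, amplification, filt)
print("L2 error:", np.linalg.norm(u_rec - u_true) / np.sqrt(n))
```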
We present an end-to-end framework for solving the Vehicle Routing Problem (VRP) using reinforcement learning. In this approach, we train a single model that finds near-optimal solutions for problem instances sampled from a given distribution, only by observing the reward signals and following feasibility rules. Our model represents a parameterized stochastic policy, and by applying a policy gradient algorithm to optimize its parameters, the trained model produces the solution as a sequence of consecutive actions in real time, without the need to re-train for every new problem instance. On capacitated VRP, our approach outperforms classical heuristics and Google's OR-Tools on medium-sized instances in solution quality with comparable computation time (after training). We demonstrate how our approach can handle problems with split delivery and explore the effect of such deliveries on the solution quality. Our proposed framework can be applied to other variants of the VRP such as the stochastic VRP, and has the potential to be applied more generally to combinatorial optimization problems.
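The training signal here reduces to the REINFORCE policy-gradient estimator: sample routes, accumulate the log-probabilities of the chosen actions, and weight them by the baseline-corrected tour cost. A minimal sketch on a toy routing policy (all module names are our assumptions; the paper's architecture and masking scheme are not reproduced):

```python
import torch, torch.nn as nn

class ToyPolicy(nn.Module):
    """Scores candidate nodes from their coordinates; a stand-in for a real encoder."""
    def __init__(self):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, coords, visited):            # coords: (n, 2), visited: bool mask
        logits = self.score(coords).squeeze(-1).masked_fill(visited, -1e9)
        return torch.distributions.Categorical(logits=logits)

def rollout(policy, coords):
    """Sample a tour; return its length and the summed action log-probabilities."""
    n = coords.shape[0]
    visited = torch.zeros(n, dtype=torch.bool)
    visited[0] = True
    pos, length, logp = 0, 0.0, 0.0
    for _ in range(n - 1):
        dist = policy(coords, visited)
        a = dist.sample()
        logp = logp + dist.log_prob(a)
        length = length + torch.norm(coords[a] - coords[pos])
        pos, visited[a] = a, True
    return length, logp

policy = ToyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
coords = torch.rand(10, 2)                          # one toy instance, 10 nodes
lengths, logps = zip(*(rollout(policy, coords) for _ in range(16)))
lengths, logps = torch.stack(list(lengths)), torch.stack(list(logps))
baseline = lengths.mean().detach()                  # practical baselines: greedy rollout, etc.
loss = ((lengths.detach() - baseline) * logps).mean()   # REINFORCE: push down long tours
opt.zero_grad(); loss.backward(); opt.step()
print("mean tour length:", lengths.mean().item())
```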