
We present a simple algorithm that approximates the viscosity solution of Hamilton--Jacobi~(HJ) equations by means of a deep artificial neural network. The algorithm uses stochastic gradient descent to minimize a least-squares principle defined by a monotone, consistent numerical scheme. We analyze the critical points of the least-squares principle and derive conditions that guarantee that any critical point approximates the sought viscosity solution. Coupling a deep artificial neural network with a finite difference scheme lifts the restriction of conventional finite difference methods to computing functions on a fixed grid. This feature makes it possible to solve HJ equations posed in higher dimensions, where conventional methods are infeasible. We demonstrate the efficacy of our algorithm through numerical studies of various canonical HJ equations across different dimensions, showcasing its potential and versatility.
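A minimal sketch of the least-squares principle described above, assuming a monotone Lax--Friedrichs scheme for the 1D steady eikonal equation $|u_x| = 1$; here a plain callable stands in for the neural network, and all names and constants are illustrative, not the paper's implementation:

```python
import numpy as np

def lax_friedrichs_residual(u, x, h, alpha=1.0):
    """Monotone Lax-Friedrichs residual for the 1D eikonal equation |u_x| = 1.

    u     : callable candidate solution (stand-in for the neural network)
    x     : sample points (need not lie on a fixed grid)
    h     : finite-difference spacing
    alpha : artificial viscosity; must dominate |H'| for monotonicity
    """
    p_minus = (u(x) - u(x - h)) / h          # backward difference
    p_plus  = (u(x + h) - u(x)) / h          # forward difference
    # Numerical Hamiltonian: H_LF(p-, p+) = H((p- + p+)/2) - alpha*(p+ - p-)/2
    return np.abs(0.5 * (p_minus + p_plus)) - 1.0 - 0.5 * alpha * (p_plus - p_minus)

def least_squares_loss(u, n_samples=64, h=1e-2, seed=0):
    """Least-squares principle: mean squared scheme residual at random points."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.1, 0.9, n_samples)     # random interior collocation points
    r = lax_friedrichs_residual(u, x, h)
    return float(np.mean(r ** 2))

exact = lambda x: x          # u(x) = x solves |u_x| = 1
wrong = lambda x: 0.5 * x    # does not
```

In the algorithm above, `u` would be a deep network and this loss would be driven toward zero by stochastic gradient descent over freshly sampled points; the sketch only shows that the exact solution (nearly) zeroes the scheme residual while a wrong candidate does not.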

Related content

Optimal-order error estimates are proved for the full discretization of the Cahn--Hilliard equation with Cahn--Hilliard-type dynamic boundary conditions in a smooth domain. The numerical method combines a linear bulk--surface finite element discretization in space with linearly implicit backward difference formulae of order 1 to 5 in time. The error estimates rest on a consistency and stability analysis in an abstract framework, with energy estimates that exploit the anti-symmetric structure of the second-order system.

In a recent work [Manucci, Unger, ArXiv e-print 2404.10511, 2024], the authors propose using two generalized Lyapunov equations (GLEs) to derive a balancing-based model order reduction~(MOR) method for a general class of switched differential-algebraic equations (DAEs). This work explains why these GLEs provide solutions suitable for MOR by showing that the image set of the solutions of the two GLEs always encloses the reachable and observable set of a suitably defined switched system with the same input-output map as the switched DAE system.
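For orientation, a dense generalized Lyapunov equation of the form $E X A^T + A X E^T = -B B^T$ can be solved by vectorization; this is a minimal sketch with illustrative matrices, not the projected GLEs for switched DAEs treated in the paper, which are considerably more involved:

```python
import numpy as np

def solve_gle(A, E, B):
    """Solve the generalized Lyapunov equation  E X A^T + A X E^T = -B B^T
    via vec(P X Q) = (Q^T kron P) vec(X), giving the linear system
    (A kron E + E kron A) vec(X) = -vec(B B^T).
    Dense and O(n^6): a sketch for tiny n only; production MOR codes
    use low-rank (e.g. ADI-type) solvers instead.
    """
    n = A.shape[0]
    M = np.kron(A, E) + np.kron(E, A)
    rhs = -(B @ B.T).reshape(-1, order="F")    # column-major vec(.)
    X = np.linalg.solve(M, rhs).reshape(n, n, order="F")
    return 0.5 * (X + X.T)                     # symmetrize against round-off

# Illustrative stable pair; E = I reduces this to a standard Lyapunov equation
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
E = np.eye(2)
B = np.array([[1.0], [1.0]])
X = solve_gle(A, E, B)
```

With $E = I$ and $A$ stable, $X$ is the controllability Gramian, so it is symmetric positive semidefinite, which is what makes image-set arguments like the one above meaningful.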

Stochastic Volterra equations (SVEs) serve as mathematical models for the time evolution of random systems with memory effects and irregular behaviour. We introduce neural stochastic Volterra equations as a physics-inspired architecture, generalizing the class of neural stochastic differential equations, and provide some theoretical foundation. Numerical experiments on various SVEs, such as the disturbed pendulum equation, the generalized Ornstein--Uhlenbeck process and the rough Heston model, are presented, comparing the performance of neural SVEs, neural SDEs and Deep Operator Networks (DeepONets).
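A minimal Euler-type scheme for an SVE illustrates the memory structure the neural architecture has to capture; the smooth kernel, drift and diffusion below are illustrative stand-ins for the trained networks, and the scheme is a sketch, not the paper's method:

```python
import numpy as np

def euler_sve(x0, kernel, drift, diffusion, T=1.0, n=500, seed=0):
    """Euler scheme for the stochastic Volterra equation
       X_t = x0 + int_0^t K(t,s) b(X_s) ds + int_0^t K(t,s) sigma(X_s) dW_s.
    The kernel makes X non-Markovian: each step re-weights the whole past.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    dW = rng.normal(0.0, np.sqrt(dt), n)     # Brownian increments
    X = np.empty(n + 1)
    X[0] = x0
    for i in range(1, n + 1):
        s = t[:i]                            # past time points s_0..s_{i-1}
        K = kernel(t[i], s)                  # memory weights K(t_i, s_j)
        X[i] = (x0
                + np.sum(K * drift(X[:i]) * dt)
                + np.sum(K * diffusion(X[:i]) * dW[:i]))
    return t, X

# Exponential-decay memory kernel; mean-reverting drift, constant noise
t, X = euler_sve(x0=1.0,
                 kernel=lambda t, s: np.exp(-(t - s)),
                 drift=lambda x: -x,
                 diffusion=lambda x: 0.3 * np.ones_like(x))
```

Note the $O(n^2)$ cost: unlike an SDE step, every Volterra step revisits the full history, which is exactly the memory effect the abstract refers to.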

The weak maximum principle of finite element methods for parabolic equations is proved for both semi-discretization in space and fully discrete methods with $k$-step backward differentiation formulae for $k = 1,\dots,6$, on a two-dimensional general polygonal domain or a three-dimensional convex polyhedral domain. The semi-discrete result is established via a dyadic decomposition argument and local energy estimates in which the nonsmoothness of the domain can be handled. The fully discrete result for multistep backward differentiation formulae is proved by utilizing the solution representation via the discrete Laplace transform and the resolvent estimates, which are inspired by the analysis of convolutional quadrature for parabolic and fractional-order partial differential equations.
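For reference, the fully discrete scheme analyzed above has the standard $k$-step BDF form; a sketch for the model problem $\partial_t u = \Delta u$ with semi-discrete Laplacian $\Delta_h$ (time step $\tau$, generating polynomial as usual for BDF):

```latex
\frac{1}{\tau}\sum_{j=0}^{k}\delta_j\, u_h^{n-j} = \Delta_h u_h^{n},
\qquad n \ge k,
\qquad\text{where}\quad
\sum_{j=0}^{k}\delta_j\,\zeta^{j} \;=\; \sum_{\ell=1}^{k}\frac{1}{\ell}\,(1-\zeta)^{\ell}.
```

The restriction $k \le 6$ matches the zero-stability limit of the BDF family, and the discrete Laplace transform representation mentioned in the abstract acts on exactly this recurrence.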

We propose an optimization algorithm to improve the design and performance of quantum communication networks. When physical architectures become too complex for analytical methods, numerical simulation becomes essential to study quantum network behavior. Although highly informative, these simulations involve complex numerical functions without known analytical forms, making traditional optimization techniques that assume continuity, differentiability, or convexity inapplicable. Additionally, quantum network simulations are computationally demanding, rendering global approaches like Simulated Annealing or genetic algorithms, which require extensive function evaluations, impractical. We introduce a more efficient optimization workflow using machine learning models, which serve as surrogates for a given objective function. We demonstrate the effectiveness of our approach by applying it to three well-known optimization problems in quantum networking: quantum memory allocation for multiple network nodes, tuning an experimental parameter in all physical links of a quantum entanglement switch, and finding efficient protocol settings within a large asymmetric quantum network. Within the allotted time limit, the solutions found by our algorithm consistently outperform those obtained with our baseline approaches -- Simulated Annealing and Bayesian optimization -- by up to 18\% and 20\%, respectively. Our framework thus allows for more comprehensive quantum network studies, integrating surrogate-assisted optimization with existing quantum network simulators.
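The surrogate-assisted workflow can be sketched in a few lines: fit a cheap model to all evaluations of the expensive black box, jump to the surrogate's minimizer, and spend one true evaluation there. Everything below is illustrative (a polynomial surrogate and a toy objective standing in for the ML models and network simulators of the paper):

```python
import numpy as np

def expensive_objective(x):
    """Stand-in for a costly quantum-network simulation (illustrative)."""
    return (x - 2.0) ** 2 + 0.1 * np.sin(5.0 * x)

def surrogate_optimize(f, bounds=(0.0, 4.0), n_init=5, n_iter=10, seed=0):
    """Surrogate-assisted loop: refit a cubic surrogate to all evaluations,
    minimize the surrogate on a fine grid (cheap), then spend exactly one
    expensive evaluation of f at the surrogate's candidate per iteration."""
    rng = np.random.default_rng(seed)
    xs = list(rng.uniform(*bounds, n_init))   # initial design
    ys = [f(x) for x in xs]
    grid = np.linspace(*bounds, 401)
    for _ in range(n_iter):
        coeffs = np.polyfit(xs, ys, deg=3)    # cheap surrogate model
        cand = grid[np.argmin(np.polyval(coeffs, grid))]
        xs.append(cand)                       # one expensive evaluation
        ys.append(f(cand))                    # per outer iteration
    best = int(np.argmin(ys))
    return xs[best], ys[best]

x_best, y_best = surrogate_optimize(expensive_objective)
```

The point of the design is the evaluation budget: all the search effort happens on the surrogate, and the simulator is queried only once per outer iteration.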

The mathematical formulation of sign-changing problems involves a linear second-order partial differential equation in divergence form, where the coefficient can assume positive and negative values in different subdomains. These problems find their physical background in negative-index metamaterials, either as inclusions embedded into common materials as the matrix or vice versa. In this paper, we propose a numerical method based on the constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM) specifically designed for sign-changing problems. The construction of auxiliary spaces in the original CEM-GMsFEM is tailored to accommodate the sign-changing setting. The numerical results demonstrate the effectiveness of the proposed method in handling sophisticated coefficient profiles and its robustness with respect to the coefficient contrast ratio. Under several technical assumptions and by applying the \texttt{T}-coercivity theory, we establish the inf-sup stability and provide an a priori error estimate for the proposed method.

A high-order numerical method is developed for solving the Cahn-Hilliard-Navier-Stokes equations with the Flory-Huggins potential. The scheme is based on the $Q_k$ finite element with mass lumping on rectangular grids, the second-order convex splitting method, and the pressure correction method. The unique solvability, unconditional stability, and bound-preserving properties are rigorously established. The key to bound-preservation is the discrete $L^1$ estimate of the singular potential. Ample numerical experiments are performed to validate the desired properties of the proposed numerical scheme.

Areas of computational mechanics such as uncertainty quantification and optimization usually involve repeated evaluation of numerical models that represent the behavior of engineering systems. For complex nonlinear systems, however, these models tend to be expensive to evaluate, making surrogate models quite valuable. Artificial neural networks approximate such systems very well by taking advantage of the inherent information in their training data. In this context, this paper investigates the improvement of the training process by including sensitivity information, i.e. partial derivatives of the outputs with respect to the inputs, as outlined by Sobolev training. In computational mechanics, sensitivities can be incorporated into neural network training by expanding the loss function with additional loss terms, thereby improving training convergence and lowering the generalisation error. This improvement is shown in two examples of linear and non-linear material behavior. More specifically, the Sobolev-designed loss function is expanded with residual weights that adjust the effect of each loss term on the training step. Residual weighting assigns a scaling to the different kinds of training data, which in this case are responses and sensitivities. These residual weights are optimized by an adaptive scheme, whereby varying objective functions are explored, with some showing improvements in the accuracy and precision of the overall training convergence.
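The weighted Sobolev loss described above can be sketched with a linear-in-parameters model (a cubic basis standing in for the neural network), for which the minimizer of the response-plus-sensitivity loss is available in closed form; the weights and target function are illustrative:

```python
import numpy as np

def sobolev_fit(x, y, dy, w_resp=1.0, w_sens=1.0):
    """Sobolev-style fit of a cubic model (stand-in for a neural network):
    minimizes  w_resp*||Phi w - y||^2 + w_sens*||dPhi w - dy||^2,
    i.e. the response loss augmented with a sensitivity (derivative) loss,
    each scaled by a residual weight.  For this linear-in-parameters model
    the minimizer follows from the normal equations; a real network would
    reach it by gradient descent on the same weighted loss.
    """
    Phi  = np.stack([np.ones_like(x), x, x**2, x**3], axis=1)   # y basis
    dPhi = np.stack([np.zeros_like(x), np.ones_like(x),
                     2.0 * x, 3.0 * x**2], axis=1)              # dy/dx basis
    A = w_resp * Phi.T @ Phi + w_sens * dPhi.T @ dPhi
    b = w_resp * Phi.T @ y + w_sens * dPhi.T @ dy
    w = np.linalg.solve(A, b)
    return w, Phi @ w, dPhi @ w

# Fit sin(x) using both responses and sensitivities (cos(x))
x = np.linspace(-1.0, 1.0, 25)
w, y_fit, dy_fit = sobolev_fit(x, np.sin(x), np.cos(x))
```

Varying `w_resp` and `w_sens` trades response accuracy against sensitivity accuracy, which is exactly the balance the adaptive residual-weighting scheme in the paper tunes.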

We perform a numerical investigation of the nearly self-similar blowup of generalized axisymmetric Navier-Stokes equations and the Boussinesq system with a time-dependent fractional dimension. The dynamic change of the space dimension is proportional to the ratio R(t)/Z(t), where (R(t),Z(t)) is the location at which the vorticity attains its global maximum. This choice of space dimension ensures that the advection along the r-direction has the same scaling as that along the z-direction, thus preventing the formation of a two-scale solution structure. For the generalized axisymmetric Navier-Stokes equations with solution-dependent viscosity, we show that the solution develops a self-similar blowup with dimension equal to 3.188 and that the self-similar profile satisfies the axisymmetric Navier-Stokes equations with constant viscosity. We also study the nearly self-similar blowup of the axisymmetric Boussinesq system with constant viscosity. The generalized axisymmetric Boussinesq system preserves almost all the known properties of the 3D Navier-Stokes equations except for the conservation of angular momentum. We present convincing numerical evidence that the generalized axisymmetric Boussinesq system develops a stable nearly self-similar blowup solution with maximum vorticity amplified by a factor of O(10^{30}).

We present a new algorithm for solving linear-quadratic regulator (LQR) problems with linear equality constraints, also known as constrained LQR (CLQR) problems. Our method's sequential runtime is linear in the number of stages and constraints, and its parallel runtime is logarithmic in the number of stages. The main technical contribution of this paper is the derivation of parallelizable techniques for eliminating the linear equality constraints while preserving the standard positive (semi-)definiteness requirements of LQR problems.
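The constraint-elimination idea can be sketched for a single stage: with a stagewise input constraint $D u = e$ (illustrative; the paper treats general linear equality constraints across all stages, and in parallel), every feasible input is $u = u_p + Z v$ for a particular solution $u_p$ and a nullspace basis $Z$, and the reduced input-cost Hessian $Z^T R Z$ inherits positive definiteness from $R$:

```python
import numpy as np

def eliminate_constraint(R, D, e):
    """Eliminate the stagewise equality constraint D u = e from the input
    cost 0.5 u^T R u.  Any feasible u is  u = u_p + Z v  with u_p = pinv(D) e
    a particular solution and Z an orthonormal nullspace basis of D; the
    reduced cost is 0.5 v^T (Z^T R Z) v + (Z^T R u_p)^T v + const, and
    Z^T R Z stays positive definite whenever R is."""
    u_p = np.linalg.pinv(D) @ e               # minimum-norm particular solution
    _, s, Vt = np.linalg.svd(D)
    rank = int(np.sum(s > 1e-12))
    Z = Vt[rank:].T                           # orthonormal nullspace basis of D
    R_red = Z.T @ R @ Z                       # reduced (still PD) Hessian
    return u_p, Z, R_red

R = np.diag([2.0, 1.0, 3.0])                  # positive definite input cost
D = np.array([[1.0, 1.0, 0.0]])               # one equality constraint
e = np.array([1.0])
u_p, Z, R_red = eliminate_constraint(R, D, e)
```

Because the eliminated problem is again an unconstrained LQR stage with a positive definite Hessian, standard (and parallelizable) Riccati-style machinery applies downstream, which is the property the abstract highlights.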
