
To integrate large systems of nonlinear differential equations in time, we consider a variant of nonlinear waveform relaxation (also known as dynamic iteration or Picard-Lindelöf iteration), where at each iteration a linear inhomogeneous system of differential equations has to be solved. This is done by the exponential block Krylov subspace (EBK) method. Thus, we have an inner-outer iterative method, where iterative approximations are determined over a certain time interval, with no time stepping involved. This approach has recently been shown to be efficient as a time-parallel integrator within the PARAEXP framework. In this paper, the convergence behavior of this method is assessed theoretically and practically. We examine the efficiency of the method by testing it on the nonlinear Burgers, three-dimensional Liouville-Bratu-Gelfand, and three-dimensional nonlinear heat conduction equations, and by comparing its performance with that of conventional time-stepping integrators.
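
As a concrete illustration of the outer loop, the following sketch applies waveform relaxation to a small semilinear system u' = Au + g(u); the diffusion matrix and the hypothetical cubic nonlinearity g(u) = -u^3 stand in for the paper's test problems, and the inner linear inhomogeneous solves use SciPy's solve_ivp rather than the EBK method described above.

```python
# Minimal waveform-relaxation sketch for u' = A u + g(u).  The paper solves
# each inner linear problem with the EBK method; here solve_ivp is
# substituted, purely for illustration.  A and g are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

n = 50
A = (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) * 0.01 * n**2  # diffusion
g = lambda u: -u**3                        # hypothetical cubic nonlinearity
u0 = np.sin(np.pi * np.linspace(0, 1, n))  # initial condition
T, ts = 1.0, np.linspace(0.0, 1.0, 101)

u_prev = interp1d(ts, np.tile(u0, (ts.size, 1)).T)  # initial waveform u^0(t) = u0
for k in range(20):                        # outer waveform-relaxation iterations
    # Inner problem: linear inhomogeneous system u' = A u + g(u_prev(t)).
    sol = solve_ivp(lambda t, u: A @ u + g(u_prev(t)),
                    (0.0, T), u0, t_eval=ts, rtol=1e-8, atol=1e-10)
    incr = np.max(np.abs(sol.y - u_prev(ts)))       # sup-norm update over [0, T]
    u_prev = interp1d(ts, sol.y)
    if incr < 1e-8:                        # the whole waveform has converged
        break
```

Note that each outer iteration updates the approximation on the entire interval [0, T] at once, which is what makes the approach amenable to time-parallelism.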

Related content

Integration, the VLSI Journal. Publisher: Elsevier.

This paper introduces a new numerical scheme for a system that includes evolution equations describing a perfect plasticity model with a time-dependent yield surface. We demonstrate that the solution to the proposed scheme is stable under suitable norms. Moreover, the stability leads to the existence of an exact solution, and we also prove that the solution to the proposed scheme converges strongly to the exact solution under suitable norms.

We present a numerical scheme for the solution of the initial-value problem for the "bad" Boussinesq equation. The accuracy of the scheme is tested by comparison with exact soliton solutions as well as with recently obtained asymptotic formulas for the solution.

In this short survey we describe recent advances on word equations with non-rational constraints in groups and monoids, highlighting the important role that formal languages play in this area.

Underdetermined generalized absolute value equations (GAVE) arise in real applications. An underdetermined GAVE may have no solution, one solution, finitely many solutions, or infinitely many solutions. This paper gives sufficient conditions that guarantee the existence or nonexistence of solutions of the underdetermined GAVE. In particular, we give sufficient conditions under which the underdetermined GAVE has infinitely many solutions with a certain sign pattern, or with every sign pattern. In addition, iterative methods are developed to compute a solution of the underdetermined GAVE. Some existing results on the square GAVE are extended.
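
For illustration, writing the GAVE in its common form Ax + B|x| = b with A of size m-by-n and m < n, one natural fixed-point iteration takes the least-norm solution of the linearized system at each step. This Picard-type sketch is not necessarily one of the iterative methods developed in the paper; it converges when the pseudoinverse of A composed with B is a contraction.

```python
# Illustrative Picard-type iteration for the underdetermined GAVE
# A x + B |x| = b: each step sets x_{k+1} to the least-norm solution of
# A x = b - B |x_k|.  Converges if ||pinv(A) B|| < 1 (|.| is 1-Lipschitz).
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 8
A = rng.standard_normal((m, n))
B = 0.1 * rng.standard_normal((m, n))   # small B helps the iteration contract
b = rng.standard_normal(m)

Apinv = np.linalg.pinv(A)               # Moore-Penrose pseudoinverse of A
x = np.zeros(n)
for k in range(200):
    x_new = Apinv @ (b - B @ np.abs(x))
    if np.linalg.norm(x_new - x) < 1e-12:
        break
    x = x_new

print("residual:", np.linalg.norm(A @ x + B @ np.abs(x) - b))
```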

We propose a new simple and explicit numerical scheme for time-homogeneous stochastic differential equations. The scheme is based on sampling increments at each time step from a skew-symmetric probability distribution, with the level of skewness determined by the drift and volatility of the underlying process. We show that, as the step-size decreases, the scheme converges weakly to the diffusion of interest. We then consider the problem of simulating from the limiting distribution of an ergodic diffusion process using the numerical scheme with a fixed step-size. We establish conditions under which the numerical scheme converges to equilibrium at a geometric rate, and quantify the bias between the equilibrium distributions of the scheme and of the true diffusion process. Notably, our results do not require a global Lipschitz assumption on the drift, in contrast to those required for the Euler-Maruyama scheme for long-time simulation at fixed step-sizes. Our weak convergence result relies on an extension of the theory of Milstein & Tretyakov to stochastic differential equations with non-Lipschitz drift, which could also be of independent interest. We support our theoretical results with numerical simulations.
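
The abstract does not pin down the scheme's exact distribution, so the following is only a plausible construction consistent with its description, not the paper's method: increments are drawn as a half-normal magnitude with a drift-tilted random sign, which matches the diffusion's first two moments to leading order in the step size h.

```python
# Purely illustrative skew-symmetric increment: xi = s * |Z| with
# Z ~ N(0, sigma(x)^2 h) and P(s = +1) tilted by the drift so that
# E[xi] ~ b(x) h and E[xi^2] = sigma(x)^2 h to leading order.
import numpy as np

def skew_step(x, h, b, sigma, rng):
    z = np.abs(rng.normal(0.0, sigma(x) * np.sqrt(h)))   # half-normal magnitude
    p = 0.5 + (b(x) / sigma(x)) * np.sqrt(np.pi * h / 8.0)
    p = np.clip(p, 0.0, 1.0)                             # sign tilted toward drift
    s = np.where(rng.uniform(size=np.shape(x)) < p, 1.0, -1.0)
    return x + s * z

# Ornstein-Uhlenbeck test case dX = -X dt + dW (a Lipschitz drift is used
# here only for simplicity; the paper's theory allows non-Lipschitz drifts).
rng = np.random.default_rng(1)
b = lambda x: -x
sigma = lambda x: np.ones_like(x)
x, h = np.zeros(10_000), 0.01
for _ in range(5_000):                                   # run toward stationarity
    x = skew_step(x, h, b, sigma, rng)
print("empirical mean/var:", x.mean(), x.var())          # exact OU: N(0, 1/2)
```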

The increasing reliance on numerical methods for controlling dynamical systems and training machine learning models underscores the need for algorithms that dependably and efficiently navigate complex optimization landscapes. Classical gradient descent methods offer strong theoretical guarantees for convex problems; however, they demand meticulous hyperparameter tuning for non-convex ones. The emerging paradigm of learning to optimize (L2O) automates the discovery of algorithms with optimized performance by leveraging learning models and data; yet, it lacks a theoretical framework to analyze the convergence of the learned algorithms. In this paper, we fill this gap by harnessing nonlinear system theory. Specifically, we propose an unconstrained parametrization of all convergent algorithms for smooth non-convex objective functions. Notably, our framework is directly compatible with automatic differentiation tools, ensuring convergence by design while learning to optimize.
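
One simple way to obtain convergence by design, sketched below under the assumption of an L-smooth objective, is to cap a learned correction so that every update remains a descent direction regardless of the learned parameters. The paper's unconstrained parametrization is more general; `correction` here is a hypothetical stand-in for any learned model.

```python
# "Convergence by design" toy: the learned correction c is clipped so that
# <update, grad> <= -(1 - rho) * gamma * ||grad||^2 for any parameters,
# hence descent holds for L-smooth f and a small enough fixed step gamma.
import numpy as np

def safe_update(x, grad, correction, gamma=0.01, rho=0.5):
    c = correction(x, grad)
    norm_c, norm_g = np.linalg.norm(c), np.linalg.norm(grad)
    if norm_c > rho * norm_g:               # enforce ||c|| <= rho * ||grad||
        c = c * rho * norm_g / norm_c
    return x - gamma * (grad - c)           # always a descent step

# Non-convex test function and an arbitrary "learned" correction; safety
# comes from the clipping, not from the correction itself.
f = lambda x: np.sum(x**2) + np.sin(5 * x[0])
df = lambda x: 2 * x + np.array([5 * np.cos(5 * x[0]), 0.0])
correction = lambda x, g: 0.3 * g * np.cos(x)

x = np.array([2.0, -1.5])
for _ in range(1000):
    x = safe_update(x, df(x), correction)
print("stationary point candidate:", x, "grad norm:", np.linalg.norm(df(x)))
```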

Based on the mathematical-physical model of pavement mechanics, a multilayer elastic system with interlayer friction conditions is constructed. Given the complex boundary conditions, the corresponding variational inequalities of the partial differential equations are derived, so that the problem can be analyzed within the variational framework. First, the existence and uniqueness of the solution of the variational inequality are proved; then the approximation error of the numerical solution based on the finite element method is analyzed, and, when the finite element space satisfies certain approximation conditions, the convergence of the numerical solution is proved; finally, in the trivial finite element space, the convergence order of the numerical solution is derived. These conclusions provide basic theoretical support for solving the displacement-strain problem of multilayer elastic systems within the framework of variational inequalities.
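
The multilayer frictional-contact problem itself is beyond a short example, but the same pipeline can be shown on a toy stand-in: a discretized elliptic variational inequality of obstacle type, find u in K = {v >= psi} with a(u, v-u) >= (f, v-u) for all v in K, solved by a projected gradient iteration on the finite element system.

```python
# Projected gradient iteration for a 1D obstacle problem (-u'' = f on (0,1),
# u >= psi, u(0) = u(1) = 0), discretized with P1 finite elements.  A toy
# stand-in for the paper's frictional-contact variational inequality.
import numpy as np

n, h = 99, 1.0 / 100                      # uniform mesh, interior nodes only
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h   # P1 stiffness matrix for -u''
f = -10 * np.ones(n) * h                  # load vector for f = -10
psi = -0.05 * np.ones(n)                  # obstacle

u = np.zeros(n)
omega = h / 4                             # step size below 2 / lambda_max(A)
for k in range(20000):
    u_new = np.maximum(u - omega * (A @ u - f), psi)   # project onto K
    if np.linalg.norm(u_new - u) < 1e-10:
        break
    u = u_new
print("contact set size:", int(np.sum(np.isclose(u, psi))))
```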

We propose a two-step procedure to model and predict high-dimensional functional time series, where the number of function-valued time series $p$ is large in relation to the length of the time series $n$. Our first step performs an eigenanalysis of a positive definite matrix, which leads to a one-to-one linear transformation of the original high-dimensional functional time series; the transformed curve series can be segmented into several groups such that any two subseries from two different groups are uncorrelated both contemporaneously and serially. Consequently, in our second step, those groups are handled separately without loss of information on the overall linear dynamic structure. The second step is devoted to establishing a finite-dimensional dynamical structure for all the transformed functional time series within each group. Furthermore, the finite-dimensional structure is represented by that of a vector time series. Modelling and forecasting for the original high-dimensional functional time series are realized via those for the vector time series in all the groups. We investigate the theoretical properties of our proposed methods, and illustrate the finite-sample performance through both extensive simulation and two real datasets.
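
A schematic of the first step is sketched below for a finite-dimensional surrogate of the curve series (e.g. leading FPCA scores of each functional component): a positive semi-definite matrix is built from lagged autocovariances, and its eigenvectors supply the linear transformation. The lag choice, data-generating process, and grouping diagnostic are illustrative, not the paper's exact procedure.

```python
# Step-one schematic: eigen-transform of W = sum_k S_k S_k^T, where S_k is
# the lag-k autocovariance matrix; groups are then read off from the
# near-zero cross-correlations of the transformed series.
import numpy as np

def segment_transform(X, max_lag=5):
    """X: (n, p) observed series.  Returns transformed series and the map."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    W = np.zeros((p, p))
    for k in range(max_lag + 1):
        S = Xc[k:].T @ Xc[: n - k] / (n - k)   # lag-k autocovariance
        W += S @ S.T                           # positive semi-definite summand
    _, V = np.linalg.eigh(W)                   # orthogonal eigenbasis
    return Xc @ V, V                           # transformed series Z = Xc V

def ar1(phi, n, rng):
    z = np.zeros(n)
    for t in range(1, n):
        z[t] = phi * z[t - 1] + rng.standard_normal()
    return z

rng = np.random.default_rng(2)
n, p = 2000, 6
f1, f2 = ar1(0.8, n, rng), ar1(-0.5, n, rng)   # two independent latent factors
lat = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.standard_normal((n, p))
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))  # orthogonal mixing of groups
Z, V = segment_transform(lat @ Q.T)
# Entries linking the two recovered groups should be near zero.
print(np.round(np.abs(np.corrcoef(Z.T)), 2))
```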

Approximating solutions to partial differential equations (PDE) is an important problem in computational science and engineering. Using neural networks as an ansatz for the solution has proven challenging in terms of training time and approximation accuracy. In this contribution, we discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to make progress on both challenges. In most examples, the random sampling schemes outperform iterative, gradient-based optimization of physics-informed neural networks in training time and accuracy by several orders of magnitude. For time-dependent PDE, we construct neural basis functions only in the spatial domain and then solve the associated ordinary differential equation with classical methods from scientific computing over a long time horizon. This alleviates one of the greatest challenges for neural PDE solvers because it does not require us to parameterize the solution in time. For second-order elliptic PDE in Barron spaces, we prove the existence of sampled networks with $L^2$ convergence to the solution. We demonstrate our approach on several time-dependent and static PDEs. We also illustrate how sampled networks can effectively solve inverse problems in this setting. Benefits compared to common numerical schemes include spectral convergence and mesh-free construction of basis functions.
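
The following minimal sketch illustrates the sampled-network idea for a static PDE: hidden weights and biases are drawn at random and only the linear output layer is fit by least squares at collocation points, here for -u'' = f on (0,1) with homogeneous Dirichlet conditions. The sampling distribution and feature count are simple data-agnostic choices, not necessarily those of the paper.

```python
# Random-feature (sampled-network) solver for -u'' = f, u(0) = u(1) = 0,
# with f chosen so the exact solution is sin(pi x).  Hidden parameters are
# sampled once; only the output layer is solved for, by least squares.
import numpy as np

rng = np.random.default_rng(3)
m = 200                                          # number of sampled neurons
w = rng.uniform(-20, 20, m)                      # sampled hidden weights
bias = rng.uniform(-20, 20, m)                   # sampled hidden biases

def features(x):
    """tanh features phi_j(x) = tanh(w_j x + b_j)."""
    return np.tanh(np.outer(x, w) + bias)

def features_dd(x):
    """Second derivatives phi_j''(x) = -2 w_j^2 t (1 - t^2), t = tanh(.)."""
    t = np.tanh(np.outer(x, w) + bias)
    return -2.0 * (w**2) * t * (1.0 - t**2)

xc = np.linspace(0, 1, 300)                      # interior collocation points
f = np.pi**2 * np.sin(np.pi * xc)                # so that u(x) = sin(pi x)

# Stack the PDE rows (-u'' = f) and the two boundary rows (u = 0).
A = np.vstack([-features_dd(xc), features(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # fit the output layer only

xt = np.linspace(0, 1, 1000)
print("max error:", np.max(np.abs(features(xt) @ coef - np.sin(np.pi * xt))))
```

Because no gradient-based training is involved, the whole "training" step is a single linear least-squares solve, which is where the reported speedups over physics-informed neural networks come from.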

In this paper, to address the optimization problem on a compact matrix manifold, we introduce a novel algorithmic framework called the Transformed Gradient Projection (TGP) algorithm, using the projection onto this compact matrix manifold. Compared with existing algorithms, the key innovation in our approach lies in the utilization of a new class of search directions and various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to guide the selection of the next iterate. Our framework offers flexibility by encompassing the classical gradient projection algorithms as special cases and by intersecting the class of retraction-based line-search algorithms. Notably, when we focus on the Stiefel or Grassmann manifold, many existing algorithms in the literature can be seen as specific instances of our proposed framework, and the framework also induces several new special cases. We then conduct a thorough exploration of the convergence properties of these algorithms, considering various search directions and stepsizes. To achieve this, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, allowing us to extend classical inequalities related to retractions from the literature. Building upon these insights, we establish the weak convergence, convergence rate, and global convergence of TGP algorithms under three distinct stepsizes. In cases where the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those found in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform classical gradient projection and retraction-based line-search algorithms in several scenarios.
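
As a point of reference, the classical gradient projection special case contained in the TGP framework reads X <- P_St(X - alpha grad f(X)), where P_St is the projection onto the Stiefel manifold, computable from an SVD. The transformed search directions and Armijo stepsizes of the paper are omitted below; the fixed stepsize and the toy eigenvector problem are illustrative choices.

```python
# Classical gradient projection on the Stiefel manifold St(n, k):
# X <- P_St(X - alpha * grad f(X)), with P_St(Y) = U V^T from Y = U S V^T
# (the nearest point on the manifold in Frobenius norm).
import numpy as np

def proj_stiefel(Y):
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(4)
n, k = 30, 3
M = rng.standard_normal((n, n)); M = M @ M.T     # symmetric PSD test matrix
f = lambda X: -np.trace(X.T @ M @ X)             # minimized by top-k eigenvectors
grad = lambda X: -2.0 * M @ X

X = proj_stiefel(rng.standard_normal((n, k)))
alpha = 1.0 / (2 * np.linalg.norm(M, 2))         # fixed stepsize
for _ in range(2000):
    X = proj_stiefel(X - alpha * grad(X))

top = np.sort(np.linalg.eigvalsh(M))[-k:]
print("f(X) =", f(X), " vs optimum =", -top.sum())
```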
