
Peskin's Immersed Boundary (IB) model and method are among the most popular modeling tools and numerical methods for fluid-structure interaction. The IB method is known to be first-order accurate in the velocity, yet almost no rigorous proof of this can be found in the literature for the Stokes equations with a prescribed velocity boundary condition. In this paper, we show that the pressure of the Stokes equations has convergence order $O(\sqrt{h})$ in the $L^2$ norm while the velocity has $O(h)$ convergence in the infinity norm in two dimensions (2D). The proofs are based on the idea of the immersed interface method and on the convergence proof of the IB method for elliptic interface problems \cite{li:mathcom}. The argument is intuitive, and the conclusion applies to other boundary conditions as long as the problem is well-posed. The analysis also provides an efficient way to decouple the system into three Helmholtz/Poisson equations without affecting the accuracy. A non-trivial numerical example is provided to confirm the theoretical analysis.
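To make the IB machinery concrete, the sketch below spreads Lagrangian point forces onto a uniform 2D grid using Peskin's standard four-point regularized delta function, the step that couples the immersed boundary to the fluid grid. It is a minimal illustration rather than code from the paper; the function names are ours, and we assume the Lagrangian quadrature weight is already folded into the point forces.

```python
import numpy as np

def phi4(r):
    """Peskin's four-point regularized delta kernel in 1D (support |r| < 2)."""
    r = np.abs(np.asarray(r, dtype=float))
    out = np.zeros_like(r)
    near = r < 1.0
    far = (r >= 1.0) & (r < 2.0)
    out[near] = (3.0 - 2.0 * r[near] + np.sqrt(1.0 + 4.0 * r[near] - 4.0 * r[near] ** 2)) / 8.0
    out[far] = (5.0 - 2.0 * r[far] - np.sqrt(-7.0 + 12.0 * r[far] - 4.0 * r[far] ** 2)) / 8.0
    return out

def spread_force(X, F, h, n):
    """Spread Lagrangian point forces F (shape (Nb, 2), quadrature weight already
    included) located at X (shape (Nb, 2)) onto an n x n grid with spacing h."""
    f = np.zeros((n, n, 2))
    grid = np.arange(n) * h
    for Xk, Fk in zip(np.asarray(X), np.asarray(F)):
        wx = phi4((grid - Xk[0]) / h) / h        # 1D delta weights in x
        wy = phi4((grid - Xk[1]) / h) / h        # 1D delta weights in y
        f += np.outer(wx, wy)[:, :, None] * Fk   # tensor-product delta_h times force
    return f
```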

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series on model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition gives the modeling community an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
October 12, 2021

Many popular adaptive gradient methods such as Adam and RMSProp rely on an exponential moving average (EMA) to normalize their stepsizes. While the EMA makes these methods highly responsive to new gradient information, recent research has shown that it also causes divergence on at least one convex optimization problem. We propose a novel method called Expectigrad, which adjusts stepsizes according to a per-component unweighted mean of all historical gradients and computes a bias-corrected momentum term jointly between the numerator and denominator. We prove that Expectigrad cannot diverge on any instance of the optimization problem known to cause Adam to diverge. We also establish a regret bound in the general stochastic nonconvex setting that suggests Expectigrad is less susceptible to gradient variance than existing methods are. Testing Expectigrad on several high-dimensional machine learning tasks, we find that it often performs favorably compared to state-of-the-art methods with little hyperparameter tuning.
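To fix ideas, here is a rough Python sketch of an Expectigrad-style update as described above: the denominator uses a running unweighted mean of squared gradients instead of an EMA, and a bias-corrected momentum term is applied to the whole preconditioned gradient. Variable names, default hyperparameters, and the exact treatment of the running mean are our simplifications, not the authors' reference implementation.

```python
import numpy as np

def expectigrad_sketch(grad_fn, x0, lr=1e-3, beta=0.9, eps=1e-8, steps=1000):
    """Illustrative Expectigrad-style optimizer (not the authors' reference code)."""
    x = np.asarray(x0, dtype=float).copy()
    s = np.zeros_like(x)    # running sum of squared gradients (no EMA)
    m = np.zeros_like(x)    # momentum on the preconditioned gradient
    for t in range(1, steps + 1):
        g = np.asarray(grad_fn(x), dtype=float)
        s += g ** 2
        precond = g / (eps + np.sqrt(s / t))    # per-component unweighted mean
        m = beta * m + (1.0 - beta) * precond   # momentum applied to the whole ratio
        x -= lr * m / (1.0 - beta ** t)         # bias-corrected step
    return x

# Toy usage on a quadratic: x_min = expectigrad_sketch(lambda x: 2.0 * x, np.ones(3))
```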

A fully discrete finite difference scheme for stochastic reaction-diffusion equations driven by a $1+1$-dimensional white noise is studied. The optimal strong rate of convergence is proved without imposing any regularity assumption on the non-linear reaction term. The proof relies on stochastic sewing techniques.
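For readers who want a concrete picture, the snippet below is a generic explicit finite-difference / Euler-Maruyama discretisation of a reaction-diffusion equation driven by $1+1$-dimensional space-time white noise. It only illustrates the type of scheme being analysed, not the scheme of the paper; the noise scaling $\sqrt{\Delta t/\Delta x}$ per grid node and the stability-driven choice of time step are standard textbook choices.

```python
import numpy as np

def reaction_diffusion_fd(f, u0, T=1.0, L=1.0, n=64, nu=1.0, seed=0):
    """Explicit finite-difference / Euler-Maruyama sketch for
    du = nu * u_xx dt + f(u) dt + dW(t, x) on (0, L) with Dirichlet data,
    driven by 1+1-dimensional space-time white noise."""
    rng = np.random.default_rng(seed)
    dx = L / n
    m = int(np.ceil(4.0 * nu * T * n ** 2 / L ** 2))   # dt <= dx^2 / (4 nu)
    dt = T / m
    u = np.array(u0, dtype=float)                      # n + 1 nodes incl. boundaries
    for _ in range(m):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        # white-noise increment: each interior node receives N(0, dt / dx)
        xi = rng.normal(0.0, np.sqrt(dt / dx), size=n - 1)
        u[1:-1] += dt * (nu * lap + f(u[1:-1])) + xi
    return u

# Example: u_T = reaction_diffusion_fd(lambda v: v - v**3, np.zeros(65))
```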

We propose and study temporal and spatio-temporal discretisations of the 2D stochastic Navier--Stokes equations in bounded domains supplemented with no-slip boundary conditions. Considering additive noise, we base their construction on the related nonlinear random PDE, which is solved by a transform of the solution of the stochastic Navier--Stokes equations. We show a strong rate of (up to) $1$ in probability for the corresponding discretisations in time and in space-time. Convergence of order (up to) $1$ in time was previously only known for linear SPDEs.

Implicit deep learning has received increasing attention recently because it generalizes the recursive prediction rules of many commonly used neural network architectures. Its prediction rule is provided implicitly as the solution of an equilibrium equation. Although a line of recent empirical studies has demonstrated its superior performance, theoretical understanding of implicit neural networks remains limited. In general, the equilibrium equation may not be well-posed during training, so there is no guarantee that training nonlinear implicit neural networks with vanilla (stochastic) gradient descent (SGD) converges. This paper fills the gap by analyzing the gradient flow of Rectified Linear Unit (ReLU) activated implicit neural networks. For an $m$-width implicit neural network with ReLU activation and $n$ training samples, we show that randomly initialized gradient descent converges to a global minimum at a linear rate for the square loss if the implicit neural network is \textit{over-parameterized}. It is worth noting that, unlike existing works on the convergence of (S)GD for finite-layer over-parameterized neural networks, our convergence results hold for implicit neural networks, where the number of layers is \textit{infinite}.
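As a concrete picture of the implicit prediction rule, the sketch below solves the ReLU equilibrium equation $z = \sigma(Wz + Ux + b)$ by plain fixed-point iteration. It illustrates only the forward pass under the assumption that the iteration is a contraction (e.g. $\|W\|_2 < 1$); it is not the training dynamics analysed in the paper, and all names and sizes are illustrative.

```python
import numpy as np

def implicit_relu_forward(W, U, x, b, tol=1e-8, max_iter=500):
    """Fixed-point solve of the equilibrium equation z = relu(W z + U x + b)."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.maximum(W @ z + U @ x + b, 0.0)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy usage: a contractive W (spectral norm < 1) guarantees a unique equilibrium.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16)); W *= 0.5 / np.linalg.norm(W, 2)
U = rng.normal(size=(16, 8)); b = np.zeros(16); x = rng.normal(size=8)
z_star = implicit_relu_forward(W, U, x, b)
```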

We present a new enriched Galerkin (EG) scheme for the Stokes equations based on piecewise linear elements for the velocity unknowns and piecewise constant elements for the pressure. The proposed EG method augments the conforming piecewise linear space for the velocity by adding one additional degree of freedom per element, corresponding to a discontinuous linear basis function. Thus, the total number of degrees of freedom is significantly reduced in comparison with standard conforming, non-conforming, and discontinuous Galerkin schemes for the Stokes equations. We show the well-posedness of the new EG approach and prove that the scheme converges optimally. For the solution of the resulting large-scale indefinite linear systems, we propose robust block preconditioners that yield scalable results independent of the discretization and physical parameters. Numerical results confirm the convergence rates of the discretization and the robustness of the linear solvers for a variety of test problems.
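The snippet below shows a textbook block-diagonal preconditioner, diag(A, M_p) with M_p a pressure mass matrix, applied inside MINRES to a discrete Stokes saddle-point system [[A, B^T], [B, 0]]. This is a classical construction used only to illustrate the idea of parameter-robust block preconditioning; it is not the specific EG preconditioner proposed in the paper, and A, B, M_p are assumed to be given SciPy sparse matrices with A symmetric positive definite.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_stokes_saddle_point(A, B, Mp, f, g):
    """MINRES on [[A, B^T], [B, 0]] [u; p] = [f; g] preconditioned by
    diag(A, Mp)^{-1}: A is the (SPD) velocity stiffness matrix, B the
    discrete divergence, Mp the pressure mass matrix."""
    n, m = A.shape[0], B.shape[0]
    K = sp.bmat([[A, B.T], [B, None]], format="csr")
    rhs = np.concatenate([f, g])

    solve_A = spla.factorized(sp.csc_matrix(A))    # direct solves on the blocks
    solve_Mp = spla.factorized(sp.csc_matrix(Mp))
    prec = spla.LinearOperator(
        (n + m, n + m),
        matvec=lambda r: np.concatenate([solve_A(r[:n]), solve_Mp(r[n:])]),
    )

    sol, info = spla.minres(K, rhs, M=prec)
    return sol[:n], sol[n:], info
```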

One- and multi-dimensional stochastic Maxwell equations with additive noise are considered in this paper. It is known that such a system can be written in a multi-symplectic form and that its stochastic energy increases linearly in time. High-order discontinuous Galerkin (DG) methods are designed for the stochastic Maxwell equations with additive noise, and we show that the proposed methods satisfy the discrete form of the linear energy-growth property and preserve the multi-symplectic structure at the discrete level. An optimal error estimate for the semi-discrete DG method is also established. The fully discrete methods are obtained by coupling with symplectic temporal discretizations. One- and two-dimensional numerical results are provided to demonstrate the performance of the proposed methods; optimal error estimates and linear growth of the discrete energy are observed in all cases.

In this article, I introduce differential equation models and review their frequentist and Bayesian computation methods. A numerical example based on the FitzHugh-Nagumo model is given.
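As a concrete starting point, the sketch below simulates the FitzHugh-Nagumo system forward in time with SciPy. The parameter values and initial condition are illustrative only; in a frequentist analysis such trajectories would enter a nonlinear least-squares fit to data, while a Bayesian analysis would embed the same solver call inside an MCMC likelihood evaluation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.2, b=0.2, c=3.0):
    """FitzHugh-Nagumo system: dV/dt = c*(V - V**3/3 + R), dR/dt = -(V - a + b*R)/c."""
    V, R = y
    return [c * (V - V ** 3 / 3.0 + R), -(V - a + b * R) / c]

# Forward simulation at fixed (a, b, c) from an illustrative initial state.
sol = solve_ivp(fitzhugh_nagumo, (0.0, 20.0), [-1.0, 1.0],
                t_eval=np.linspace(0.0, 20.0, 201))
V, R = sol.y
```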

We argue that proven exponential upper bounds on runtimes, an established area in classic algorithms, are interesting also in heuristic search and we prove several such results. We show that any of the algorithms randomized local search, Metropolis algorithm, simulated annealing, and (1+1) evolutionary algorithm can optimize any pseudo-Boolean weakly monotonic function under a large set of noise assumptions in a runtime that is at most exponential in the problem dimension~$n$. This drastically extends a previous such result, limited to the (1+1) EA, the LeadingOnes function, and one-bit or bit-wise prior noise with noise probability at most $1/2$, and at the same time simplifies its proof. With the same general argument, among others, we also derive a sub-exponential upper bound for the runtime of the $(1,\lambda)$ evolutionary algorithm on the OneMax problem when the offspring population size $\lambda$ is logarithmic, but below the efficiency threshold. To show that our approach can also deal with non-trivial parent population sizes, we prove an exponential upper bound for the runtime of the mutation-based version of the simple genetic algorithm on the OneMax benchmark, matching a known exponential lower bound.
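To make the noisy-optimization setting concrete, here is a small sketch of the (1+1) evolutionary algorithm on OneMax under one-bit prior noise, one of the noise models mentioned above. The re-evaluation policy (the parent's noisy fitness is kept rather than re-sampled) and the stopping rule (first sampling of the true optimum) are simplifications chosen for brevity, not part of the algorithms analysed in the paper.

```python
import numpy as np

def one_plus_one_ea_noisy_onemax(n=50, p_noise=0.3, max_evals=10**6, seed=0):
    """(1+1) EA on OneMax with one-bit prior noise; returns evaluations until
    the true optimum (the all-ones string) is first sampled."""
    rng = np.random.default_rng(seed)

    def noisy_fitness(x):
        y = x.copy()
        if rng.random() < p_noise:         # prior noise: flip one uniformly random bit
            y[rng.integers(n)] ^= 1
        return int(y.sum())

    x = rng.integers(0, 2, size=n)
    fx = noisy_fitness(x)
    for evals in range(1, max_evals + 1):
        mask = rng.random(n) < 1.0 / n     # standard bit-wise mutation, rate 1/n
        child = x ^ mask
        if child.sum() == n:               # true (noise-free) optimum sampled
            return evals
        fc = noisy_fitness(child)
        if fc >= fx:                       # the (1+1) EA accepts ties
            x, fx = child, fc
    return max_evals
```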

This chapter describes how gradient flows and nonlinear power methods in Banach spaces can be used to solve nonlinear eigenvector-dependent eigenvalue problems, and how convergence of (discretized) approximations can be verified. We review several flows from the literature that were proposed to compute nonlinear eigenfunctions and show that they all relate to normalized gradient flows. Furthermore, we show that the implicit Euler discretization of gradient flows gives rise to a nonlinear power method of the proximal operator, and we prove its convergence to nonlinear eigenfunctions. Finally, we prove that $\Gamma$-convergence of functionals implies convergence of their ground states, which is important for discrete approximations.
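As a toy instance of the implicit-Euler / power-method connection, the sketch below runs a normalized proximal power iteration with the $\ell^1$ norm as the one-homogeneous functional, whose proximal operator is soft thresholding. The choice of functional and step size is ours and purely illustrative; the chapter treats the general Banach-space setting.

```python
import numpy as np

def soft_threshold(u, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def proximal_power_method(u0, prox=soft_threshold, tau=0.1, iters=200):
    """Normalized proximal power iteration u_{k+1} = prox(u_k) / ||prox(u_k)||_2,
    i.e. an implicit Euler step of the gradient flow followed by normalization."""
    u = np.asarray(u0, dtype=float)
    u = u / np.linalg.norm(u)
    for _ in range(iters):
        v = prox(u, tau)
        nrm = np.linalg.norm(v)
        if nrm == 0.0:                 # step size too large: everything thresholded
            break
        u = v / nrm
    return u

# Toy run: the iterates concentrate on the largest-magnitude coordinate,
# a nonlinear eigenvector of the l1 subdifferential.
u_star = proximal_power_method(np.array([0.9, 0.5, -0.1, 0.05]))
```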

We propose a new method of estimation in topic models that is not a variation on existing simplex-finding algorithms and that estimates the number of topics K from the observed data. We derive new finite-sample minimax lower bounds for the estimation of the topic matrix A, as well as new upper bounds for our proposed estimator. We describe the scenarios in which our estimator is minimax adaptive. Our finite-sample analysis is valid for any number of documents (n), individual document length (N_i), dictionary size (p), and number of topics (K), and both p and K are allowed to grow with n, a situation not handled well by previous analyses. We complement our theoretical results with a detailed simulation study. We illustrate that the new algorithm is faster and more accurate than current ones, even though it starts with the computational and theoretical disadvantage of not knowing the correct number of topics K, while the competing methods are provided with the correct value in our simulations.
