
Consider the approximation of stochastic Allen-Cahn-type equations (i.e. $1+1$-dimensional space-time white noise-driven stochastic PDEs with polynomial nonlinearities $F$ such that $F(\pm \infty)=\mp \infty$) by a fully discrete space-time explicit finite difference scheme. The consensus in the literature, supported by rigorous lower bounds, is that a strong convergence rate of $1/2$ with respect to the parabolic grid meshsize is expected to be optimal. We show that one can reach almost sure convergence rate $1$ (and no better) when measuring the error in appropriate negative Besov norms, by temporarily 'pretending' that the SPDE is singular.
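For intuition about the object being discretized, the following is a minimal sketch (not the paper's exact scheme) of a fully discrete explicit finite difference method for a stochastic Allen-Cahn equation $du = (u_{xx} + u - u^3)\,dt + dW$ on $[0,1]$ with Dirichlet boundary conditions and space-time white noise; all parameters are illustrative toy choices.

```python
import numpy as np

# Explicit space-time finite difference scheme for a stochastic
# Allen-Cahn equation  du = (u_xx + u - u^3) dt + dW  on [0, 1].
rng = np.random.default_rng(0)
M = 64                      # spatial intervals
dx = 1.0 / M
dt = 0.25 * dx**2           # parabolic scaling: dt ~ dx^2
N = 200                     # time steps

x = np.linspace(0.0, 1.0, M + 1)
u = np.sin(np.pi * x)       # initial condition, zero at the boundary

for _ in range(N):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    # discrete space-time white noise: i.i.d. N(0, dt/dx) per node
    xi = rng.normal(0.0, np.sqrt(dt / dx), size=u.shape)
    u[1:-1] += dt * (lap[1:-1] + u[1:-1] - u[1:-1]**3) + xi[1:-1]
```

The abstract's point concerns how the error of such a scheme is measured: rate $1/2$ in standard norms versus rate $1$ in suitable negative Besov norms.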

Related content

Recent progress was made by the learning theory community in characterizing the generalization error of gradient methods for general convex losses. In this work, we focus on how training longer might affect generalization in smooth stochastic convex optimization (SCO) problems. We first provide tight lower bounds for general non-realizable SCO problems. Furthermore, existing upper bound results suggest that sample complexity can be improved by assuming the loss is realizable, i.e. an optimal solution simultaneously minimizes the loss at every data point. However, this improvement is compromised when training time is long, and lower bounds are lacking. Our paper examines this observation by providing excess risk lower bounds for gradient descent (GD) and stochastic gradient descent (SGD) in two realizable settings: (1) realizable with $T = O(n)$, and (2) realizable with $T = \Omega(n)$, where $T$ denotes the number of training iterations and $n$ is the size of the training dataset. These bounds are novel and informative in characterizing the relationship between $T$ and $n$. In the first, small-training-horizon case, our lower bounds almost tightly match the corresponding upper bounds and provide the first optimality certificates for them. However, for the realizable case with $T = \Omega(n)$, a gap exists between the lower and upper bounds. We conjecture that the gap can be closed by improving the upper bounds, which is supported by our analyses in one-dimensional and linear regression scenarios.
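To make the realizable setting concrete, here is a toy sketch (our own illustration, not the paper's construction): noiseless linear regression, where a single parameter vector fits every sample, so one optimum simultaneously minimizes each per-sample loss, and plain gradient descent drives the empirical risk to zero. Dimensions, step size, and iteration count are arbitrary choices.

```python
import numpy as np

# Realizable SCO toy example: noiseless linear regression.
rng = np.random.default_rng(1)
n, d = 50, 5                      # n samples, dimension d
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star                    # no label noise: realizable

w = np.zeros(d)
lr = 0.1
for _ in range(1000):             # training horizon T (cf. T = O(n) vs T = Omega(n))
    w -= lr * (X.T @ (X @ w - y) / n)   # full-batch gradient step

excess_risk = np.mean((X @ w - y) ** 2)
```

Since the problem is realizable, the empirical risk converges to zero; the abstract's question is how the population excess risk behaves as $T$ grows relative to $n$.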

Motivated by the numerical modeling of ultrasound waves, we investigate robust conforming finite element discretizations of quasilinear and possibly nonlocal equations of Westervelt type. These wave equations involve either strong dissipation or damping of fractional-derivative type, and we unify them into one class by introducing a memory kernel that satisfies non-restrictive regularity and positivity assumptions. As the involved damping parameter is relatively small and can become negligible in certain (inviscid) media, it is important to develop methods that remain stable as this parameter vanishes. To this end, the contributions of this work are twofold. First, we determine sufficient conditions under which conforming finite element discretizations of (non)local Westervelt equations can be made robust with respect to the dissipation parameter. Second, we establish the rate of convergence of the semi-discrete solutions in the singular vanishing-dissipation limit. The analysis hinges upon devising appropriate energy functionals for the semi-discrete solutions that remain uniformly bounded with respect to the damping parameter.

A class of implicit Milstein type methods is introduced and analyzed in the present article for stochastic differential equations (SDEs) with non-globally Lipschitz drift and diffusion coefficients. By incorporating a pair of method parameters $\theta, \eta \in [0, 1]$ into both the drift and diffusion parts, the new schemes are a kind of drift-diffusion doubly implicit method. Within a general framework, we offer upper mean-square error bounds for the proposed schemes, based on error terms that involve only the exact solution process. Such error bounds allow us to easily analyze the mean-square convergence rates of the schemes, without relying on a priori high-order moment estimates of the numerical approximations. Under an additional globally polynomial growth condition, we recover the expected mean-square convergence rate of order one for the considered schemes with $\theta \in [\tfrac12, 1], \eta \in [0, 1]$. Also, some of the proposed schemes are applied to solve three SDE models evolving in the positive domain $(0, \infty)$. More specifically, the fully drift-diffusion implicit Milstein method ($\theta = \eta = 1$) is utilized to approximate the Heston $\tfrac32$-volatility model and the stochastic Lotka-Volterra competition model, while the semi-implicit Milstein method ($\theta = 1, \eta = 0$) is used to solve the Ait-Sahalia interest rate model. Thanks to the previously obtained error bounds, we reveal the optimal mean-square convergence rate of these positivity-preserving schemes under conditions more relaxed than those in existing relevant results in the literature. Numerical examples are reported to confirm these findings.
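As a concrete instance, the following sketch applies the semi-implicit Milstein scheme ($\theta = 1$, $\eta = 0$; drift implicit, diffusion explicit) to geometric Brownian motion, a toy model (not one of the three positive-domain models treated in the article) whose exact solution is known, so the strong error can be checked directly. All parameters are illustrative.

```python
import numpy as np

# Semi-implicit Milstein for GBM  dX = mu X dt + sigma X dW:
#   X_{n+1} = X_n + mu*X_{n+1}*h + sigma*X_n*dW
#             + 0.5*sigma^2*X_n*(dW^2 - h)
# solved for X_{n+1} since the implicit drift is linear here.
rng = np.random.default_rng(5)
mu, sigma, X0, T = -0.5, 0.5, 1.0, 1.0
N, paths = 100, 500
h = T / N

dW = rng.normal(0.0, np.sqrt(h), size=(paths, N))
X = np.full(paths, X0)
for n in range(N):
    dw = dW[:, n]
    X = (X + sigma * X * dw + 0.5 * sigma**2 * X * (dw**2 - h)) / (1 - mu * h)

# exact GBM solution driven by the same Brownian path
X_exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))
rms = np.sqrt(np.mean((X - X_exact) ** 2))
```

The root-mean-square endpoint error is small, consistent with the order-one mean-square rate discussed in the abstract.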

We prove a weak rate of convergence of a fully discrete scheme for the stochastic Cahn--Hilliard equation with additive noise, where the spectral Galerkin method is used in space and the backward Euler method in time. Compared with Allen--Cahn-type stochastic partial differential equations, the error analysis here is considerably more involved due to the presence of the unbounded operator in front of the nonlinear term. To address this issue, a novel and direct approach is exploited which relies not on a Kolmogorov equation but on the integration by parts formula from Malliavin calculus. To the best of our knowledge, this is the first time weak convergence rates have been established in the stochastic Cahn--Hilliard setting.
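For intuition about the discretization itself (not the weak-error analysis), here is a rough sine-spectral Galerkin plus semi-implicit backward Euler sketch of a 1D stochastic Cahn-Hilliard equation $du = -\Delta(\Delta u + u - u^3)\,dt + dW$ on $(0,\pi)$: the fourth-order operator is treated implicitly and the remaining terms explicitly. All parameters are toy choices.

```python
import numpy as np

# Sine-mode Galerkin truncation: -Delta has eigenvalues k^2, so the
# bi-Laplacian contributes k^4 (implicit) and Delta(u^3 - u)
# contributes -k^2 * coefficients of (u^3 - u) (explicit).
rng = np.random.default_rng(3)
M = 32
k = np.arange(1, M + 1)
x = np.linspace(0.0, np.pi, 200)
Phi = np.sin(np.outer(x, k)) * np.sqrt(2 / np.pi)   # orthonormal basis
dx_g = x[1] - x[0]

dt, N = 1e-4, 100
a = rng.normal(0.0, 0.1, size=M)   # initial spectral coefficients

for _ in range(N):
    u = Phi @ a                             # physical-space values
    nl = u**3 - u
    # project the nonlinearity onto the sine modes (simple quadrature)
    nl_hat = (Phi * nl[:, None]).sum(axis=0) * dx_g
    dW = rng.normal(0.0, np.sqrt(dt), size=M)   # white-in-time modes
    a = (a - dt * k**2 * nl_hat + dW) / (1 + dt * k**4)
```

The implicit treatment of $k^4$ keeps the scheme stable despite the stiff bi-Laplacian, mirroring the backward Euler step in the abstract.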

This paper considers the Cauchy problem for the nonlinear dynamic string equation of Kirchhoff type with time-varying coefficients. The objective of this work is to develop a temporal discretization algorithm capable of approximating a solution to this initial-boundary value problem. To this end, a symmetric three-layer semi-discrete scheme is employed with respect to the temporal variable, wherein the nonlinear term is evaluated at the middle node point. This approach enables the numerical solution at each temporal step to be obtained by inverting linear operators, yielding a system of second-order linear ordinary differential equations. Local convergence of the proposed scheme is established: it achieves quadratic convergence with respect to the time step on a local temporal interval.
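A simplified fully discrete sketch of such a three-layer scheme (constant coefficients, our own toy setup, not the paper's algorithm) for a Kirchhoff string $u_{tt} = \varphi(\|u_x\|^2)\, u_{xx}$: the Kirchhoff coefficient is frozen at the middle layer and the spatial operator is averaged over the outer layers, so each step reduces to a linear solve.

```python
import numpy as np

# Symmetric three-layer scheme:
#   (u^{n+1} - 2u^n + u^{n-1}) / tau^2
#       = phi(||u_x^n||^2) * D * (u^{n+1} + u^{n-1}) / 2
# with D the 1D Dirichlet Laplacian, so each step inverts a linear operator.
M, T, N = 64, 1.0, 200
h, tau = 1.0 / M, T / N
x = np.linspace(0.0, 1.0, M + 1)
phi = lambda s: 1.0 + s          # illustrative Kirchhoff coefficient

main = -2.0 * np.ones(M - 1)
D = (np.diag(main) + np.diag(np.ones(M - 2), 1)
     + np.diag(np.ones(M - 2), -1)) / h**2

u_prev = np.sin(np.pi * x)[1:-1]   # u^0 on interior nodes
u_curr = u_prev.copy()             # u^1, crude zero-velocity start

for _ in range(N - 1):
    grad = np.diff(np.concatenate(([0.0], u_curr, [0.0]))) / h
    c = phi(h * np.sum(grad**2))   # nonlinear term at the middle layer
    A = np.eye(M - 1) - 0.5 * tau**2 * c * D
    rhs = 2 * u_curr - u_prev + 0.5 * tau**2 * c * (D @ u_prev)
    u_next = np.linalg.solve(A, rhs)
    u_prev, u_curr = u_curr, u_next
```

Because the outer layers are averaged symmetrically, the scheme remains stable without a time-step restriction, at the cost of one linear solve per step.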

In this paper we prove convergence rates for time discretisation schemes for semi-linear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator $A$ is the generator of a strongly continuous semigroup $S$ on a Hilbert space $X$, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. The usual schemes such as splitting/exponential Euler, implicit Euler, and Crank-Nicolson are included as special cases. Under conditions on the nonlinearity and the noise we show:

- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (linear equation, additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \log(T/k)$ (nonlinear equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (nonlinear wave equation, multiplicative noise).

The logarithmic factor can be removed if the splitting scheme is used with a (quasi-)contractive $S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schr\"odinger equations, and wave equations are included. For these equations our results improve and reprove several existing results with a unified method.
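The following toy sketch (our own illustration, with arbitrary parameters) runs one common variant of the splitting/exponential Euler scheme, $U^{j+1} = e^{kA}(U^j + \Delta W_j)$, for a linear equation $dU = AU\,dt + dW$ with diagonal $A$, and compares a coarse run against a fine-step run driven by the same Brownian path, mimicking the uniform-in-time error $\mathrm{E}_k^\infty$ of the abstract.

```python
import numpy as np

# Exponential Euler for dU = A U dt + dW with A = diag(lam).
rng = np.random.default_rng(4)
d, T = 20, 1.0
lam = -np.arange(1, d + 1, dtype=float)     # eigenvalues of A
Nf, Nc = 1000, 100                          # fine / coarse step counts
kf, kc = T / Nf, T / Nc
dWf = rng.normal(0.0, np.sqrt(kf), size=(Nf, d))

def exp_euler(dW, k):
    U = np.ones(d)
    path = [U.copy()]
    E = np.exp(lam * k)                     # exact semigroup factor
    for inc in dW:
        U = E * (U + inc)
        path.append(U.copy())
    return np.array(path)

fine = exp_euler(dWf, kf)
# coarse increments are sums of fine ones: same Brownian path
coarse = exp_euler(dWf.reshape(Nc, Nf // Nc, d).sum(axis=1), kc)
# sup over coarse grid points of the pathwise difference
sup_err = np.max(np.abs(fine[::Nf // Nc] - coarse))
```

Here the deterministic part is propagated exactly, so the discrepancy comes purely from the noise quadrature, the mechanism behind the $k\log(T/k)$ bounds in the linear additive case.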

The generalized coloring numbers of Kierstead and Yang (Order 2003) offer an algorithmically useful characterization of graph classes with bounded expansion. In this work, we consider the hardness and approximability of these parameters. First, we complete the work of Grohe et al. (WG 2015) by showing that computing the weak 2-coloring number is NP-hard. Our approach further establishes that determining if a graph has weak $r$-coloring number at most $k$ is para-NP-hard when parameterized by $k$ for all $r \geq 2$. We adapt this to determining if a graph has $r$-coloring number at most $k$, proving para-NP-hardness for all $r \geq 2$ as well. Para-NP-hardness implies that no XP algorithm (runtime $O(n^{f(k)})$) exists for testing whether a generalized coloring number is at most $k$. Moreover, there exists a constant $c$ such that it is NP-hard to approximate the generalized coloring numbers within a factor of $c$. To complement these results, we give an approximation algorithm for the generalized coloring numbers, improving both the runtime and the approximation factor of the existing approach of Dvo\v{r}\'{a}k (EuJC 2013). We prove that greedily ordering vertices with small estimated backconnectivity achieves a $(k-1)^{r-1}$-approximation for the $r$-coloring number and an $O(k^{r-1})$-approximation for the weak $r$-coloring number.
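To illustrate the parameter itself (not the paper's approximation algorithm), here is a brute-force computation of the weak $r$-coloring number of a tiny graph: minimize, over all vertex orderings $L$, the largest set of vertices weakly $r$-reachable from any $v$, where $u$ is weakly $r$-reachable from $v$ if some path of length at most $r$ from $v$ ends at $u$ and $u$ is the $L$-minimum vertex on that path. The search is exponential in $|V|$, so this is only a definitional sketch.

```python
from itertools import permutations

def wreach(adj, pos, v, r):
    # Enumerate walks of length <= r from v, tracking the minimum
    # position along each walk; an endpoint that attains the minimum is
    # weakly r-reachable (walks and paths give the same reachable set,
    # since shortcutting a walk can only keep the endpoint minimal).
    reach = set()
    stack = [(v, pos[v], 0)]        # (vertex, min position on walk, length)
    while stack:
        u, m, depth = stack.pop()
        if pos[u] == m:
            reach.add(u)
        if depth < r:
            for w in adj[u]:
                stack.append((w, min(m, pos[w]), depth + 1))
    return reach

def wcol(adj, r):
    # Weak r-coloring number by brute force over orderings.
    verts = list(adj)
    best = len(verts)
    for order in permutations(verts):
        pos = {v: i for i, v in enumerate(order)}
        width = max(len(wreach(adj, pos, v, r)) for v in verts)
        best = min(best, width)
    return best
```

For example, a path on three vertices has weak 1- and 2-coloring number 2, while a 4-cycle has weak 1-coloring number 3; the hardness results above say such exact computation cannot be made efficient in general.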

The time-marching strategy, which propagates the solution from one time step to the next, is a natural strategy for solving time-dependent differential equations on classical computers, as well as for solving the Hamiltonian simulation problem on quantum computers. For more general linear differential equations, a time-marching based quantum solver can suffer from exponentially vanishing success probability with respect to the number of time steps and is thus considered impractical. We solve this problem by repeatedly invoking a technique called uniform singular value amplification, so that the overall success probability can be lower bounded by a quantity that is independent of the number of time steps. The success probability can be further improved using a compression gadget lemma. This provides a path for designing quantum differential equation solvers that is alternative to those based on quantum linear systems algorithms (QLSA). We demonstrate the performance of the time-marching strategy with a high-order integrator based on the truncated Dyson series. The complexity of the algorithm depends linearly on the amplification ratio, which quantifies the deviation from unitary dynamics. We prove that the linear dependence on the amplification ratio attains the query complexity lower bound and thus cannot be improved in the worst case. This algorithm also surpasses existing QLSA based solvers in three aspects: (1) the coefficient matrix $A(t)$ does not need to be diagonalizable; (2) $A(t)$ can be non-smooth and need only be of bounded variation; (3) it can use fewer queries to the initial state. Finally, we demonstrate the time-marching strategy with a first-order truncated Magnus series, while retaining the aforementioned benefits. Our analysis also raises some open questions concerning the differences between time-marching and QLSA based methods for solving differential equations.

This paper is concerned with low-rank matrix optimization, which has found a wide range of applications in machine learning. In the special case of matrix sensing, this problem has been studied extensively through the notion of the Restricted Isometry Property (RIP), leading to a wealth of results on the geometric landscape of the problem and the convergence rate of common algorithms. However, the existing results can handle a general objective function with noisy data only when the RIP constant is close to 0. In this paper, we develop a new mathematical framework that solves the above problem with a far less restrictive RIP constant. We prove that as long as the RIP constant of the noiseless objective is less than $1/3$, any spurious local solution of the noisy optimization problem must be close to the ground truth solution. By working through the strict saddle property, we also show that an approximate solution can be found in polynomial time. We characterize the geometry of the spurious local minima of the problem in a local region around the ground truth in the case when the RIP constant is greater than $1/3$. Compared to the existing results in the literature, this paper offers the strongest RIP bound and provides a complete theoretical analysis of the global and local optimization landscapes of general low-rank optimization problems under random corruptions from any finite-variance family.
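The following toy sketch (our own illustration, not the paper's framework) sets up the matrix sensing special case: recover a rank-1 ground truth $M^* = zz^T$ from random Gaussian linear measurements, which satisfy RIP with high probability, by gradient descent on a Burer-Monteiro factorization. In this noiseless, well-conditioned regime the landscape is benign and random initialization succeeds; sizes, step size, and iteration count are arbitrary.

```python
import numpy as np

# Rank-1 matrix sensing: y_i = <A_i, z z^T>, minimize
# f(u) = (1/2m) * sum_i (<A_i, u u^T> - y_i)^2 by gradient descent.
rng = np.random.default_rng(6)
d, m = 10, 200
z = rng.normal(size=d)
z /= np.linalg.norm(z)
A = rng.normal(size=(m, d, d))
A = (A + A.transpose(0, 2, 1)) / 2          # symmetric sensing matrices
y = np.einsum('mij,i,j->m', A, z, z)        # noiseless measurements

u = 0.1 * rng.normal(size=d)                # small random initialization
lr = 0.05
for _ in range(1000):
    r = np.einsum('mij,i,j->m', A, u, u) - y          # residuals
    grad = 2 * np.einsum('m,mij,j->i', r, A, u) / m   # gradient of f
    u -= lr * grad

# compare outer products (u and -u are equally valid factors)
err = np.linalg.norm(np.outer(u, u) - np.outer(z, z))
```

The abstract concerns the harder regime of general objectives with noisy data, where spurious local minima can exist once the RIP constant exceeds $1/3$.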

We present DiffXPBD, a novel and efficient analytical formulation for the differentiable position-based simulation of compliant constrained dynamics (XPBD). Our proposed method allows the simultaneous computation of gradients of numerous parameters with respect to a goal function, while leveraging a performant simulation model. The method is efficient, enabling differentiable simulations of high-resolution geometries with many degrees of freedom (DoFs). Collisions are naturally included in the framework, and our differentiable model allows a user to easily add further optimization variables. Every control variable gradient requires the computation of only a few partial derivatives, which can be computed using automatic differentiation code. We demonstrate the efficacy of the method with examples such as elastic material parameter estimation, initial value optimization, optimizing for an underlying body shape and pose by observing only the clothing, and optimizing a time-varying external force sequence to match sparse keyframe shapes at specific times. Our approach is highly efficient, which we demonstrate on high-resolution meshes with optimizations involving over 26 million degrees of freedom. Making an existing solver differentiable requires only a few modifications, and the model is compatible with both modern CPU and GPU multi-core hardware.
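A minimal XPBD-style sketch (our own toy setup, not the DiffXPBD formulation): two unit-mass particles on a line joined by one compliant distance constraint, with a constant pull on the second particle and crude velocity damping so the system settles. We then estimate the gradient of a goal (the final stretch) with respect to the compliance by central finite differences; DiffXPBD obtains such gradients analytically instead. All names and parameters are illustrative.

```python
import numpy as np

def simulate(compliance, steps=500, dt=0.01, rest=1.0, pull=1.0):
    # XPBD step: integrate, then project the compliant constraint
    # C(x) = (x1 - x0) - rest with scaled compliance alpha = c / dt^2.
    x = np.array([0.0, rest])
    v = np.zeros(2)
    for _ in range(steps):
        v[1] += dt * pull          # external force on the right particle
        v *= 0.9                   # crude damping so the system settles
        x_prev = x.copy()
        x = x + dt * v
        lam = 0.0
        alpha = compliance / dt**2
        for _ in range(10):        # constraint projection iterations
            C = (x[1] - x[0]) - rest
            dlam = (-C - alpha * lam) / (2.0 + alpha)
            lam += dlam
            x[0] -= dlam           # grad C w.r.t. x0 is -1
            x[1] += dlam           # grad C w.r.t. x1 is +1
        v = (x - x_prev) / dt      # PBD-style velocity update
    return x

stretch = lambda c: simulate(c)[1] - simulate(c)[0]
eps = 1e-4
dgoal_dcompliance = (stretch(0.01 + eps) - stretch(0.01 - eps)) / (2 * eps)
```

A softer constraint (larger compliance) yields a larger steady stretch, so the estimated gradient is positive; a differentiable simulator exposes exactly this kind of sensitivity for optimization.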
