
This work aims to provide a comprehensive and unified numerical analysis for nonlinear systems of parabolic variational inequalities (PVIs) subject to Dirichlet boundary conditions. This analysis enables us to establish the existence of the exact solution to the considered model and to prove the convergence of the approximate solution and its approximate gradient. Our results are applicable to several conforming and nonconforming numerical schemes.

Related content

Iterative regularization exploits the implicit bias of an optimization algorithm to regularize ill-posed problems. Constructing algorithms with such built-in regularization mechanisms is a classic challenge in inverse problems, but also in modern machine learning, where it provides both a new perspective on algorithm analysis and significant speed-ups compared to explicit regularization. In this work, we propose and study the first iterative regularization procedure able to handle biases described by nonsmooth and non-strongly convex functionals, which are prominent in low-complexity regularization. Our approach is based on a primal-dual algorithm, for which we analyze convergence and stability properties, even in the case where the original problem is infeasible. The general results are illustrated by considering the special case of sparse recovery with the $\ell_1$ penalty. Our theoretical results are complemented by experiments showing the computational benefits of our approach.
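
As a concrete illustration of the sparse-recovery special case, the following minimal sketch runs a Chambolle-Pock-type primal-dual iteration for basis pursuit, $\min \|x\|_1$ subject to $Ax = b$; the step sizes, iteration count, and toy data are illustrative assumptions and do not reproduce the stability analysis or the more general biases handled in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_l1(A, b, n_iter=5000):
    """Chambolle-Pock-type primal-dual iteration for
    min ||x||_1  subject to  A x = b  (basis pursuit)."""
    m, n = A.shape
    L = np.linalg.norm(A, 2)            # spectral norm of A
    tau = sigma = 0.99 / L              # step sizes with tau * sigma * L^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iter):
        # dual ascent step on the linear constraint A x = b
        y = y + sigma * (A @ x_bar - b)
        # primal proximal step on the l1 norm
        x_new = soft_threshold(x - tau * (A.T @ y), tau)
        # extrapolation
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

# toy sparse-recovery instance
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = primal_dual_l1(A, b)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```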

It is common practice to use Laplace approximations to compute marginal likelihoods in Bayesian versions of generalised linear models (GLM). Marginal likelihoods combined with model priors are then used in different search algorithms to compute the posterior marginal probabilities of models and individual covariates. This allows performing Bayesian model selection and model averaging. For large sample sizes, even the Laplace approximation becomes computationally challenging because the optimisation routine involved needs to evaluate the likelihood on the full set of data in multiple iterations. As a consequence, the algorithm is not scalable for large datasets. To address this problem, we suggest using a version of a popular batch stochastic gradient descent (BSGD) algorithm for estimating the marginal likelihood of a GLM by subsampling from the data. We further combine the algorithm with Markov chain Monte Carlo (MCMC) based methods for Bayesian model selection and provide some theoretical results on the convergence of the estimates. Finally, we report results from experiments illustrating the performance of the proposed algorithm.
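
A minimal sketch of the general idea, assuming a Bayesian logistic regression with an $N(0, \tau^2 I)$ prior: minibatch stochastic gradient ascent on an unbiased estimate of the log joint locates an approximate posterior mode, after which a Laplace approximation of the log marginal likelihood is formed. The link function, prior, learning rate, and batch size are illustrative choices, not the paper's exact BSGD algorithm or its convergence guarantees.

```python
import numpy as np

def log_joint_and_mode(X, y, tau2=10.0, lr=0.05, epochs=30, batch=64, seed=0):
    """Minibatch SGD to approximate the MAP of a Bayesian logistic regression
    with N(0, tau2 I) prior, followed by a Laplace approximation of the
    log marginal likelihood at that mode."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            sub = idx[start:start + batch]
            p = 1.0 / (1.0 + np.exp(-X[sub] @ beta))
            # unbiased estimate of the full-data gradient of the log joint
            grad = (n / len(sub)) * X[sub].T @ (y[sub] - p) - beta / tau2
            beta += lr * grad / n
    # Laplace approximation:
    # log p(y) ~ log p(y | beta) + log p(beta) + d/2 log(2 pi) - 1/2 log|H|
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    log_joint = np.sum(y * np.log(p) + (1 - y) * np.log1p(-p)) \
        - 0.5 * beta @ beta / tau2 - 0.5 * d * np.log(2 * np.pi * tau2)
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / tau2   # negative Hessian
    log_marg = log_joint + 0.5 * d * np.log(2 * np.pi) \
        - 0.5 * np.linalg.slogdet(H)[1]
    return beta, log_marg

# toy demo with synthetic data
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5))
true_beta = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat, log_ml = log_joint_and_mode(X, y)
print("approximate log marginal likelihood:", log_ml)
```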

We use a numerical-analytic technique to construct a sequence of successive approximations to the solution of a system of fractional differential equations, subject to Dirichlet boundary conditions. We prove the uniform convergence of the sequence of approximations to a limit function, which is the unique solution to the boundary value problem under consideration, and give necessary and sufficient conditions for the existence of solutions. The obtained theoretical results are confirmed by a model example.
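
For orientation only, here is a Picard-type successive-approximation sketch for a scalar Caputo problem $D^\alpha x = f(t, x)$ with $1 < \alpha < 2$ and Dirichlet data $x(0) = a$, $x(1) = b$, based on the equivalent integral equation; the quadrature rule, the right-hand side, and the scalar setting are illustrative assumptions and do not reproduce the paper's numerical-analytic technique for systems.

```python
import numpy as np
from math import gamma

def frac_integral(values, t, alpha):
    """Riemann-Liouville fractional integral I^alpha of a grid function
    (simple product midpoint/trapezoid rule on a uniform grid)."""
    n = len(t)
    h = t[1] - t[0]
    out = np.zeros(n)
    for i in range(1, n):
        s = t[:i] + h / 2.0                       # midpoints of subintervals
        vals = 0.5 * (values[:i] + values[1:i + 1])
        out[i] = np.sum((t[i] - s) ** (alpha - 1) * vals) * h / gamma(alpha)
    return out

def successive_approximations(f, a, b, alpha=1.5, n_grid=401, n_iter=30):
    """Picard iteration for the Caputo Dirichlet problem
        D^alpha x = f(t, x),  x(0) = a,  x(1) = b,  1 < alpha < 2,
    using x(t) = a + (b - a - I^alpha f(., x)(1)) t + I^alpha f(., x)(t)."""
    t = np.linspace(0.0, 1.0, n_grid)
    x = a + (b - a) * t                           # initial guess: linear interpolant
    for _ in range(n_iter):
        F = f(t, x)
        I = frac_integral(F, t, alpha)
        x = a + (b - a - I[-1]) * t + I
    return t, x

# illustrative right-hand side: D^1.5 x = -0.5 x + sin(pi t)
t, x = successive_approximations(lambda t, x: -0.5 * x + np.sin(np.pi * t),
                                 a=0.0, b=1.0)
```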

We propose a method for quantifying uncertainty in high-dimensional PDE systems with random parameters, where the number of solution evaluations is small. Parametric PDE solutions are often approximated using a spectral decomposition based on polynomial chaos expansions. For the class of systems we consider (i.e., high dimensional with limited solution evaluations), the coefficients are given by an underdetermined linear system in a regression formulation. This implies that additional assumptions, such as sparsity of the coefficient vector, are needed to approximate the solution. Here, we present an approach where we assume the coefficients are close to the range of a generative model that maps from a low-dimensional to a high-dimensional space of coefficients. Our approach is inspired by recent work examining how generative models can be used for compressed sensing in systems with random Gaussian measurement matrices. Using results from PDE theory on coefficient decay rates, we construct an explicit generative model that predicts the polynomial chaos coefficient magnitudes. The algorithm we developed to find the coefficients, which we call GenMod, is composed of two main steps. First, we predict the coefficient signs using Orthogonal Matching Pursuit. Then, we assume the coefficients are within a sparse deviation from the range of a sign-adjusted generative model. This allows us to find the coefficients by solving a nonconvex optimization problem over the input space of the generative model and the space of sparse vectors. We obtain theoretical recovery results for a Lipschitz continuous generative model and for a more specific generative model based on coefficient decay rate bounds. We examine three high-dimensional problems and show that, for all three examples, the generative model approach outperforms sparsity-promoting methods at small sample sizes.
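
A hedged sketch of the two-step structure described above: coefficient signs from Orthogonal Matching Pursuit (step 1), then a fit of a sign-adjusted generative model plus a sparse deviation (step 2). The exponential-decay model `g(z) = exp(-z * degrees)`, the proximal-gradient optimizer, and the inputs `Psi` (basis evaluations), `y` (solution evaluations), and `degrees` (polynomial total degrees) are illustrative stand-ins, not the paper's GenMod implementation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def genmod_sketch(Psi, y, degrees, n_signs=10, lam=1e-3, step=1e-3, n_iter=5000):
    """Two-step GenMod-style reconstruction sketch.
    Step 1: coefficient signs from Orthogonal Matching Pursuit.
    Step 2: fit c = signs * g(z) + nu, where g(z) = exp(-z * degrees) is a toy
    decay-based generative model and nu is a sparse deviation, via gradient
    steps in z and proximal (soft-threshold) steps in nu."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_signs).fit(Psi, y)
    signs = np.sign(omp.coef_)
    signs[signs == 0] = 1.0

    z, nu = 1.0, np.zeros(Psi.shape[1])
    for _ in range(n_iter):
        g = np.exp(-z * degrees)
        r = Psi @ (signs * g + nu) - y          # residual of 0.5 * ||Psi c - y||^2
        z -= step * ((Psi.T @ r) @ (signs * (-degrees) * g))   # chain rule through g
        nu = soft_threshold(nu - step * (Psi.T @ r), step * lam)
    return signs * np.exp(-z * degrees) + nu

# tiny demo: random "measurement" matrix and decaying true coefficients
rng = np.random.default_rng(3)
degrees = np.repeat(np.arange(1, 11), 5).astype(float)   # stand-in total degrees
c_true = np.exp(-0.8 * degrees) * rng.choice([-1.0, 1.0], degrees.size)
Psi = rng.standard_normal((30, degrees.size))
c_hat = genmod_sketch(Psi, Psi @ c_true, degrees)
print("coefficient error:", np.linalg.norm(c_hat - c_true))
```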

We introduce a family of stochastic optimization methods based on the Runge-Kutta-Chebyshev (RKC) schemes. The RKC methods are explicit methods originally designed for solving stiff ordinary differential equations by ensuring that their stability regions are of maximal size. In the optimization context, this allows for larger step sizes (learning rates) and better robustness compared to, e.g., the popular stochastic gradient descent method. Our main contribution is a convergence proof for essentially all stochastic Runge-Kutta optimization methods. This shows convergence in expectation with an optimal sublinear rate under standard assumptions of strong convexity and Lipschitz-continuous gradients. For non-convex objectives, we get convergence to zero in expectation of the gradients. The proof requires certain natural conditions on the Runge-Kutta coefficients, and we further demonstrate that the RKC schemes satisfy these. Finally, we illustrate the improved stability properties of the methods in practice by performing numerical experiments on both a small-scale test example and on a problem arising from an image classification application in machine learning.
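
As an illustration under explicit assumptions, the sketch below applies a first-order Chebyshev (RKC1-type) recurrence with the usual damping parameter to the gradient flow $x' = -\nabla f(x)$, using minibatch gradients; the coefficient construction follows the standard van der Houwen-Sommeijer formulas, while the stage number, damping, step size, and least-squares toy problem are illustrative and not the paper's exact method family.

```python
import numpy as np

def chebyshev_T(s, x):
    """Values T_0(x), ..., T_s(x) and derivatives T_0'(x), ..., T_s'(x)."""
    T, dT = np.zeros(s + 1), np.zeros(s + 1)
    T[0], T[1] = 1.0, x
    dT[0], dT[1] = 0.0, 1.0
    for j in range(2, s + 1):
        T[j] = 2 * x * T[j - 1] - T[j - 2]
        dT[j] = 2 * T[j - 1] + 2 * x * dT[j - 1] - dT[j - 2]
    return T, dT

def rkc1_step(grad, x0, h, s=5, eps=0.05):
    """One s-stage first-order Chebyshev (RKC1-type) step for the
    gradient flow x' = -grad(x), with damping parameter eps."""
    w0 = 1.0 + eps / s**2
    T, dT = chebyshev_T(s, w0)
    w1 = T[s] / dT[s]
    g_prev2 = x0
    g_prev1 = x0 + h * (w1 / w0) * (-grad(x0))
    for j in range(2, s + 1):
        mu = 2.0 * w0 * T[j - 1] / T[j]
        nu = -T[j - 2] / T[j]
        mu_t = 2.0 * w1 * T[j - 1] / T[j]
        g_new = mu * g_prev1 + nu * g_prev2 + h * mu_t * (-grad(g_prev1))
        g_prev2, g_prev1 = g_prev1, g_new
    return g_prev1

# stochastic optimization loop on a least-squares toy problem
rng = np.random.default_rng(1)
A, b = rng.standard_normal((500, 20)), rng.standard_normal(500)
x = np.zeros(20)
for it in range(200):
    batch = rng.choice(500, 50, replace=False)            # minibatch indices
    grad = lambda z: A[batch].T @ (A[batch] @ z - b[batch]) / len(batch)
    x = rkc1_step(grad, x, h=0.05, s=5)
print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```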

The scalar auxiliary variable (SAV) approach \cite{shen2018scalar} and its generalized version GSAV proposed in \cite{huang2020highly} are very popular methods for constructing efficient and accurate energy stable schemes for nonlinear dissipative systems. However, the discrete value of the SAV is not directly linked to the free energy of the dissipative system and may lead to inaccurate solutions if the time step is not sufficiently small. Inspired by the relaxed SAV method proposed in \cite{jiang2022improving} for gradient flows, we propose in this paper a generalized SAV approach with relaxation (R-GSAV) for general dissipative systems. The R-GSAV approach preserves all the advantages of the GSAV approach; in addition, it dissipates a modified energy that is directly linked to the original free energy. We prove that the $k$-th order implicit-explicit (IMEX) schemes based on R-GSAV are unconditionally energy stable, and we carry out a rigorous error analysis for $k=1,2,3,4,5$. We present ample numerical results to demonstrate the improved accuracy and effectiveness of the R-GSAV approach.

Two-component deterministic- and random-scan Gibbs samplers are studied using the theory of two projections. It is found that, in terms of asymptotic variance, the two-component random-scan Gibbs sampler is never much worse than, and can be considerably better than, its deterministic-scan counterpart, provided that the selection probability is appropriately chosen. This is especially the case when there is a large discrepancy in computation cost between the two components. Together with previous results regarding the convergence rates of two-component Gibbs Markov chains, the results herein suggest one may use the deterministic-scan version in the burn-in stage and switch to the random-scan version in the estimation stage. The theory of two projections can also be utilized to study other properties of variants of two-component Gibbs samplers. As a side product, some general formulas for characterizing the convergence rate of a possibly non-reversible or time-inhomogeneous Markov chain in an operator-theoretic framework are developed.
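
A toy comparison of the two scan orders on a bivariate normal target, with the selection probability, correlation, and summary statistic chosen purely for illustration (the asymptotic-variance and computation-cost trade-offs analyzed in the paper are not reproduced here):

```python
import numpy as np

def gibbs_bivariate_normal(n_iter=20000, rho=0.9, scan="deterministic",
                           p_select=0.5, seed=0):
    """Two-component Gibbs sampler for (X1, X2) ~ N(0, [[1, rho], [rho, 1]]).
    'deterministic' updates X1 then X2 every sweep; 'random' updates a single
    component, choosing X1 with probability p_select."""
    rng = np.random.default_rng(seed)
    x1 = x2 = 0.0
    s = np.sqrt(1.0 - rho**2)                   # conditional standard deviation
    samples = np.empty((n_iter, 2))
    for i in range(n_iter):
        if scan == "deterministic":
            x1 = rho * x2 + s * rng.standard_normal()
            x2 = rho * x1 + s * rng.standard_normal()
        else:                                   # random scan
            if rng.random() < p_select:
                x1 = rho * x2 + s * rng.standard_normal()
            else:
                x2 = rho * x1 + s * rng.standard_normal()
        samples[i] = (x1, x2)
    return samples

# compare, e.g., the sample variance of X1 + X2 under both scan orders
for scan in ("deterministic", "random"):
    z = gibbs_bivariate_normal(scan=scan).sum(axis=1)
    print(scan, np.var(z))      # target variance is 2 * (1 + rho) = 3.8
```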

Casting nonlocal problems in variational form and discretizing them with the finite element (FE) method facilitates the use of nonlocal vector calculus to prove well-posedness, convergence, and stability of such schemes. Employing an FE method also facilitates meshing of complicated domain geometries and coupling with FE methods for local problems. However, nonlocal weak problems involve the computation of a double integral, which is computationally expensive and presents several challenges. In particular, the inner integral of the variational form associated with the stiffness matrix is defined over the intersections of FE mesh elements with a ball of radius $\delta$, where $\delta$ is the range of nonlocal interaction. Identifying and parameterizing these intersections is a nontrivial computational geometry problem. In this work, we propose a quadrature technique where the inner integration is performed using quadrature points distributed over the full ball, without regard for how it intersects elements, and weights are computed based on the generalized moving least squares method. Thus, as opposed to all previously employed methods, our technique does not require element-by-element integration and fully circumvents the computation of element-ball intersections. This paper considers one- and two-dimensional implementations of piecewise linear continuous FE approximations, focusing on the case where the element size $h$ and the nonlocal radius $\delta$ are proportional, as is typical of practical computations. When boundary conditions are treated carefully and the outer integral of the variational form is computed accurately, the proposed method is asymptotically compatible in the limit of $h \sim \delta \to 0$, featuring at least first-order convergence in $L^2$ for all dimensions, using both uniform and nonuniform grids.

We study the numerical approximation by space-time finite element methods of a multi-physics system coupling hyperbolic elastodynamics with parabolic transport and modelling poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in space and time with inf-sup stable pairs of finite elements for the spatial approximation of the unknowns are investigated. Optimal order error estimates of energy-type are proven. Superconvergence at the time nodes is addressed briefly. The error analysis can be extended to discontinuous and enriched Galerkin space discretizations. The error estimates are confirmed by numerical experiments.

Singular value decomposition (SVD) is the mathematical basis of principal component analysis (PCA). Together, SVD and PCA are among the most widely used mathematical decompositions in machine learning, data mining, pattern recognition, artificial intelligence, computer vision, signal processing, etc. In recent applications, regularization has become an increasingly common ingredient. In this paper, we present a regularized SVD (RSVD), give an efficient computational algorithm, and provide several theoretical analyses. We show that, although RSVD is non-convex, it has a closed-form global optimal solution. Finally, we apply RSVD to recommender systems, and experimental results show that RSVD outperforms SVD significantly.
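
The paper's exact RSVD formulation is not reproduced here; as an illustration of how a regularized SVD can be non-convex yet admit a closed-form global optimum, the sketch below solves the classical Frobenius-regularized factorization $\min_{U,V} \|X - UV^\top\|_F^2 + \lambda(\|U\|_F^2 + \|V\|_F^2)$ by soft-thresholding the singular values of $X$; the objective and the toy recommender-style data are assumptions.

```python
import numpy as np

def regularized_svd(X, lam):
    """Closed-form global minimizer of
        min_{U,V} ||X - U V^T||_F^2 + lam * (||U||_F^2 + ||V||_F^2),
    obtained by soft-thresholding the singular values of X
    (equivalent to nuclear-norm shrinkage of the product U V^T)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)          # soft-threshold singular values
    keep = s_shrunk > 0
    # split the shrunken singular values symmetrically between the two factors
    U_hat = U[:, keep] * np.sqrt(s_shrunk[keep])
    V_hat = Vt[keep].T * np.sqrt(s_shrunk[keep])
    return U_hat, V_hat

# toy recommender-style matrix: low rank plus noise
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80)) \
    + 0.1 * rng.standard_normal((100, 80))
U_hat, V_hat = regularized_svd(X, lam=2.0)
print("recovered rank:", U_hat.shape[1])
```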
