
In this article, we propose and study a stochastic and relaxed preconditioned Douglas--Rachford splitting method to solve saddle-point problems with separable dual variables. We prove the almost sure convergence of the iteration sequences in Hilbert spaces for a class of convex-concave and nonsmooth saddle-point problems. We also provide a sublinear convergence rate for the ergodic sequence with respect to the expectation of the restricted primal-dual gap functions. Numerical experiments show the high efficiency of the proposed method.
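As background for readers unfamiliar with the underlying operator splitting, the following is a minimal sketch of the classical deterministic Douglas--Rachford iteration (not the stochastic, relaxed, preconditioned variant proposed in the article), applied to a toy problem whose minimizer has a closed form:

```python
import numpy as np

def prox_quad(v, a, t):
    # prox of t * 0.5*||x - a||^2
    return (v + t * a) / (1.0 + t)

def prox_l1(v, t):
    # prox of t * ||x||_1 (soft thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def douglas_rachford(a, lam, t=1.0, iters=100):
    """Classical DR splitting for min 0.5*||x - a||^2 + lam*||x||_1.
    The exact minimizer is the soft-thresholding of a, which lets the
    iteration be checked against a closed-form answer."""
    z = np.zeros_like(a)
    for _ in range(iters):
        x = prox_quad(z, a, t)                     # forward prox step
        z = z + prox_l1(2 * x - z, t * lam) - x    # reflected update
    return prox_quad(z, a, t)

x = douglas_rachford(np.array([3.0, -0.5, 1.5]), lam=1.0)
# minimizer is soft(a, 1) = [2, 0, 0.5]
```

The two proximal maps here stand in for the (possibly non-evaluable) resolvents that preconditioning and stochasticity address in the article's setting.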

Related content

After nearly two decades of research, the question of a quantum PCP theorem for quantum Constraint Satisfaction Problems (CSPs) remains wide open. As a result, proving QMA-hardness of approximation for ground state energy estimation has remained elusive. Recently, it was shown [Bittel, Gharibian, Kliesch, CCC 2023] that a natural problem involving variational quantum circuits is QCMA-hard to approximate within ratio N^(1-eps) for any eps > 0 and N the input size. Unfortunately, this problem was not related to quantum CSPs, leaving the question of hardness of approximation for quantum CSPs open. In this work, we show that if instead of focusing on ground state energies, one considers computing properties of the ground space, QCMA-hardness of computing ground space properties can be shown. In particular, we show that it is (1) QCMA-complete within ratio N^(1-eps) to approximate the Ground State Connectivity problem (GSCON), and (2) QCMA-hard within the same ratio to estimate the amount of entanglement of a local Hamiltonian's ground state, denoted Ground State Entanglement (GSE). As a bonus, a simplification of our construction yields NP-completeness of approximation for a natural k-SAT reconfiguration problem, to be contrasted with the recent PCP-based PSPACE hardness of approximation results for a different definition of k-SAT reconfiguration [Karthik C.S. and Manurangsi, 2023, and Hirahara, Ohsaka, STOC 2024].

In this paper, we propose a novel eigenpair-splitting method, inspired by the divide-and-conquer strategy, for solving the generalized eigenvalue problem arising from the Kohn-Sham equation. Unlike the commonly used domain decomposition approach in divide-and-conquer, which solves the problem on a series of subdomains, our eigenpair-splitting method focuses on solving a series of subequations defined on the entire domain. This method is realized through the integration of two key techniques: a multi-mesh technique for generating approximate spaces for the subequations, and a soft-locking technique that allows for the independent solution of eigenpairs. Numerical experiments show that the proposed eigenpair-splitting method can dramatically enhance simulation efficiency, and its potential for practical applications is demonstrated through an example of a HOMO-LUMO gap calculation. Furthermore, the optimal strategy for grouping eigenpairs is discussed, and possible improvements to the proposed method are outlined.

A Riemannian geometric framework for Markov chain Monte Carlo (MCMC) is developed where using the Fisher-Rao metric on the manifold of probability density functions (pdfs), informed proposal densities for Metropolis-Hastings (MH) algorithms are constructed. We exploit the square-root representation of pdfs under which the Fisher-Rao metric boils down to the standard $L^2$ metric on the positive orthant of the unit hypersphere. The square-root representation allows us to easily compute the geodesic distance between densities, resulting in a straightforward implementation of the proposed geometric MCMC methodology. Unlike the random walk MH that blindly proposes a candidate state using no information about the target, the geometric MH algorithms move an uninformed base density (e.g., a random walk proposal density) towards different global/local approximations of the target density, allowing effective exploration of the distribution simultaneously at different granular levels of the state space. We compare the proposed geometric MH algorithm with other MCMC algorithms for various Markov chain orderings, namely the covariance, efficiency, Peskun, and spectral gap orderings. The superior performance of the geometric algorithms over other MH algorithms like the random walk Metropolis, independent MH, and variants of Metropolis adjusted Langevin algorithms is demonstrated in the context of various multimodal, nonlinear, and high dimensional examples. In particular, we use extensive simulation and real data applications to compare these algorithms for analyzing mixture models, logistic regression models, spatial generalized linear mixed models and ultra-high dimensional Bayesian variable selection models. A publicly available R package accompanies the article.
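To make the square-root trick concrete: under the map $p \mapsto \sqrt{p}$, densities sit on the unit sphere in $L^2$, and the Fisher-Rao geodesic distance reduces to arc length, i.e. the angle between the square roots. A minimal sketch for discrete distributions (conventions differ by a factor of 2; the function name is illustrative and not from the accompanying R package):

```python
import numpy as np

def fisher_rao_distance(p, q):
    """Geodesic (arc-length) distance between two pmfs under the
    square-root representation: the angle between sqrt(p) and sqrt(q)
    on the unit sphere (some references scale this by 2)."""
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient = inner product
    return np.arccos(np.clip(bc, -1.0, 1.0))

d = fisher_rao_distance(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
# distributions with disjoint support are orthogonal on the sphere: d = pi/2
```

The continuous case replaces the sum by an integral; the cheap evaluation of this distance is what makes the geometric proposal construction straightforward to implement.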

In this work, an elliptic problem referred to as the Ventcel problem is considered, involving a second order term on the domain boundary (the Laplace-Beltrami operator). A variational formulation of the Ventcel problem is studied, leading to a finite element discretization. The focus is on the construction of high order curved meshes for the discretization of the physical domain and on the definition of the lift operator, which aims to transform a function defined on the mesh domain into a function defined on the physical one. This lift is defined so as to satisfy adapted properties on the boundary, relative to the trace operator. The approximation of the Ventcel problem is investigated both in terms of geometrical error and of finite element approximation error. Error estimates are obtained in terms of both the mesh order r $\ge$ 1 and the finite element degree k $\ge$ 1, whereas such estimates have usually been considered in the isoparametric case so far, involving a single parameter k = r. The numerical experiments we conducted, in dimensions 2 and 3, validate the a priori error estimates depending on the two parameters k and r. A numerical comparison between the errors obtained with the former lift definition and with the lift defined in this work establishes an improvement in the convergence rate of the error in the latter case.

In the present work, strong approximation errors are analyzed for both the spatial semi-discretization and the spatio-temporal full discretization of stochastic wave equations (SWEs) with cubic polynomial nonlinearities and additive noises. The full discretization is achieved by the standard Galerkin finite element method in space and a novel exponential time integrator combined with the averaged vector field approach. The newly proposed scheme is proved to exactly satisfy a trace formula based on an energy functional. Recovering the convergence rates of the scheme, however, meets essential difficulties due to the lack of a global monotonicity condition. To overcome this issue, we derive the exponential integrability property of the considered numerical approximations via the energy functional. Armed with these properties, we obtain the strong convergence rates of the approximations in both the spatial and temporal directions. Finally, numerical results are presented to verify the theoretical findings.
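For intuition on why the averaged vector field (AVF) ingredient yields exact energy-type identities, consider a toy ODE analogue (not the authors' exponential integrator for SWEs): for $u'' = -u^3$ with energy $H = p^2/2 + u^4/4$, the AVF average of the cubic force has a closed form and the resulting implicit scheme conserves $H$ exactly.

```python
def avf_step(u, p, h, iters=60):
    """One averaged-vector-field step for u'' = -u^3.
    The AVF force average, -int_0^1 ((1-s)*u + s*U)^3 ds, equals
    -(u**3 + u**2*U + u*U**2 + U**3)/4; the implicit equations are
    solved here by fixed-point iteration (adequate for small h)."""
    U, P = u, p
    for _ in range(iters):
        U = u + h * (p + P) / 2.0
        P = p - h * (u**3 + u**2 * U + u * U**2 + U**3) / 4.0
    return U, P

def energy(u, p):
    return p**2 / 2.0 + u**4 / 4.0

u, p = 1.0, 0.0
e0 = energy(u, p)
for _ in range(200):
    u, p = avf_step(u, p, h=0.05)
# energy drift stays at the level of the fixed-point/rounding error
```

The exact cancellation in $H(u_{n+1},p_{n+1}) - H(u_n,p_n)$ is the finite-dimensional analogue of the trace formula satisfied by the scheme in the paper.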

This study presents a scalable Bayesian estimation algorithm for sparse estimation in exploratory item factor analysis based on a classical Bayesian estimation method, namely Bayesian joint modal estimation (BJME). BJME estimates the model parameters and factor scores that maximize the complete-data joint posterior density. Simulation studies show that the proposed algorithm has high computational efficiency and accuracy in variable selection over latent factors and in the recovery of the model parameters. Moreover, we conducted a real data analysis using large-scale data from a psychological assessment targeting the Big Five personality traits. The results indicate that the proposed algorithm achieves computationally efficient parameter estimation and extracts an interpretable factor loading structure.

In the context of Discontinuous Galerkin methods, we study approximations of nonlinear variational problems associated with convex energies. We propose element-wise nonconforming finite element methods to discretize the continuous minimisation problem. Using $\Gamma$-convergence arguments we show that the discrete minimisers converge to the unique minimiser of the continuous problem as the mesh parameter tends to zero, under the additional contribution of appropriately defined penalty terms at the level of the discrete energies. We finally substantiate the feasibility of our methods by numerical examples.

The gradient bounds of generalized barycentric coordinates play an essential role in the $H^1$ norm approximation error estimate of generalized barycentric interpolations. Similarly, the $H^k$ norm, $k>1$, estimate needs upper bounds of high-order derivatives, which are not available in the literature. In this paper, we derive such upper bounds for the Wachspress generalized barycentric coordinates on simple convex $d$-dimensional polytopes, $d\ge 1$. The result can be used to prove optimal convergence for Wachspress-based polytopal finite element approximation of, for example, fourth-order elliptic equations. Another contribution of this paper is to compare various shape-regularity conditions for simple convex polytopes, and to clarify their relations using knowledge from convex geometry.

In this article, we develop an asymptotic method for constructing confidence regions for the set of all linear subspaces arising from PCA, from which we derive hypothesis tests on this set. Our method is based on the geometry of Riemannian manifolds with which some sets of linear subspaces are endowed.

A posteriori reduced-order models (ROMs), e.g. based on proper orthogonal decomposition (POD), are essential to affordably tackle realistic parametric problems. They rely on a trustworthy training set, that is, a family of full-order solutions (snapshots) representative of all possible outcomes of the parametric problem. Having such a rich collection of snapshots is, in many cases, not computationally viable. A strategy for data augmentation, designed for parametric laminar incompressible flows, is proposed to enrich poorly populated training sets. The goal is to include in the new, artificial snapshots emerging features, not present in the original basis, that enhance the quality of the reduced basis (RB) constructed using POD dimensionality reduction. The methodologies devised exploit basic physical principles, such as mass and momentum conservation, to construct physically relevant artificial snapshots at a fraction of the cost of additional full-order solutions. Interestingly, the numerical results show that the ideas exploiting only mass conservation (i.e., incompressibility) do not produce significant added value with respect to standard linear combinations of snapshots. Conversely, accounting for the linearized momentum balance via the Oseen equation does improve the quality of the resulting approximation and is therefore an effective data augmentation strategy in the framework of viscous incompressible laminar flows. Numerical experiments on parametric flow problems, in two and three dimensions, at low and moderate Reynolds numbers, showcase the superior performance of the data-enriched POD-RB with respect to the standard ROM in terms of both accuracy and efficiency.
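For reference, the POD step itself, applied to a snapshot matrix whose columns may include artificial snapshots appended by an augmentation strategy, is a truncated SVD. A minimal sketch, with the energy tolerance as an illustrative parameter:

```python
import numpy as np

def pod_basis(snapshots, energy_tol=1e-8):
    """POD of a snapshot matrix (one snapshot per column): keep the
    left singular vectors capturing a 1 - energy_tol fraction of the
    squared-singular-value energy. Augmented (artificial) snapshots
    are simply appended as extra columns before calling this."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - energy_tol)) + 1
    return U[:, :r]

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(50), rng.standard_normal(50)
# three snapshots spanning only a 2-dimensional subspace
X = np.column_stack([v1, v2, 0.3 * v1 + 0.7 * v2])
Phi = pod_basis(X)
# rank-2 data -> a 2-column orthonormal basis
```

The point of the augmentation strategies in the paper is precisely to enrich the column space of the snapshot matrix before this truncation, so that the retained modes capture features absent from the original full-order solves.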
