
Markov chain Monte Carlo (MCMC) simulations are widely used to generate samples from complex posterior distributions in Bayesian inference. However, these simulations often require multiple evaluations of the forward model in the likelihood function for each drawn sample. This computational burden renders MCMC sampling impractical when the forward model is expensive to evaluate, as in the case of partial differential equation models. In this paper, we propose a novel sampling approach called the geometric optics approximation method (GOAM) for Bayesian inverse problems, which entirely circumvents the need for MCMC simulations. Our method is rooted in the problem of reflector shape design, which seeks a reflecting surface that redirects rays emanating from a source with a prescribed density onto a target domain so as to achieve a desired density there. The key idea is to treat the unnormalized Bayesian posterior as the density on the target domain of the optical system and to define a geometric optics approximation measure with respect to the posterior through a reflecting surface. Once such a reflecting surface is obtained, it can be used to draw an arbitrary number of independent and uncorrelated samples from the posterior measure of the Bayesian inverse problem. We show theoretically that the geometric optics approximation measure is well-posed. The efficiency and robustness of the proposed sampler are demonstrated through several numerical examples.
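
To make the sampling step concrete, here is a minimal sketch (in Python, not from the paper) of how one could draw posterior samples once a reflecting surface is available: the ray-tracing map it induces, called `reflector_map` below, is a hypothetical stand-in, and the toy usage replaces it with an inverse-CDF transform just to have something runnable.

```python
import numpy as np
from scipy.stats import norm

def goam_sample(reflector_map, n_samples, source_sampler, seed=None):
    """Draw posterior samples by pushing rays from the source density through
    the map induced by a precomputed reflecting surface (hypothetical here)."""
    rng = np.random.default_rng(seed)
    rays = source_sampler(n_samples, rng)               # rays from the source density
    return np.array([reflector_map(r) for r in rays])   # redirected onto the target

# Toy usage: a hand-made stand-in "reflector map" that sends uniform rays on
# [0, 1)^2 to a standard Gaussian target via the inverse-CDF transform.
toy_map = lambda u: norm.ppf(u)
uniform_source = lambda n, rng: rng.uniform(size=(n, 2))
samples = goam_sample(toy_map, 1000, uniform_source, seed=0)
```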

Related content

A Nehari manifold optimization method (NMOM) is introduced for finding 1-saddles, i.e., saddle points with Morse index equal to one, of a generic nonlinear functional in Hilbert spaces. The method is based on the variational characterization that 1-saddles of the functional are local minimizers of the same functional restricted to the associated Nehari manifold. The framework contains two important ingredients: a retraction mapping that keeps the iterates on the Nehari manifold, and a tangential search direction that decreases the functional under suitable step-size rules. Global convergence is rigorously established by means of several crucial analysis techniques (including a weak convergence argument) that overcome difficulties in the infinite-dimensional setting. In practice, combined with an easy-to-implement Nehari retraction and the negative Riemannian gradient direction, the NMOM is successfully applied to compute the unstable ground-state solutions of a class of typical semilinear elliptic PDEs such as the H\'enon equation and the stationary nonlinear Schr\"odinger equation. In particular, the symmetry-breaking phenomenon of the ground states of the H\'enon equation is explored numerically in 1D and 2D, with numerical findings on the critical value of symmetry breaking reported.
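
The following toy sketch (our own illustration, not the paper's code) shows the two ingredients on the model problem $-u''=|u|^{p-2}u$ on $(0,1)$ with homogeneous Dirichlet data: a scaling retraction onto the Nehari manifold and a preconditioned gradient used as a simple stand-in for the Riemannian gradient direction.

```python
import numpy as np

# Finite-difference discretization of -u'' = |u|^{p-2} u on (0, 1), u(0)=u(1)=0.
n, p = 200, 4
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h   # stiffness matrix

def retract(u):
    """Scale u back onto the Nehari manifold {u != 0 : E'(u)[u] = 0}."""
    t = (u @ A @ u / (h * np.sum(np.abs(u) ** p))) ** (1.0 / (p - 2))
    return t * u

def riemannian_grad(u):
    """H^1-preconditioned gradient of E(u) = 0.5*u'Au - (h/p)*sum|u|^p,
    used here as a simple stand-in for the paper's Riemannian gradient."""
    return u - np.linalg.solve(A, h * np.abs(u) ** (p - 2) * u)

u = retract(np.sin(np.pi * x))                    # initial guess on the manifold
for _ in range(200):
    u = retract(u - 0.5 * riemannian_grad(u))     # fixed step size for this toy
```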

We propose and analyze an $H^2$-conforming Virtual Element Method (VEM) for the simplest linear elliptic PDEs in nondivergence form with Cordes coefficients. The VEM hinges on a hierarchical construction valid for any dimension $d \ge 2$. The analysis relies on the continuous Miranda-Talenti estimate for convex domains $\Omega$ and is rather elementary. We prove stability and error estimates in $H^2(\Omega)$, including the effect of quadrature, under minimal regularity of the data. Numerical experiments illustrate the interplay of coefficient regularity and convergence rates in $H^2(\Omega)$.

We present a spectral method for one-sided linear fractional integral equations on a closed interval that achieves exponentially fast convergence for a variety of equations, including ones with irrational order, multiple fractional orders, non-trivial variable coefficients, and initial-boundary conditions. The method uses an orthogonal basis that we refer to as Jacobi fractional polynomials, which are obtained from an appropriate change of variable in weighted classical Jacobi polynomials. New algorithms for building the matrices used to represent fractional integration operators are presented and compared. Even though these algorithms are unstable and require the use of high-precision computations, the spectral method nonetheless yields well-conditioned linear systems and is therefore stable and efficient. For time-fractional heat and wave equations, we show that our method (which is not sparse but uses an orthogonal basis) outperforms a sparse spectral method (which uses a basis that is not orthogonal) due to its superior stability.
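
As a rough illustration of the basis construction, the sketch below composes a classical Jacobi polynomial with the change of variable $x \mapsto 2x^\alpha - 1$ on $[0,1]$ and multiplies by a fractional power weight; both the map and the weight $x^{b\alpha}$ are our guesses at the construction alluded to in the abstract, not the paper's exact definition.

```python
import numpy as np
from scipy.special import eval_jacobi

def jacobi_fractional(n, a, b, alpha, x):
    """Candidate 'Jacobi fractional polynomial' on [0, 1]: a classical Jacobi
    polynomial composed with x -> 2*x**alpha - 1 and multiplied by a fractional
    power weight (our guess at the construction, for illustration only)."""
    return x ** (b * alpha) * eval_jacobi(n, a, b, 2.0 * x ** alpha - 1.0)

x = np.linspace(0.0, 1.0, 5)
print(jacobi_fractional(3, 0.0, 0.5, 0.5, x))   # evaluate a degree-3 basis element
```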

Hybrid quantum-classical computing in the noisy intermediate-scale quantum (NISQ) era with variational algorithms can exhibit barren plateau issues, causing difficult convergence of gradient-based optimization techniques. In this paper, we discuss "post-variational strategies", which shift tunable parameters from the quantum computer to the classical computer, opting for ensemble strategies when optimizing quantum models. We discuss various strategies and design principles for constructing individual quantum circuits, where the resulting ensembles can be optimized with convex programming. Further, we discuss architectural designs of post-variational quantum neural networks and analyze the propagation of estimation errors throughout such neural networks. Finally, we show that empirically, post-variational quantum neural networks using our architectural designs can potentially provide better results than variational algorithms and performance comparable to that of two-layer neural networks.
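
The ensemble idea can be caricatured classically: expectation values of several fixed circuits become features, and only the classical combination weights are optimized by a convex program. In the sketch below the circuit outputs are replaced by random features and the convex step by non-negative least squares, so it illustrates the optimization structure only, not an actual quantum model.

```python
import numpy as np
from scipy.optimize import nnls

# Classical caricature of a post-variational ensemble: the "features" stand in
# for expectation values of fixed, non-tunable quantum circuits, and only the
# classical combination weights are fitted, here by non-negative least squares
# (one of many possible convex formulations).
rng = np.random.default_rng(0)
n_samples, n_circuits = 200, 16
features = rng.normal(size=(n_samples, n_circuits))   # stand-in circuit outputs
targets = rng.normal(size=n_samples)                  # labels to regress onto
weights, residual = nnls(features, targets)           # convex ensemble weights
prediction = features @ weights
```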

Bayesian factor analysis is routinely used for dimensionality reduction in modeling of high-dimensional covariance matrices. Factor analytic decompositions express the covariance as a sum of a low rank and a diagonal matrix. In practice, Gibbs sampling algorithms are typically used for posterior computation, alternating between updating the latent factors, loadings, and residual variances. In this article, we exploit a blessing of dimensionality to develop a provably accurate pseudo-posterior for the covariance matrix that bypasses the need for Gibbs or other variants of Markov chain Monte Carlo sampling. Our proposed Factor Analysis with BLEssing of dimensionality (FABLE) approach relies on a first-stage singular value decomposition (SVD) to estimate the latent factors, and then defines a jointly conjugate prior for the loadings and residual variances. The accuracy of the resulting pseudo-posterior for the covariance improves with increasing dimensionality. We show, both theoretically and through simulation experiments, that FABLE has excellent performance in high-dimensional covariance matrix estimation, including producing well-calibrated credible intervals. We also demonstrate the strength of our approach in terms of accurate inference and computational efficiency by applying it to a gene expression data set.
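
A simplified two-stage sketch in the spirit of this recipe is given below: an SVD supplies plug-in latent factors, and a conjugate ridge-type formula gives posterior-mean loadings and crude residual variances. The hyperparameter choices and scalings are our own illustrative defaults, not those of FABLE.

```python
import numpy as np

def fable_sketch(Y, k):
    """Two-stage sketch inspired by the FABLE recipe: (i) SVD-based plug-in
    factors, (ii) a conjugate ridge-type posterior mean for the loadings and
    crude residual variances.  Hyperparameters here are illustrative only."""
    n, p = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    F = np.sqrt(n) * U[:, :k]                     # n x k estimated latent factors
    G = np.linalg.inv(F.T @ F + np.eye(k))        # shared k x k precision factor
    Lambda = (G @ F.T @ Y).T                      # p x k posterior-mean loadings
    sigma2 = ((Y - F @ Lambda.T) ** 2).mean(axis=0)
    return Lambda @ Lambda.T + np.diag(sigma2)    # low rank + diagonal covariance

# Toy usage on simulated data with 3 true factors.
rng = np.random.default_rng(1)
Y = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 50)) + rng.normal(size=(500, 50))
Sigma_hat = fable_sketch(Y, k=3)
```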

Learning approximations to smooth target functions of many variables from finite sets of pointwise samples is an important task in scientific computing and its many applications in computational science and engineering. Despite well over half a century of research on high-dimensional approximation, this remains a challenging problem. Yet, significant advances have been made in the last decade towards efficient methods for doing this, commencing with so-called sparse polynomial approximation methods and continuing most recently with methods based on Deep Neural Networks (DNNs). In tandem, there have been substantial advances in the relevant approximation theory and analysis of these techniques. In this work, we survey this recent progress. We describe the contemporary motivations for this problem, which stem from parametric models and computational uncertainty quantification; the relevant function classes, namely, classes of infinite-dimensional, Banach-valued, holomorphic functions; fundamental limits of learnability from finite data for these classes; and finally, sparse polynomial and DNN methods for efficiently learning such functions from finite data. For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning. Aiming to narrow this gap, we develop the topic of practical existence theory, which asserts the existence of dimension-independent DNN architectures and training strategies that achieve provably near-optimal generalization errors in terms of the amount of training data.
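
As one concrete instance of the sparse-polynomial side of this survey, the sketch below fits a smooth bivariate function from random pointwise samples by least squares in a total-degree Legendre basis; the degree, sample count, and target function are arbitrary illustrative choices, not taken from the work.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval

def total_degree_basis(X, degree):
    """Evaluate all 2-D Legendre products L_i(x1) * L_j(x2) with i + j <= degree."""
    cols = []
    for i, j in product(range(degree + 1), repeat=2):
        if i + j <= degree:
            ci = np.zeros(i + 1); ci[i] = 1.0
            cj = np.zeros(j + 1); cj[j] = 1.0
            cols.append(legval(X[:, 0], ci) * legval(X[:, 1], cj))
    return np.column_stack(cols)

f = lambda X: np.exp(-X[:, 0] ** 2 + 0.5 * X[:, 1])   # smooth target function
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))             # random sample points
A = total_degree_basis(X, degree=8)
coef, *_ = np.linalg.lstsq(A, f(X), rcond=None)       # least-squares coefficients
```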

We propose, analyze, and test new robust iterative solvers for systems of linear algebraic equations arising from the space-time finite element discretization of reduced optimality systems defining the approximate solution of hyperbolic distributed, tracking-type optimal control problems with both the standard $L^2$ and the more general energy regularizations. In contrast to the usual time-stepping approach, we discretize the optimality system by space-time continuous piecewise-linear finite element basis functions which are defined on fully unstructured simplicial meshes. If we aim at the asymptotically best approximation of the given desired state $y_d$ by the computed finite element state $y_{\varrho h}$, then the optimal choice of the regularization parameter $\varrho$ is linked to the space-time finite element mesh-size $h$ by the relations $\varrho=h^4$ and $\varrho=h^2$ for the $L^2$ and the energy regularization, respectively. For this setting, we can construct robust (parallel) iterative solvers for the reduced finite element optimality systems. These results can be generalized to variable regularization parameters adapted to the local behavior of the mesh-size that can heavily change in the case of adaptive mesh refinements. The numerical results firmly support the theoretical findings.

Bilevel optimization, with broad applications in machine learning, has an intricate hierarchical structure. Gradient-based methods have emerged as a common approach to large-scale bilevel problems. However, the computation of the hyper-gradient involves a Hessian-inverse vector product, which limits efficiency and is regarded as a bottleneck. To circumvent the inverse, we construct a sequence of low-dimensional approximate Krylov subspaces with the aid of the Lanczos process. The constructed subspace is able to dynamically and incrementally approximate the Hessian-inverse vector product with less effort, and thus leads to a favorable estimate of the hyper-gradient. Moreover, we propose a provable subspace-based framework for bilevel problems whose central step is to solve a small tridiagonal linear system. To the best of our knowledge, this is the first time that subspace techniques have been incorporated into bilevel optimization. The proposed method not only enjoys an $\mathcal{O}(\epsilon^{-1})$ convergence rate but also demonstrates efficiency on a synthetic problem and two deep learning tasks.
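
The subspace idea rests on the classical Lanczos approximation of a Hessian-inverse vector product, which only needs Hessian-vector products and a small tridiagonal solve. The sketch below implements that generic building block (without reorthogonalization, and not the paper's actual framework) and checks it against a direct solve on a synthetic SPD matrix.

```python
import numpy as np

def lanczos_inverse_hvp(hvp, v, k):
    """Approximate H^{-1} v with k Lanczos steps using only Hessian-vector
    products: build Q_k and tridiagonal T_k, then H^{-1} v ~ ||v|| Q_k T_k^{-1} e_1.
    A generic sketch of the subspace idea, without reorthogonalization."""
    n = v.shape[0]
    Q = np.zeros((n, k)); alpha = np.zeros(k); beta = np.zeros(k)
    q, q_prev = v / np.linalg.norm(v), np.zeros(n)
    for j in range(k):
        Q[:, j] = q
        w = hvp(q)
        alpha[j] = q @ w
        w = w - alpha[j] * q - (beta[j - 1] * q_prev if j > 0 else 0.0)
        beta[j] = np.linalg.norm(w)
        q_prev, q = q, w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    e1 = np.zeros(k); e1[0] = 1.0
    return np.linalg.norm(v) * Q @ np.linalg.solve(T, e1)

# Toy check against a direct solve with a synthetic SPD "Hessian".
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50)); H = M @ M.T + 50 * np.eye(50)
v = rng.normal(size=50)
approx = lanczos_inverse_hvp(lambda x: H @ x, v, k=20)
print(np.linalg.norm(approx - np.linalg.solve(H, v)))
```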

The continental plates of Earth are known to drift over geophysical timescales, and their interactions have led to some of the most spectacular geoformations of our planet while also causing natural disasters such as earthquakes and volcanic activity. Understanding the dynamics of interacting continental plates is thus significant. In this work, we present a fluid-mechanical investigation of plate motion, interaction, and dynamics. Through numerical experiments, we examine the coupling between a convective fluid and plates floating on top of it. With physical modeling, we show that the coupling is both mechanical and thermal, leading to the thermal blanket effect: the floating plate is not only transported by the fluid flow beneath it but also prevents heat from leaving the fluid, producing a convective flow that in turn affects the plate motion. By adding several plates to this coupled fluid-structure interaction, we also investigate how floating plates interact with each other and show that, under suitable conditions, small plates can converge into a supercontinent.

An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interfaces (BCI). The non-stationarity of EEG signals can lead to poor generalisation performance in BCI classification systems used across different sessions, even for the same subject. In this paper, we start from the hypothesis that the Dataset Shift Problem can be alleviated by exploiting suitable eXplainable Artificial Intelligence (XAI) methods to locate and transform the input characteristics relevant to classification. In particular, we focus on an experimental analysis of the explanations produced by several XAI methods on an ML system trained on a typical EEG dataset for emotion recognition. Results show that many relevant components found by the XAI methods are shared across sessions and can be used to build a system that generalises better. However, relevant components of the input signal also appear to be highly dependent on the input itself.
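
A much-simplified caricature of the cross-session analysis is sketched below: per-session classifiers are trained on synthetic features, a crude coefficient-based attribution stands in for the XAI methods compared in the paper, and the overlap of the top-ranked components between two sessions is reported.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for per-session "EEG features": 8 components genuinely
# shared across sessions drive the label; the rest are noise.  A real analysis
# would use saliency/LRP-style attributions instead of |coefficient|.
rng = np.random.default_rng(0)
n_features, top_k = 64, 10
shared = rng.normal(size=8)

def make_session(n=300):
    X = rng.normal(size=(n, n_features))
    y = (X[:, :8] @ shared + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

def top_components(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return set(np.argsort(-np.abs(clf.coef_[0]))[:top_k])

overlap = top_components(*make_session()) & top_components(*make_session())
print(f"components shared across sessions: {sorted(overlap)}")
```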
