
We present the new Orthogonal Polynomials Approximation Algorithm (OPAA), a parallelizable algorithm that addresses two problems from a functional analytic approach: first, it finds a smooth functional estimate of a density function, whether or not it is normalized; second, it provides an estimate of the normalizing weight. In the context of Bayesian inference, OPAA estimates both the posterior function and the normalizing weight, also known as the evidence. A core component of OPAA is a transform of the square root of the joint distribution into a functional space of our construction. Through this transform, the evidence is identified with the squared $L^2$ norm of the transformed function, and can therefore be estimated by the sum of squares of the transform coefficients. The computations can be parallelized and completed in one pass. To compute the transform coefficients, OPAA proposes a new computational scheme that leverages Gauss--Hermite quadrature in higher dimensions. Not only does this avoid the high-variance problems associated with random sampling methods, but it also enables parallelization and significantly reduces the complexity via a vector decomposition.
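
To make the evidence-as-sum-of-squares identity concrete, here is a minimal one-dimensional sketch (illustrative, not the paper's implementation): expand $f = \sqrt{\tilde p}$ in orthonormal Hermite functions via Gauss--Hermite quadrature and estimate the normalizing weight as $\sum_k c_k^2$. The test density and all parameter choices below are assumptions made for the example.

```python
# A minimal 1-D sketch of the OPAA idea (illustrative, not the paper's code):
# expand f = sqrt(p_tilde) in orthonormal Hermite functions and estimate the
# normalizing weight Z = ||f||_{L^2}^2 as the sum of squared coefficients.
import numpy as np
from math import factorial, pi, sqrt

def hermite_function(k, x):
    """Orthonormal Hermite function: H_k(x) e^{-x^2/2} / sqrt(2^k k! sqrt(pi))."""
    e_k = np.zeros(k + 1); e_k[k] = 1.0
    return (np.polynomial.hermite.hermval(x, e_k) * np.exp(-x**2 / 2)
            / sqrt(2.0**k * factorial(k) * sqrt(pi)))

p_tilde = lambda x: 3.0 * np.exp(-(x - 1.0)**2 / 2)    # unnormalized density
f = lambda x: np.sqrt(p_tilde(x))

# Transform coefficients c_k = \int f(x) psi_k(x) dx by Gauss-Hermite
# quadrature; the factor e^{x^2} absorbs the quadrature weight e^{-x^2}.
nodes, weights = np.polynomial.hermite.hermgauss(60)
w_mod = weights * np.exp(nodes**2)
coeffs = np.array([np.sum(w_mod * f(nodes) * hermite_function(k, nodes))
                   for k in range(30)])

Z_estimate = np.sum(coeffs**2)             # Parseval: Z = sum of squares
print(Z_estimate, 3.0 * sqrt(2 * pi))      # both ~7.5199
```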

Related content

We present iDARR, a scalable iterative Data-Adaptive RKHS Regularization method for solving ill-posed linear inverse problems. The method searches for solutions in subspaces where the true solution can be identified, with the data-adaptive RKHS penalizing the spaces of small singular values. At the core of the method is a new generalized Golub--Kahan bidiagonalization procedure that recursively constructs orthonormal bases for a sequence of RKHS-restricted Krylov subspaces. The method is scalable, with a complexity of $O(kmn)$ for $m$-by-$n$ matrices, where $k$ denotes the number of iterations. Numerical tests on the Fredholm integral equation and 2D image deblurring show that it outperforms the widely used $L^2$ and $l^2$ norms, producing stable, accurate solutions that consistently converge as the noise level decays.
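
For orientation, the following sketches the classical Golub--Kahan bidiagonalization that iDARR generalizes; the paper's data-adaptive RKHS inner products are replaced here by plain Euclidean ones, so this illustrates only the recursion and its $O(kmn)$ cost, not the method itself.

```python
# Classical Golub-Kahan bidiagonalization (Euclidean inner products; the
# paper's generalized procedure swaps in data-adaptive RKHS inner products).
import numpy as np

def golub_kahan(A, b, k):
    """Run k steps, building A V_k = U_{k+1} B_k with B_k lower bidiagonal."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    v = A.T @ U[:, 0]
    for j in range(k):
        alpha[j] = np.linalg.norm(v)
        V[:, j] = v / alpha[j]
        u = A @ V[:, j] - alpha[j] * U[:, j]                # one matvec with A ...
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
        if j + 1 < k:
            v = A.T @ U[:, j + 1] - beta[j + 1] * V[:, j]   # ... one with A^T
    return U, V, alpha, beta

# Each step costs O(mn), hence the O(kmn) total complexity quoted above.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 25))
U, V, alpha, beta = golub_kahan(A, A @ np.ones(25), 10)
print(np.max(np.abs(V.T @ V - np.eye(10))))    # small: V is orthonormal
```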

We present a deterministic algorithm for the efficient evaluation of imaginary time diagrams based on the recently introduced discrete Lehmann representation (DLR) of imaginary time Green's functions. In addition to the efficient discretization of diagrammatic integrals afforded by its approximation properties, the DLR basis is separable in imaginary time, allowing us to decompose diagrams into linear combinations of nested sequences of one-dimensional products and convolutions. Focusing on the strong coupling bold-line expansion of generalized Anderson impurity models, we show that our strategy reduces the computational complexity of evaluating an $M$th-order diagram at inverse temperature $\beta$ and spectral width $\omega_{\max}$ from $\mathcal{O}((\beta \omega_{\max})^{2M-1})$ for a direct quadrature to $\mathcal{O}(M (\log (\beta \omega_{\max}))^{M+1})$, with controllable high-order accuracy. We benchmark our algorithm using third-order expansions for multi-band impurity problems with off-diagonal hybridization and spin-orbit coupling, presenting comparisons with exact diagonalization and quantum Monte Carlo approaches. In particular, we perform a self-consistent dynamical mean-field theory calculation for a three-band Hubbard model with strong spin-orbit coupling representing a minimal model of Ca$_2$RuO$_4$, demonstrating the promise of the method for modeling realistic strongly correlated multi-band materials. For both strong and weak coupling expansions of low and intermediate order, in which diagrams can be enumerated, our method provides an efficient, straightforward, and robust black-box evaluation procedure. In this sense, it fills a gap between diagrammatic approximations of the lowest order, which are simple and inexpensive but inaccurate, and those based on Monte Carlo sampling of high-order diagrams.
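
The separability argument can be seen in miniature: if two functions are sums of exponentials in imaginary time, as DLR expansions of Green's functions are, their convolution reduces to pairwise closed forms, so no dense quadrature in $\tau$ is ever needed. The frequencies and weights below are arbitrary stand-ins, not an actual DLR basis.

```python
# Toy separability: the imaginary-time convolution of two exponential sums
# (the form of a DLR expansion) reduces to pairwise closed forms.
import numpy as np

def conv_exp(a, b, tau):
    """(e^{-a s} * e^{-b s})(tau) = int_0^tau e^{-a s} e^{-b(tau-s)} ds, a != b."""
    return (np.exp(-a * tau) - np.exp(-b * tau)) / (b - a)

def convolve_expansions(wA, oA, wB, oB, tau):
    """Convolve sum_k wA_k e^{-oA_k s} with sum_l wB_l e^{-oB_l s}."""
    total = np.zeros_like(tau)
    for wa, oa in zip(wA, oA):
        for wb, ob in zip(wB, oB):
            total += wa * wb * conv_exp(oa, ob, tau)
    return total

tau = np.linspace(0.0, 5.0, 9)
wA, oA = [1.0, 0.5], [0.3, 2.0]      # A(s) = e^{-0.3 s} + 0.5 e^{-2 s}
wB, oB = [2.0], [1.1]                # B(s) = 2 e^{-1.1 s}
closed = convolve_expansions(wA, oA, wB, oB, tau)

# Brute-force trapezoid quadrature as a check of the closed form.
A = lambda s: np.exp(-0.3 * s) + 0.5 * np.exp(-2.0 * s)
B = lambda s: 2.0 * np.exp(-1.1 * s)
def brute(t, n=4001):
    s = np.linspace(0.0, t, n); v = A(s) * B(t - s)
    return np.sum((v[1:] + v[:-1]) / 2 * np.diff(s))

print(np.max(np.abs(closed[1:] - [brute(t) for t in tau[1:]])))  # tiny
```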

We survey recent contributions to finite element exterior calculus on manifolds and surfaces within a comprehensive formalism for the error analysis of vector-valued partial differential equations on manifolds. Our primary focus is on uniformly bounded commuting projections on manifolds: these projections map from Sobolev de Rham complexes onto finite element de Rham complexes, commute with the differential operators, and satisfy uniform bounds in Lebesgue norms. They enable the Galerkin theory of Hilbert complexes for a large range of intrinsic finite element methods on manifolds. However, these intrinsic finite element methods are generally not computable and thus primarily of theoretical interest. This leads to our second point: estimating the geometric variational crime incurred by transitioning to computable approximate problems. Lastly, our third point addresses how to estimate the approximation error of the intrinsic finite element method in terms of the mesh size. If the solution is not continuous, then such an estimate is achieved via modified Cl\'ement or Scott--Zhang interpolants that facilitate a broken Bramble--Hilbert lemma.
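
The commuting-diagram property at the heart of this theory can be checked in its simplest one-dimensional instance: the derivative of the piecewise-linear nodal interpolant of $u$ equals the $L^2$ projection of $u'$ onto piecewise constants. The sketch below verifies this numerically; the mesh and test function are arbitrary choices.

```python
# 1-D commuting diagram check: d(I_h u) = Pi_h(u'), where I_h is piecewise-
# linear nodal interpolation and Pi_h is L^2 projection onto piecewise
# constants (elementwise averages).
import numpy as np
from numpy.polynomial.legendre import leggauss

u, du = np.sin, np.cos               # smooth test function and its derivative
x = np.linspace(0.0, np.pi, 11)      # mesh nodes

# Interpolate, then differentiate: the slope of I_h u on each element.
slopes = np.diff(u(x)) / np.diff(x)

# Differentiate, then project: the mean of u' on each element, by Gauss
# quadrature (the weights sum to 2, so the mean is the weighted sum over 2).
q, w = leggauss(8)
means = np.array([np.sum(w * du((a + b) / 2 + (b - a) / 2 * q)) / 2
                  for a, b in zip(x[:-1], x[1:])])

print(np.max(np.abs(slopes - means)))   # ~1e-16: the diagram commutes
```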

Laguerre spectral approximations play an important role in the development of efficient algorithms for problems in unbounded domains. In this paper, we present a comprehensive convergence rate analysis of Laguerre spectral approximations for analytic functions. By exploiting contour integral techniques from complex analysis, we prove that Laguerre projection and interpolation methods of degree $n$ converge at the root-exponential rate $O(\exp(-2\rho\sqrt{n}))$ with $\rho>0$ when the underlying function is analytic inside and on a parabola with focus at the origin and vertex at $z=-\rho^2$. As far as we know, this is the first rigorous proof of root-exponential convergence of Laguerre approximations for analytic functions. Several important applications of our analysis are also discussed, including Laguerre spectral differentiations, Gauss--Laguerre quadrature rules, and the scaling factor and the Weeks method for the inversion of the Laplace transform, for which sharp convergence rate estimates are derived. Numerical experiments are presented to verify the theoretical results.
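
As a quick empirical companion to the theorem, one can observe the root-exponential rate in the decay of the Laguerre coefficients themselves (an assumption about how the rate manifests, consistent with the projection bound). For $f(x) = 1/(1+x)$, the singularity at $z = -1$ gives $\rho = 1$, so the slope of $\log|c_n|$ against $\sqrt{n}$ should be near $-2$; the quadrature size and index range are illustrative.

```python
# Empirical check of root-exponential coefficient decay for f(x) = 1/(1+x),
# whose singularity at z = -1 corresponds to rho = 1.
import numpy as np
from numpy.polynomial import laguerre

f = lambda x: 1.0 / (1.0 + x)
nodes, weights = laguerre.laggauss(150)   # Gauss-Laguerre, weight e^{-x}

def coeff(n):
    """c_n = int_0^inf f(x) L_n(x) e^{-x} dx; the L_n are orthonormal."""
    e_n = np.zeros(n + 1); e_n[n] = 1.0
    return np.sum(weights * f(nodes) * laguerre.lagval(nodes, e_n))

ns = np.arange(4, 64)
c = np.array([coeff(n) for n in ns])
slope = np.polyfit(np.sqrt(ns), np.log(np.abs(c)), 1)[0]
print(slope)    # close to -2, consistent with O(exp(-2*rho*sqrt(n)))
```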

A surprising 'converse to the polynomial method' of Aaronson et al. (CCC'16) shows that any bounded quadratic polynomial can be computed exactly in expectation by a 1-query algorithm up to a universal multiplicative factor related to the famous Grothendieck constant. Here we show that such a result does not generalize to quartic polynomials and 2-query algorithms, even when we allow for additive approximations. We also show that the additive approximation implied by their result is tight for bounded bilinear forms, which gives a new characterization of the Grothendieck constant in terms of 1-query quantum algorithms. Along the way we provide reformulations of the completely bounded norm of a form, and its dual norm.

For two real symmetric matrices, their eigenvalue configuration is the arrangement of their eigenvalues on the real line. In this paper, we provide quantifier-free necessary and sufficient conditions for two symmetric matrices to realize a given eigenvalue configuration. The basic idea is to generate a set of polynomials in the entries of the two matrices whose roots can be counted to uniquely determine the eigenvalue configuration. This result can be seen as a generalization of Descartes' rule of signs to the case of two real univariate polynomials.
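
To fix the object in question, the sketch below computes an eigenvalue configuration numerically as the interleaving word of the two sorted spectra; the point of the paper, by contrast, is to determine this word from the matrix entries alone, without computing eigenvalues.

```python
# The eigenvalue configuration as an interleaving word, computed here from
# the spectra directly, purely to illustrate the object being characterized.
import numpy as np

def eigenvalue_configuration(A, B):
    """Merged order of eig(A) and eig(B) on the real line, e.g. 'ABBABA'."""
    pairs = [(lam, 'A') for lam in np.linalg.eigvalsh(A)]
    pairs += [(lam, 'B') for lam in np.linalg.eigvalsh(B)]
    return ''.join(label for _, label in sorted(pairs))

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((2, 3, 3))
print(eigenvalue_configuration(X + X.T, Y + Y.T))   # e.g. 'ABABBA'
```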

The distribution of the minimum of Brownian motion or the Cauchy process is well known via the reflection principle. Here we consider the problem of finding the sample-by-sample minimum, which we call online minimum search. We consider the golden-section search method, but we show quantitatively that the bisection method is more efficient. The bisection method involves a hierarchical parameter that tunes the depth to which each sub-search is conducted, somewhat as a depth-first search generates a topological ordering on nodes. Finally, we consider the use of harmonic measure, a novel idea that has so far been unexplored.
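
For reference, here is the classical golden-section search that the abstract weighs against bisection, shown on a deterministic unimodal function; the paper's online setting applies such searches sample-by-sample to random paths, where the efficiency comparison becomes the substantive question.

```python
# Classical golden-section search on a unimodal function: each iteration
# shrinks the bracket by the factor 1/phi ~ 0.618.
import math

def golden_section_min(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):             # the minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                       # the minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_min(lambda t: (t - 1.3)**2, 0.0, 3.0))   # ~1.3
```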

For a matrix $A$ that satisfies Crouzeix's conjecture, we construct several classes of matrices from $A$ for which the conjecture also holds. We discover a new link between cyclicity and Crouzeix's conjecture, which shows that the conjecture holds in full generality if and only if it holds for the differentiation operator on a class of analytic functions. We pose several open questions which, if answered affirmatively, would prove Crouzeix's conjecture. We also begin an investigation into Crouzeix's conjecture for symmetric matrices, and in the case of $3 \times 3$ matrices, we show that the conjecture holds for symmetric matrices if and only if it holds for analytic truncated Toeplitz operators.
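
The inequality at stake can be tested numerically: Crouzeix's conjecture asserts $\|p(A)\| \le 2 \max_{z \in W(A)} |p(z)|$, where $W(A)$ is the numerical range. The sketch below traces the boundary of $W(A)$ by the standard rotation trick and checks a random matrix against a random polynomial; the matrix size and polynomial degree are arbitrary choices.

```python
# Numerical test of ||p(A)|| <= 2 max_{z in W(A)} |p(z)| for a random matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
p = np.polynomial.Polynomial(rng.standard_normal(4))     # random cubic

# Boundary of the numerical range W(A): for each angle, the top eigenvector
# of the rotated Hermitian part yields the boundary point v* A v.
boundary = []
for theta in np.linspace(0.0, 2 * np.pi, 400, endpoint=False):
    H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
    _, vecs = np.linalg.eigh(H)
    v = vecs[:, -1]
    boundary.append(v.conj() @ A @ v)

lhs = np.linalg.norm(sum(c * np.linalg.matrix_power(A, k)
                         for k, c in enumerate(p.coef)), 2)
rhs = 2 * max(abs(p(z)) for z in boundary)
print(lhs <= rhs)    # True here; no counterexample to the conjecture is known
```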

The paper introduces a new meshfree pseudospectral method based on Gaussian radial basis function (RBF) collocation to solve fractional Poisson equations. Hypergeometric functions are used to represent the fractional Laplacian of Gaussian RBFs, enabling efficient computation of the stiffness matrix entries. Unlike existing RBF-based methods, our approach ensures a Toeplitz structure in the stiffness matrix when the RBF centers are equally spaced, enabling efficient matrix-vector multiplications using fast Fourier transforms. We conduct a comprehensive study of shape parameter selection, addressing challenges related to ill-conditioning and numerical stability. Our main contributions are a rigorous stability analysis and error estimates for the Gaussian RBF collocation method, representing, to the best of our knowledge, a first rigorous analysis of RBF-based methods for fractional PDEs. We conduct numerical experiments to validate our analysis and provide practical insights for implementation.
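
The FFT-based Toeplitz matrix-vector product enabled by equally spaced centers is the generic circulant-embedding trick, sketched below (this is not the paper's code): embed the $n \times n$ Toeplitz matrix in a circulant of size $2n-1$ and multiply in $O(n \log n)$ rather than $O(n^2)$.

```python
# O(n log n) Toeplitz matvec by circulant embedding (generic trick, not the
# paper's implementation).
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column col and first row row
    (row[0] == col[0]) by x, via a circulant of size 2n - 1."""
    n = len(x)
    c = np.concatenate([col, row[:0:-1]])          # circulant's first column
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n - 1))
    return y[:n].real

# Check against the dense product; the stiffness matrix in the paper is
# symmetric, so row == col there.
rng = np.random.default_rng(0)
col = rng.standard_normal(6)
T = np.array([[col[abs(i - j)] for j in range(6)] for i in range(6)])
x = rng.standard_normal(6)
print(np.allclose(T @ x, toeplitz_matvec(col, col, x)))   # True
```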

When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of an undesirable spectrum, and then discuss practical solutions, including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods, and distributed methods, together with theoretical results for these algorithms. Third, we review existing research on global issues in neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
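
As a minimal illustration of the "careful initialization" remedy for exploding/vanishing signals, the sketch below compares a naive unit-variance initialization with He initialization (standard deviation $\sqrt{2/\mathrm{fan\text{-}in}}$) across a deep stack of ReLU layers; the depth and width are arbitrary.

```python
# Forward-pass scale across 50 ReLU layers: naive init explodes, He init
# (std = sqrt(2 / fan_in)) keeps the signal at a stable scale.
import numpy as np

rng = np.random.default_rng(0)
depth, width = 50, 256
x0 = rng.standard_normal(width)

for name, std in [("naive std=1", 1.0), ("He init", np.sqrt(2.0 / width))]:
    x = x0.copy()
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * std
        x = np.maximum(W @ x, 0.0)                 # ReLU layer
    print(name, float(np.linalg.norm(x)))          # explodes vs. stays O(10)
```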
