
Many models require integrals of high-dimensional functions: for instance, to obtain marginal likelihoods. Such integrals may be intractable, or too expensive to compute numerically. Instead, we can use the Laplace approximation (LA). The LA is exact if the function is proportional to a normal density; its effectiveness therefore depends on the function's true shape. Here, we propose the use of the probabilistic numerical framework to develop a diagnostic for the LA and its underlying shape assumptions: we model the function and its integral as a Gaussian process and devise a "test" by conditioning on a finite number of function values. The test is decidedly non-asymptotic and is not intended as a full substitute for numerical integration - rather, it is intended to test the feasibility of the assumptions underpinning the LA with minimal computation. We discuss approaches to optimize and design the test, apply it to known sample functions, and highlight the challenges of high dimensions.
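
As a point of reference for the shape assumption the test probes, here is a minimal sketch of the plain one-dimensional Laplace approximation (not the paper's GP-based diagnostic), compared against adaptive quadrature; the integrand f is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

# Laplace approximation of Z = integral of exp(-f(x)) dx: expand f around
# its mode x* so that Z ~ exp(-f(x*)) * sqrt(2*pi / f''(x*)).
def f(x):
    return 0.5 * x**2 + 0.1 * x**4   # illustrative non-Gaussian integrand

x_star = minimize(lambda z: f(z[0]), x0=[1.0]).x[0]

h = 1e-4                             # second derivative at the mode (FD)
f2 = (f(x_star + h) - 2 * f(x_star) + f(x_star - h)) / h**2

Z_la = np.exp(-f(x_star)) * np.sqrt(2 * np.pi / f2)
Z_quad, _ = quad(lambda x: np.exp(-f(x)), -np.inf, np.inf)
print(f"Laplace: {Z_la:.6f}  quadrature: {Z_quad:.6f}")
```

The quartic term thins the tails relative to a Gaussian, so the LA slightly overestimates the integral; it is exactly this kind of shape mismatch that the proposed test is designed to detect.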

Related Content

Integration, the VLSI Journal. Publisher: Elsevier.

Motivated by the need for a rigorous analysis of the numerical stability of variational least-squares kernel-based methods for solving second-order elliptic partial differential equations, we provide previously lacking stability inequalities. This fills a significant theoretical gap in the previous work [Comput. Math. Appl. 103 (2021) 1-11], which provided error estimates based on a stability conjecture. With the stability estimate now rigorously proven, we complete the theoretical foundations and compare the observed convergence behavior to the proven rates. Furthermore, we establish another stability inequality involving weighted-discrete norms, and prove that exact quadrature weights are not necessary for the weighted least-squares kernel-based collocation method to converge. Our novel theoretical insights are validated by numerical examples, which showcase the relative efficiency and accuracy of these methods on data sets with large mesh ratios. The results confirm our theoretical predictions regarding the performance of the variational least-squares kernel-based method, the least-squares kernel-based collocation method, and our new weighted least-squares kernel-based collocation method. Most importantly, all methods converge at the same rate, validating our proven convergence theory for the weighted least-squares method.
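
For readers unfamiliar with the setup, the following is a minimal sketch of least-squares kernel-based collocation for a 1-D Poisson problem with a Gaussian kernel; the shape parameter, point counts, and manufactured solution are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Least-squares kernel collocation for -u'' = f on (0,1), u(0) = u(1) = 0,
# with manufactured solution u(x) = sin(pi x).
eps = 5.0                                  # kernel shape parameter
centers = np.linspace(0, 1, 15)            # trial-space kernel centers
colloc = np.linspace(0, 1, 45)             # oversampled collocation points

def k(x, y):                               # Gaussian kernel
    return np.exp(-(eps * (x - y))**2)

def neg_kxx(x, y):                         # -d^2/dx^2 of the kernel
    d = x - y
    return (2 * eps**2 - 4 * eps**4 * d**2) * np.exp(-(eps * d)**2)

X, Y = np.meshgrid(colloc, centers, indexing="ij")
interior = (colloc > 0) & (colloc < 1)

# Stack PDE rows (interior) and boundary rows into one overdetermined system.
A = np.vstack([neg_kxx(X, Y)[interior], k(X, Y)[~interior]])
b = np.concatenate([np.pi**2 * np.sin(np.pi * colloc[interior]),
                    np.zeros((~interior).sum())])

c, *_ = np.linalg.lstsq(A, b, rcond=None)
u_h = k(X, Y) @ c
print("max error:", np.abs(u_h - np.sin(np.pi * colloc)).max())
```

Oversampling the collocation points relative to the trial centers is what makes the system least-squares rather than square collocation; the stability inequalities discussed above concern exactly this kind of overdetermined formulation.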

Detection of abrupt spatial changes in physical properties representing unique geometric features, such as buried objects, cavities, and fractures, is an important problem in geophysics and many engineering disciplines. In this context, simultaneous spatial field and geometry estimation methods that explicitly parameterize the background spatial field and the geometry of the embedded anomalies are of great interest. This paper introduces an advanced inversion procedure for simultaneous estimation using the domain independence property of the Karhunen-Loève (K-L) expansion. Previous methods pursuing this strategy face significant computational challenges: the associated integral eigenvalue problem (IEVP) needs to be solved repeatedly on evolving domains, and the shape derivatives in gradient-based algorithms require costly computations of the Moore-Penrose inverse. Leveraging the domain independence property of the K-L expansion, the proposed method avoids both of these bottlenecks; the IEVP is solved only once, on a fixed bounding domain. Comparative studies demonstrate that our approach yields a two-orders-of-magnitude improvement in K-L expansion gradient computation time. Inversion studies on one-dimensional and two-dimensional seepage flow problems highlight the benefits of incorporating geometry parameters along with spatial field parameters. The proposed method captures abrupt changes in hydraulic conductivity with fewer parameters and provides accurate estimates of boundary and spatial-field uncertainties, outperforming spatial-field-only estimation methods.
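
A minimal sketch of the underlying K-L construction, assuming a squared-exponential covariance and a Nyström/midpoint discretization of the IEVP on a fixed 1-D bounding grid (not the paper's inversion procedure):

```python
import numpy as np

# Solve the integral eigenvalue problem once on a fixed bounding domain,
# then sample a truncated K-L realization of the random field.
n, L, ell, sigma2 = 200, 1.0, 0.2, 1.0
x = (np.arange(n) + 0.5) * L / n           # midpoint grid on [0, L]
w = L / n                                  # quadrature weight

C = sigma2 * np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell)**2)
lam, phi = np.linalg.eigh(w * C)           # discrete IEVP: w*C phi = lam phi
idx = np.argsort(lam)[::-1]
lam, phi = lam[idx], phi[:, idx] / np.sqrt(w)   # L2-normalized eigenfunctions

m = np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99) + 1  # 99% energy
xi = np.random.default_rng(0).standard_normal(m)
field = phi[:, :m] @ (np.sqrt(lam[:m]) * xi)   # truncated K-L realization
print(f"retained {m} modes; field variance ~ {field.var():.3f}")
```

Because the eigenpairs live on the fixed bounding grid, the field can be restricted to any evolving subdomain without re-solving the IEVP, which is the domain-independence property the paper exploits.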

Most of the scientific literature on causal modeling considers the structural framework of Pearl and the potential-outcome framework of Rubin to be formally equivalent, and therefore interchangeably uses do-interventions and the potential-outcome subscript notation to write counterfactual outcomes. In this paper, we agnostically superimpose the two causal models to specify under which mathematical conditions structural counterfactual outcomes and potential outcomes need to, do not need to, can, or cannot be equal (almost surely or in law). Our comparison serves as a reminder that a structural causal model and a Rubin causal model compatible with the same observations do not have to coincide, and highlights real-world problems where they cannot even correspond. We then examine common claims and practices from the causal-inference literature in the light of these results. In doing so, we aim to clarify the relationship between the two causal frameworks and the interpretation of their respective counterfactuals.
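
A toy simulation may help fix ideas: the sketch below computes a structural counterfactual by abduction-action-prediction in an assumed linear SCM; the model and coefficients are illustrative only:

```python
import numpy as np

# Toy linear SCM: X := U_X, Y := 2X + U_Y (coefficients are illustrative).
rng = np.random.default_rng(1)
u_x = rng.standard_normal(10_000)
u_y = rng.standard_normal(10_000)
x = u_x                      # structural equation for X
y = 2 * x + u_y              # structural equation for Y

# Structural counterfactual Y_{X=1}: keep each unit's exogenous noise
# (abduction), replace the equation for X with X := 1 (action), and
# recompute Y (prediction).
y_x1 = 2 * 1 + u_y

# In this particular SCM the potential outcome Y(1) matches y_x1 unit by
# unit; the paper's point is that such agreement is a modelling assumption,
# not a theorem that holds for every pair of compatible causal models.
print("E[Y | do(X=1)] ~", y_x1.mean(), " E[Y] ~", y.mean())
```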

A new variant of the GMRES method is presented for solving linear systems with the same matrix and multiple right-hand sides that are obtained sequentially. The new method retains the following properties of the classical GMRES algorithm. The bases of both the search space and its image are kept orthonormal, which increases the robustness of the method. Moreover, there is no need to store both bases, since they are effectively represented within a common basis. In addition, our method is theoretically equivalent to the GCR method extended to the case of multiple right-hand sides, but it is more numerically robust and requires less memory. The main result of the paper is a mechanism for adding an arbitrary direction vector to the search space, which can easily be adapted for flexible GMRES or GMRES with deflated restarting.
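
For context, here is a minimal sketch of the classical single-right-hand-side GMRES algorithm the paper builds on (not the proposed multi-right-hand-side variant); the test matrix is an illustrative assumption:

```python
import numpy as np

def gmres(A, b, m=50, tol=1e-10):
    """Minimal full-memory GMRES sketch (zero initial guess, no restarts)."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]                      # expand the Krylov subspace
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-14:              # happy breakdown
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
        # Least-squares problem min ||beta*e1 - H_j y|| over the subspace.
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y) < tol * beta:
            return Q[:, :j + 1] @ y
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

rng = np.random.default_rng(0)
A = 2 * np.eye(100) + 0.01 * rng.standard_normal((100, 100))
b = np.ones(100)
x = gmres(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```

Note that the full basis Q must be stored; the memory saving claimed above comes from representing the search space and its image within one common basis across right-hand sides.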

Sequential positivity is often a necessary assumption for drawing causal inferences, such as through marginal structural modeling. Unfortunately, verifying this assumption can be challenging because it usually relies on multiple parametric propensity score models, which are unlikely to all be correctly specified. Therefore, we propose a new algorithm, called "sequential Positivity Regression Tree" (sPoRT), to check this assumption with greater ease under either static or dynamic treatment strategies. The algorithm also identifies the subgroups found to violate the assumption, allowing insights into the nature of the violations and potential solutions. We first present different versions of sPoRT, based on either stratifying on or pooling over time. We then illustrate its use in a real-life application involving HIV-positive children in Southern Africa, with and without pooling over time. An R notebook showing how to use sPoRT is available at github.com/ArthurChatton/sPoRT-notebook.
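
The general idea behind tree-based positivity checking can be sketched at a single time point as follows; this uses scikit-learn as a stand-in and is not the authors' sPoRT implementation:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic single-time-point data with a structural positivity violation:
# children under 2 are (almost) never treated.
rng = np.random.default_rng(0)
n = 5_000
age = rng.uniform(0, 10, n)
cd4 = rng.normal(500, 150, n)
treated = rng.binomial(1, np.where(age < 2, 0.01, 0.5))

X = np.column_stack([age, cd4])
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=200).fit(X, treated)

# Flag leaves whose estimated propensity falls outside [0.05, 0.95].
leaf = tree.apply(X)
for leaf_id in np.unique(leaf):
    mask = leaf == leaf_id
    prop = treated[mask].mean()
    if prop < 0.05 or prop > 0.95:
        print(f"violating leaf {leaf_id}: propensity {prop:.3f}, "
              f"{mask.mean():.1%} of the sample")
print(export_text(tree, feature_names=["age", "cd4"]))
```

The tree structure is what makes the diagnostic interpretable: a flagged leaf directly names the covariate subgroup (here, low age) in which treatment is never, or always, received.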

Multi-modal perception methods are thriving in the autonomous driving field because they make better use of complementary data from different sensors. Such methods depend on calibration and synchronization between sensors to obtain accurate environmental information. Space-alignment robustness in autonomous driving object detection has already been studied; research on time alignment, however, remains relatively scarce. Because LiDAR point clouds are more challenging to transfer in real time, our study uses historical LiDAR frames to better align features when the LiDAR data lag. Building on the SOTA GraphBEV framework, we design a Timealign module that predicts LiDAR features and combines them with observations to tackle such time misalignment.
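
A hypothetical sketch of such a time-alignment module is given below; the layer choices, feature shapes, and fusion scheme are assumptions for illustration, not the paper's Timealign design:

```python
import torch
import torch.nn as nn

# Predict the current-frame LiDAR BEV feature map from lagged historical
# frames, then fuse it with the (up-to-date) camera BEV features.
class TimeAlign(nn.Module):
    def __init__(self, channels=64, history=3):
        super().__init__()
        # Stack historical frames along channels, regress the current frame.
        self.predict = nn.Sequential(
            nn.Conv2d(history * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, lidar_history, camera_bev):
        # lidar_history: (B, T, C, H, W) lagged frames; camera_bev: (B, C, H, W)
        b, t, c, h, w = lidar_history.shape
        lidar_pred = self.predict(lidar_history.reshape(b, t * c, h, w))
        return self.fuse(torch.cat([lidar_pred, camera_bev], dim=1))

module = TimeAlign()
out = module(torch.randn(2, 3, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```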

Finite element discretization of Stokes problems can result in singular, inconsistent saddle point linear algebraic systems. This inconsistency can cause many iterative methods to fail to converge. In this work, we consider the lowest-order weak Galerkin finite element method to discretize Stokes flow problems and study a consistency enforcement that modifies the right-hand side of the resulting linear system. It is shown that this modification does not affect the optimal-order convergence of the numerical solution. Moreover, inexact block diagonal and block triangular Schur complement preconditioners are studied, together with the minimal residual method (MINRES) and the generalized minimal residual method (GMRES), for the iterative solution of the modified scheme. Bounds for the eigenvalues and for the MINRES/GMRES residuals are established. These bounds show that the convergence of MINRES and GMRES is independent of the viscosity parameter and the mesh size. The convergence of the modified scheme and the effectiveness of the preconditioners are verified using numerical examples in two and three dimensions.
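
The flavour of the preconditioned iteration can be sketched on a synthetic saddle point system; the blocks below are stand-ins, not the weak Galerkin Stokes discretization, and the diagonal Schur complement approximation is an illustrative assumption:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Synthetic saddle point system [[A, B^T], [B, 0]] with a block-diagonal
# preconditioner; the blocks stand in for discretized Stokes operators.
n, m = 200, 80
rng = np.random.default_rng(0)
A = sp.diags(rng.uniform(1.0, 2.0, n)) + sp.eye(n)   # SPD "velocity" block
B = sp.random(m, n, density=0.05, random_state=0)    # "divergence" block
K = sp.bmat([[A, B.T], [B, None]], format="csr")
rhs = np.ones(n + m)

# Block-diagonal preconditioner diag(A_hat, S_hat): cheap diagonal
# approximations of A and of the Schur complement B A^{-1} B^T.
s_diag = np.maximum((B @ B.T).diagonal(), 1e-8)
P_inv = sp.diags(np.concatenate([1.0 / A.diagonal(), 1.0 / s_diag]))
M = spla.LinearOperator((n + m, n + m), matvec=lambda v: P_inv @ v)

iters = [0]
def count(xk):
    iters[0] += 1

x, info = spla.minres(K, rhs, M=M, callback=count)
print("converged:", info == 0, "| iterations:", iters[0],
      "| residual:", np.linalg.norm(rhs - K @ x))
```

MINRES applies here because the saddle point matrix is symmetric indefinite; the preconditioner must be symmetric positive definite, which the diagonal choice guarantees.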

We examine the last-iterate convergence rate of Bregman proximal methods - from mirror descent to mirror-prox and its optimistic variants - as a function of the local geometry induced by the prox-mapping defining the method. For generality, we focus on local solutions of constrained, non-monotone variational inequalities, and we show that the convergence rate of a given method depends sharply on its associated Legendre exponent, a notion that measures the growth rate of the underlying Bregman function (Euclidean, entropic, or other) near a solution. In particular, we show that boundary solutions exhibit a stark separation of regimes between methods with a zero and non-zero Legendre exponent: the former converge at a linear rate, while the latter converge, in general, sublinearly. This dichotomy becomes even more pronounced in linearly constrained problems where methods with entropic regularization achieve a linear convergence rate along sharp directions, compared to convergence in a finite number of steps under Euclidean regularization.
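
The boundary-solution dichotomy can be illustrated on a toy linear program over the simplex, where the Euclidean method stops in finitely many steps while the entropic method converges geometrically; the problem data and step size are illustrative assumptions:

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1
    rho = np.nonzero(u > css / (np.arange(len(v)) + 1))[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0)

c = np.array([0.0, 0.2, 0.5, 1.0])     # minimize c @ x; solution: vertex e_1
eta, T = 0.1, 200
x_euc = x_ent = np.ones(4) / 4
gaps = []
for t in range(T):
    x_euc = proj_simplex(x_euc - eta * c)   # Euclidean (projected) prox step
    x_ent = x_ent * np.exp(-eta * c)        # entropic (multiplicative) step
    x_ent = x_ent / x_ent.sum()
    gaps.append((c @ x_euc, c @ x_ent))

for t in (10, 50, 199):
    print(f"t={t:3d}  Euclidean gap={gaps[t][0]:.2e}  entropic gap={gaps[t][1]:.2e}")
```

The Euclidean gap hits exactly zero after finitely many iterations, while the entropic gap decays like exp(-eta * 0.2 * t): the linear-rate-versus-finite-termination separation described above for sharp boundary solutions.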

The maximal regularity property of discontinuous Galerkin methods for linear parabolic equations is used, together with variational techniques, to establish a priori and a posteriori error estimates of optimal order under optimal regularity assumptions. The analysis is set in the maximal regularity framework of UMD Banach spaces. Similar results were proved in an earlier work based on the consistency analysis of Radau IIA methods. The present error analysis, which is based on variational techniques, is of independent interest, but the main motivation is that, in contrast to the earlier work, it extends to nonlinear parabolic equations. Both autonomous and nonautonomous linear equations are considered.
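
For orientation, the lowest-order dG(0) time stepping is algebraically equivalent to backward Euler; the sketch below applies it to a 1-D heat equation with finite differences in space, with illustrative discretization sizes (this is not the paper's analysis setting):

```python
import numpy as np

# dG(0) / backward Euler for u_t = u_xx on (0,1), homogeneous Dirichlet data.
n, steps, T = 100, 200, 0.1
h, tau = 1.0 / (n + 1), T / steps
x = np.linspace(h, 1 - h, n)

# Standard second-difference stiffness matrix (finite differences in space).
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.sin(np.pi * x)                       # initial condition
M = np.eye(n) + tau * A                     # (I + tau*A) u^{k+1} = u^k
for _ in range(steps):
    u = np.linalg.solve(M, u)

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
print("max error:", np.abs(u - exact).max())
```

Higher-order dG time stepping replaces this single implicit solve per step with a coupled stage system (the Radau IIA connection mentioned above), which is where the maximal regularity machinery earns its keep.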

Ensemble forecasts often outperform forecasts from individual standalone models, and have been used to support decision-making and policy planning in various fields. As collaborative forecasting efforts to create effective ensembles grow, so does interest in understanding individual models' relative importance in the ensemble. To this end, we propose two practical methods that measure the difference between ensemble performance when a given model is or is not included in the ensemble: a leave-one-model-out algorithm and a leave-all-subsets-of-models-out algorithm, the latter based on the Shapley value. We explore the relationship between these metrics, forecast accuracy, and the similarity of errors, both analytically and through simulations. Using US COVID-19 death forecasts, we illustrate how these measures capture the value a component model adds to an ensemble in the presence of other models. This study offers valuable insight into individual models' unique features within an ensemble, which standard accuracy metrics alone cannot reveal.
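
A minimal sketch of both proposed measures on synthetic point forecasts scored by MAE follows; the models, the MAE score, and the zero-forecast baseline for the empty ensemble are illustrative assumptions:

```python
import numpy as np
from itertools import combinations
from math import factorial

# Synthetic point forecasts from three models; MAE of the equal-weight mean
# ensemble as the performance measure.
rng = np.random.default_rng(0)
truth = rng.normal(size=50)
models = {name: truth + rng.normal(0.0, s, 50)
          for name, s in [("A", 0.5), ("B", 0.6), ("C", 2.0)]}

def mae(subset):
    pred = np.mean([models[m] for m in subset], axis=0) if subset else np.zeros(50)
    return np.abs(pred - truth).mean()

names = list(models)
n = len(names)
full = mae(names)
for m in names:
    others = [x for x in names if x != m]
    lomo = mae(others) - full                 # leave-one-model-out: >0 helps
    shap = sum(factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
               * (mae(list(s)) - mae(list(s) + [m]))      # marginal MAE drop
               for r in range(n) for s in combinations(others, r))
    print(f"model {m}: leave-one-out delta = {lomo:+.4f}, Shapley = {shap:+.4f}")
```

With these settings the noisy model C comes out as a negative contributor under both measures, while the Shapley variant additionally averages each model's marginal effect over every sub-ensemble rather than only the full one.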
