Probabilistic variants of Model Order Reduction (MOR) methods have recently emerged for improving stability and computational performance of classical approaches. In this paper, we propose a probabilistic Reduced Basis Method (RBM) for the approximation of a family of parameter-dependent functions. It relies on a probabilistic greedy algorithm with an error indicator that can be written as an expectation of some parameter-dependent random variable. Practical algorithms relying on Monte Carlo estimates of this error indicator are discussed. In particular, when using a Probably Approximately Correct (PAC) bandit algorithm, the resulting procedure is proven to be a weak greedy algorithm with high probability. Intended applications concern the approximation of a parameter-dependent family of functions for which we only have access to (noisy) pointwise evaluations. As a particular application, we consider the approximation of solution manifolds of linear parameter-dependent partial differential equations with a probabilistic interpretation through the Feynman-Kac formula.
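
As a toy illustration of the greedy procedure described above (the names and synthetic setup here are ours, not the paper's), the expectation-valued error indicator can be replaced by a Monte Carlo estimate, and the next parameter is the one maximizing that estimate:

```python
import random

# Hypothetical setup: each parameter mu has a true (unknown) projection error;
# the indicator is an expectation, accessible only through noisy samples.

def mc_error_indicator(true_err, n_samples=200, noise=0.1, rng=random):
    # Monte Carlo estimate of the expectation-valued indicator: average noisy
    # realizations whose mean is the true error.
    return sum(true_err + rng.gauss(0.0, noise) for _ in range(n_samples)) / n_samples

def probabilistic_greedy(true_errors, n_iters, rng=random):
    # At each step, enrich the basis with the parameter whose *estimated*
    # indicator is largest; with enough samples this reproduces a weak greedy
    # step with high probability.
    selected, remaining = [], dict(true_errors)
    for _ in range(n_iters):
        estimates = {mu: mc_error_indicator(e, rng=rng) for mu, e in remaining.items()}
        best = max(estimates, key=estimates.get)
        selected.append(best)
        del remaining[best]  # a selected parameter is approximated exactly afterwards
    return selected

print(probabilistic_greedy({"mu1": 1.0, "mu2": 0.3, "mu3": 0.7}, n_iters=2))
```

With enough Monte Carlo samples relative to the gaps between indicators, the estimated maximizer coincides with the true one with high probability, which is the mechanism behind the weak-greedy guarantee.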

The stability and optimal convergence of a non-uniform implicit-explicit L1 finite element method (IMEX-L1-FEM) are studied for a class of time-fractional linear partial differential/integro-differential equations with a non-self-adjoint elliptic part having (space-time) variable coefficients. The proposed scheme is based on a combination of an IMEX-L1 method on a graded mesh in the temporal direction and a finite element method in the spatial direction. With the help of a discrete fractional Gr\"{o}nwall inequality, optimal error estimates in the $L^2$- and $H^1$-norms are derived for the problem with initial data $u_0 \in H_0^1(\Omega)\cap H^2(\Omega)$. Under the higher regularity condition $u_0 \in \dot{H}^3(\Omega)$, a superconvergence result is established and, as a consequence, an $L^\infty$ error estimate is obtained for 2D problems. Numerical experiments are presented to validate our theoretical findings.
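
For intuition, here is a minimal sketch of the graded temporal mesh typically used with L1-type discretizations (the grading formula $t_n = T (n/N)^\gamma$ is standard; the function name is ours): it clusters points near $t = 0$, where solutions of time-fractional problems usually have a weak initial singularity, and $\gamma = 1$ recovers the uniform mesh.

```python
def graded_mesh(T, N, gamma):
    # t_n = T * (n/N)**gamma, n = 0, ..., N: points concentrate near t = 0
    # for gamma > 1, matching the initial weak singularity of the solution.
    return [T * (n / N) ** gamma for n in range(N + 1)]

mesh = graded_mesh(1.0, 8, 2.0)
steps = [b - a for a, b in zip(mesh, mesh[1:])]  # step sizes grow away from t = 0
```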

Data sets obtained from linking multiple files are frequently affected by mismatch error, as a result of non-unique or noisy identifiers used during record linkage. Accounting for such mismatch error in downstream analysis performed on the linked file is critical to ensure valid statistical inference. In this paper, we present a general framework to enable valid post-linkage inference in the challenging secondary analysis setting in which only the linked file is given. The proposed framework covers a wide selection of statistical models and can flexibly incorporate additional information about the underlying record linkage process. Specifically, we propose a mixture model for pairs of linked records whose two components reflect distributions conditional on match status, i.e., correct match or mismatch. Regarding inference, we develop a method based on composite likelihood and the EM algorithm as well as an extension towards a fully Bayesian approach. Extensive simulations and several case studies involving contemporary record linkage applications corroborate the effectiveness of our framework.
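
As a deliberately simplified, self-contained sketch of the two-component idea (a plain Gaussian mixture over a scalar per-pair comparison score, fitted by EM; the paper's actual model and composite-likelihood machinery are richer), the two components stand in for the "correct match" and "mismatch" distributions of linked record pairs:

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_component(data, iters=50):
    # crude initialization: quartiles of the sorted scores
    data = sorted(data)
    pi = 0.5  # mixing proportion of the "match" component
    mu0, mu1 = data[len(data) // 4], data[3 * len(data) // 4]
    s0 = s1 = 1.0
    for _ in range(iters):
        # E-step: responsibility of the "match" component for each pair
        r = [pi * norm_pdf(x, mu1, s1) /
             (pi * norm_pdf(x, mu1, s1) + (1 - pi) * norm_pdf(x, mu0, s0))
             for x in data]
        # M-step: weighted means, variances, and mixing proportion
        w1 = sum(r)
        w0 = len(data) - w1
        mu1 = sum(ri * x for ri, x in zip(r, data)) / w1
        mu0 = sum((1 - ri) * x for ri, x in zip(r, data)) / w0
        s1 = max(1e-3, math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / w1))
        s0 = max(1e-3, math.sqrt(sum((1 - ri) * (x - mu0) ** 2 for ri, x in zip(r, data)) / w0))
        pi = w1 / len(data)
    return pi, (mu0, s0), (mu1, s1)
```

The E-step responsibilities are exactly the posterior probabilities of match status given the score, which is the quantity downstream analyses condition on.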

The Classification Tree (CT) is one of the most common models in interpretable machine learning. Although such models are usually built with greedy strategies, in recent years, thanks to remarkable advances in Mixed-Integer Programming (MIP) solvers, several exact formulations of the learning problem have been developed. In this paper, we argue that some of the most relevant ones among these training models can be encapsulated within a general framework, whose instances are shaped by the specification of loss functions and regularizers. Next, we introduce a novel realization of this framework: specifically, we consider the logistic loss, handled in the MIP setting by a piecewise linear approximation, and couple it with $\ell_1$-regularization terms. The resulting Optimal Logistic Tree model numerically proves to be able to induce trees with enhanced interpretability features and competitive generalization capabilities, compared to the state-of-the-art MIP-based approaches.
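
The piecewise linear handling of the logistic loss can be sketched as follows: since $\log(1 + e^{-z})$ is convex, the pointwise maximum of its tangent lines at a few breakpoints gives a lower approximation expressible with linear constraints in a MIP (the breakpoint choice below is our illustration, not the paper's):

```python
import math

def logistic_loss(z):
    return math.log(1.0 + math.exp(-z))

def tangent(z0):
    # tangent line of the logistic loss at z0: slope = -sigmoid(-z0)
    slope = -1.0 / (1.0 + math.exp(z0))
    intercept = logistic_loss(z0) - slope * z0
    return slope, intercept

BREAKPOINTS = [-4.0, -2.0, 0.0, 2.0, 4.0]  # illustrative choice

def pw_linear_loss(z):
    # max over tangent lines: a convex, piecewise linear lower bound that is
    # exact at each breakpoint (this max is what linear MIP constraints encode)
    return max(s * z + b for s, b in (tangent(z0) for z0 in BREAKPOINTS))
```

In the MIP model, "loss >= s*z + b" for each tangent is a set of linear constraints; minimizing the loss variable then evaluates the pointwise maximum automatically.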

Models with random effects, such as generalised linear mixed models (GLMMs), are often used for analysing clustered data. Parameter inference with these models is difficult because of the presence of cluster-specific random effects, which must be integrated out when evaluating the likelihood function. Here, we propose a sequential variational Bayes algorithm, called Recursive Variational Gaussian Approximation for Latent variable models (R-VGAL), for estimating parameters in GLMMs. The R-VGAL algorithm operates on the data sequentially, requires only a single pass through the data, and can provide parameter updates as new data are collected without the need to re-process previous data. At each update, the R-VGAL algorithm requires the gradient and Hessian of a "partial" log-likelihood function evaluated at the new observation, which are generally not available in closed form for GLMMs. To circumvent this issue, we propose using an importance-sampling-based approach for estimating the gradient and Hessian via Fisher's and Louis' identities. We find that R-VGAL can be unstable when traversing the first few data points, but that this issue can be mitigated by using a variant of variational tempering in the initial steps of the algorithm. Through illustrations on both simulated and real datasets, we show that R-VGAL provides good approximations to the exact posterior distributions, that it can be made robust through tempering, and that it is computationally efficient.
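
The importance-sampling step behind Fisher's identity can be illustrated on a toy random-effects model (our own simplification: $y \mid b \sim N(\mu + b, 1)$, $b \sim N(0, 1)$, with the prior as proposal): the score of the marginal likelihood equals the posterior expectation of the complete-data score, estimated here by self-normalized importance sampling.

```python
import math
import random

def score_estimate(y, mu, n=20000, seed=0):
    # Self-normalized importance sampling with the random-effects prior as
    # proposal: weights are the likelihood p(y | b), and the integrand is the
    # complete-data score d/dmu log p(y | b, mu) = (y - mu - b).
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        b = rng.gauss(0.0, 1.0)                 # draw from the prior proposal
        w = math.exp(-0.5 * (y - mu - b) ** 2)  # likelihood = importance weight
        num += w * (y - mu - b)
        den += w
    return num / den
```

Here the marginal is $N(\mu, 2)$, so the exact score is $(y - \mu)/2$ and the estimate can be checked directly; R-VGAL applies the same identity with GLMM likelihoods, where no closed form exists.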

Diffusion model-based inverse problem solvers have shown impressive performance, but are limited in speed, mostly as they require reverse diffusion sampling starting from noise. Several recent works have tried to alleviate this problem by building a diffusion process that directly bridges the clean and the corrupted data for specific inverse problems. In this paper, we first unify these existing works under the name Direct Diffusion Bridges (DDB), showing that while motivated by different theories, the resulting algorithms only differ in the choice of parameters. Then, we highlight a critical limitation of the current DDB framework, namely that it does not ensure data consistency. To address this problem, we propose a modified inference procedure that imposes data consistency without the need for fine-tuning. We term the resulting method data-Consistent DDB (CDDB); it outperforms its inconsistent counterpart in terms of both perception and distortion metrics, thereby effectively pushing the Pareto frontier toward the optimum. Our proposed method achieves state-of-the-art results on both evaluation criteria, showcasing its superiority over existing methods.
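
The data-consistency idea can be seen on a toy inpainting-type degradation (our own choice of forward operator, for illustration only; the paper's CDDB imposes consistency during bridge sampling): resetting the observed entries of any intermediate estimate to the measurements enforces agreement with the data exactly, with no fine-tuning.

```python
def data_consistency(estimate, measurements, mask):
    # Keep the estimate where nothing was measured; overwrite with the
    # measurements where the mask marks an observed entry.
    return [y if m else x for x, y, m in zip(estimate, measurements, mask)]

# observed entries (mask True) are forced back to the measured values
x_hat = data_consistency([0.2, 0.9, 0.4], [1.0, 0.0, 0.5], [True, False, True])
```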

In recent years, large convolutional neural networks have been widely used as tools for image deblurring, because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem whose solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem when trained end-to-end. In this paper, we propose some strategies to improve stability without losing too much accuracy when deblurring images with deep-learning-based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not excessively amplify noise in the computed image. Second, we introduce a unified framework where a pre-processing step balances the lack of stability of the subsequent neural-network-based step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments are performed to verify the accuracy and stability of the proposed approaches for image deblurring when unknown or unquantified noise is present; the results confirm that they improve the network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
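
A minimal sketch of the two-step framework (pre-processor followed by a learned step), with a 1D median filter standing in for the parameter-free denoiser and a placeholder identity for the network (both are our illustrative stand-ins, not the paper's components):

```python
def median_filter_1d(signal, half=1):
    # simple parameter-free denoiser: each sample is replaced by the median
    # of its neighborhood, which suppresses isolated noise spikes
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sorted(window)[len(window) // 2])
    return out

def stabilized_deblur(signal, network=lambda s: s):
    # pre-processing balances the instability of the subsequent learned step;
    # `network` is a placeholder for the (small) deblurring network
    return network(median_filter_1d(signal))
```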

In this paper, we discuss reduced order modelling approaches to bifurcating systems arising from continuum mechanics benchmarks. The investigation of beam deflection is a relevant topic with fundamental implications for structural design and health analysis. When beams are exposed to external forces, their equilibrium state can undergo a sudden variation; this happens when a compression, acting along the axial boundaries, exceeds a certain critical value. Linear elasticity models are not complex enough to capture the so-called beam buckling, and nonlinear constitutive relations, such as hyperelastic laws, are required to investigate this behavior, whose mathematical counterpart is represented by bifurcating phenomena. The numerical analysis of the bifurcating modes and of the post-buckling behavior is usually unaffordable by means of standard high-fidelity techniques (such as the Finite Element Method), and the efficiency of Reduced Order Models (ROMs), e.g.\ based on Proper Orthogonal Decomposition (POD), is necessary to obtain a consistent speed-up in the reconstruction of the bifurcation diagram. The aim of this work is to provide insights regarding the application of POD-based ROMs to buckling phenomena occurring in 2-D and 3-D beams governed by different constitutive relations. The benchmarks involve multi-parametric settings with geometrically parametrized domains, where the buckling location depends on the material and geometrical properties induced by the parameters. Finally, we exploit the notions acquired from these toy problems to simulate a real case scenario coming from the Norwegian petroleum industry.
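
A compact sketch of the POD compression step underlying such ROMs, via the method of snapshots: the dominant mode is extracted from the snapshot Gram matrix, here by plain power iteration (a pure-Python toy, not the paper's implementation):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pod_leading_mode(snapshots, iters=100):
    # Method of snapshots: build the Gram matrix C_ij = <s_i, s_j>, find its
    # leading eigenvector by power iteration, then lift back to physical space.
    n = len(snapshots)
    C = [[dot(snapshots[i], snapshots[j]) for j in range(n)] for i in range(n)]
    a = [1.0] * n
    for _ in range(iters):
        a = [sum(C[i][j] * a[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in a) ** 0.5
        a = [x / norm for x in a]
    mode = [sum(a[i] * snapshots[i][k] for i in range(n))
            for k in range(len(snapshots[0]))]
    norm = sum(x * x for x in mode) ** 0.5
    return [x / norm for x in mode]
```

In a ROM pipeline, solutions for new parameter values are then sought in the span of the first few such modes, which is what yields the speed-up in reconstructing bifurcation diagrams.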

The objective of this article is to introduce a novel technique for computing numerical solutions to the nonlinear inverse heat conduction problem. This involves solving nonlinear parabolic equations with Cauchy data provided on one side $\Gamma$ of the boundary of the computational domain $\Omega$. The key step of our proposed method is the truncation of the Fourier series of the solution to the governing equation. The truncation technique enables us to derive a system of 1D ordinary differential equations. Then, we employ the well-known Runge-Kutta method to solve this system, which aids in addressing the nonlinearity and the lack of data on $\partial \Omega \setminus \Gamma$. This new approach is called the dimensional reduction method. By converting the high-dimensional problem into a 1D problem, we achieve exceptional computational speed. Numerical results are provided to support the effectiveness of our approach.
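
The truncation-to-ODE idea can be illustrated on the forward 1D heat equation $u_t = u_{xx}$ on $(0,\pi)$ (a deliberately simple linear surrogate for the paper's nonlinear Cauchy problem): each truncated sine-mode coefficient satisfies $a_k'(t) = -k^2 a_k(t)$, and the resulting ODE system is integrated with the classical Runge-Kutta (RK4) scheme.

```python
import math

def rk4_step(f, t, y, h):
    # one classical fourth-order Runge-Kutta step for y' = f(t, y), y a list
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def heat_modes(a0, T=0.1, steps=100):
    # truncated Fourier coefficients of u_t = u_xx: a_k' = -k^2 a_k
    def rhs(t, a):
        return [-(k + 1) ** 2 * ak for k, ak in enumerate(a)]
    a, h = list(a0), T / steps
    for n in range(steps):
        a = rk4_step(rhs, n * h, a, h)
    return a

a = heat_modes([1.0, 0.5])  # exact decay: [exp(-0.1), 0.5*exp(-0.4)]
```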

Bayesian methods are commonly applied to solve image analysis problems such as noise reduction, feature enhancement, and object detection. A primary limitation of these approaches is the computational complexity arising from the interdependence of neighboring pixels, which limits the ability to perform full posterior sampling through Markov chain Monte Carlo (MCMC). To alleviate this problem, we develop a new posterior sampling method that is based on modeling the prior and likelihood in the space of the Fourier transform of the image. One advantage of Fourier-based methods is that many spatially correlated processes in image space can be represented via independent processes over Fourier space. A recent approach known as Bayesian Image Analysis in Fourier Space (BIFS) has introduced parameter functions to describe prior expectations about image properties in Fourier space. To date, BIFS has relied on Maximum a Posteriori (MAP) estimation for generating posterior estimates, providing just a single point estimate. The work presented here develops a posterior sampling approach for BIFS that can explore the full posterior distribution while continuing to take advantage of the independence modeling over Fourier space. As a result, computational efficiency is improved over that of conventional Bayesian image analysis, and the mixing concerns that commonly arise in high-dimensional MCMC sampling problems are avoided. Implementation results and details are provided using simulated data.
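
To see why Fourier-space independence helps, assume (for illustration only; this is not the BIFS model itself) a zero-mean Gaussian prior and Gaussian noise specified independently per frequency: the posterior then factorizes over coefficients and admits exact independent draws via the conjugate Gaussian update, with no Markov chain at all.

```python
import random

def posterior_draw(y_coef, prior_var, noise_var, rng):
    # conjugate Gaussian update for one Fourier coefficient (prior mean 0):
    # precision adds, and the posterior mean shrinks the observation
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * y_coef / noise_var
    return rng.gauss(post_mean, post_var ** 0.5)

def sample_image_coeffs(y_coefs, prior_var, noise_var, seed=0):
    # one joint posterior sample = independent draws, one per frequency
    rng = random.Random(seed)
    return [posterior_draw(y, prior_var, noise_var, rng) for y in y_coefs]
```

An inverse Fourier transform of such a draw gives a posterior image sample; repeating with fresh seeds gives i.i.d. samples, avoiding MCMC mixing issues entirely.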

Given $n$ samples of a function $f\colon D\to\mathbb C$ in random points drawn with respect to a measure $\varrho_S$, we develop a theoretical analysis of the $L_2(D, \varrho_T)$-approximation error. For a particular choice of $\varrho_S$ depending on $\varrho_T$, it is known that the weighted least squares method from finite-dimensional function spaces $V_m$, $\dim(V_m) = m < \infty$, has the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\varrho_S$ and the target measure $\varrho_T$ differ, we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade-off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in uniform random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H_{\mathrm{mix}}^2$. Overcoming numerical issues of this $H_{\mathrm{mix}}^2$ basis gives a novel stable approximation method with quadratic error decay. Numerical experiments indicate the applicability of our results.
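
A minimal weighted least squares sketch in this spirit (uniform weights on $[0,1]$ for simplicity; in the domain-adaptation setting, density-based weights $w = \mathrm{d}\varrho_T/\mathrm{d}\varrho_S$ would enter each sum), fitting $\mathrm{span}\{1, x\}$ from random samples via the $2\times 2$ normal equations:

```python
def wls_line(xs, ys, ws):
    # weighted least squares for span{1, x}: solve the 2x2 normal equations
    # [[sum w, sum w*x], [sum w*x, sum w*x^2]] [b0, b1] = [sum w*y, sum w*x*y]
    s_w = sum(ws)
    s_x = sum(w * x for w, x in zip(ws, xs))
    s_xx = sum(w * x * x for w, x in zip(ws, xs))
    s_y = sum(w * y for w, y in zip(ws, ys))
    s_xy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = s_w * s_xx - s_x * s_x
    intercept = (s_xx * s_y - s_x * s_xy) / det  # Cramer's rule
    slope = (s_w * s_xy - s_x * s_y) / det
    return intercept, slope
```

With noisy samples, enlarging the basis reduces bias but inflates variance, which is the trade-off the bounds above quantify in terms of $m$.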
