
Plug-and-play (PnP) priors are a well-known class of methods for solving imaging inverse problems by computing fixed points of operators that combine physical measurement models with learned image denoisers. While PnP methods have been extensively used for image recovery with known measurement operators, there is little work on PnP for solving blind inverse problems, in which the measurement operator is also unknown. We address this gap by presenting a new block-coordinate PnP (BC-PnP) method that efficiently solves this joint estimation problem by introducing learned denoisers as priors on both the unknown image and the unknown measurement operator. We present a new convergence theory for BC-PnP compatible with blind inverse problems by considering nonconvex data-fidelity terms and expansive denoisers. Our theory analyzes the convergence of BC-PnP to a stationary point of an implicit function associated with an approximate minimum mean-squared error (MMSE) denoiser. We numerically validate our method on two blind inverse problems: automatic coil sensitivity estimation in magnetic resonance imaging (MRI) and blind image deblurring. Our results show that BC-PnP provides an efficient and principled framework for using denoisers as PnP priors to jointly estimate measurement operators and images.
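
The block-coordinate structure can be illustrated with a minimal sketch, assuming user-supplied gradients of the data-fidelity term and pretrained denoisers; all names (`bc_pnp`, `grad_x`, `denoise_theta`, etc.) are illustrative, not the paper's API, and the exact update rule may differ from the authors' scheme:

```python
def bc_pnp(x0, theta0, grad_x, grad_theta, denoise_x, denoise_theta,
           gamma=1e-2, n_iter=100):
    """Minimal block-coordinate PnP sketch: alternate data-fidelity
    gradient steps with learned denoisers acting as priors on the image
    x and on the parameters theta of the unknown measurement operator."""
    x, theta = x0, theta0
    for _ in range(n_iter):
        # Image block: gradient step on the data-fit term, then denoise.
        x = denoise_x(x - gamma * grad_x(x, theta))
        # Operator block: same pattern on the operator parameters.
        theta = denoise_theta(theta - gamma * grad_theta(x, theta))
    return x, theta
```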

Related Content

The theory of mixed finite element methods for solving different types of elliptic partial differential equations in saddle-point formulation has been well established for many decades. However, this topic has mostly been studied for variational formulations in which both the shape- and test-pairs of primal variable and multiplier are defined on the same finite element product space. Whenever these two product spaces differ, the saddle point problem is asymmetric. It turns out that the conditions on the finite element product spaces stipulated in the few works on this case may be of limited use in practice. The purpose of this paper is to provide an in-depth analysis of the well-posedness and the uniform stability of asymmetric approximate saddle point problems, based on the theory of continuous linear operators on Hilbert spaces. Our approach leads to necessary and sufficient conditions for these properties to hold, expressed in a readily exploitable form with sharp constants. In particular, standard interpolation theory suffices to estimate the error of a conforming method.
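
For concreteness, a generic discrete saddle point problem of the asymmetric kind discussed above can be written as follows (schematic notation of ours): find $(u_h, p_h) \in V_h \times Q_h$ such that

$$
\begin{aligned}
a(u_h, v_h) + b(v_h, p_h) &= f(v_h) \quad && \forall\, v_h \in V_h', \\
c(u_h, q_h) &= g(q_h) \quad && \forall\, q_h \in Q_h',
\end{aligned}
$$

where the shape space $V_h \times Q_h$ and the test space $V_h' \times Q_h'$ need not coincide; the classical symmetric theory corresponds to $V_h' \times Q_h' = V_h \times Q_h$ with $c = b$.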

In this paper we demonstrate how sub-Riemannian geometry can be used for manifold learning and surface reconstruction by combining local linear approximations of a point cloud to obtain lower-dimensional bundles. Local approximations obtained by local PCAs are collected into a rank-$k$ tangent subbundle on $\mathbb{R}^d$, $k<d$, which we call a principal subbundle. This determines a sub-Riemannian metric on $\mathbb{R}^d$. We show that sub-Riemannian geodesics with respect to this metric can successfully be applied to a number of important problems, such as the explicit construction of an approximating submanifold $M$, the construction of a representation of the point cloud in $\mathbb{R}^k$, and the computation of distances between observations that take the learned geometry into account. The reconstruction is guaranteed to equal the true submanifold in the limit case where tangent spaces are estimated exactly. Via simulations, we show that the framework is robust when applied to noisy data. Furthermore, the framework generalizes to observations on an a priori known Riemannian manifold.
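
The local-PCA step that produces the rank-$k$ frames can be sketched as follows; this is an illustrative fragment under assumed conventions (fixed-radius neighbourhoods, at least $k$ neighbours per point), not the authors' implementation:

```python
import numpy as np

def principal_subbundle_frames(points, k, radius):
    """At each base point, run a local PCA on the neighbours within
    `radius` and keep the top-k principal directions as an approximate
    rank-k tangent frame. `points` is an (n, d) array; each
    neighbourhood is assumed to contain at least k points."""
    frames = []
    for p in points:
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        centered = nbrs - nbrs.mean(axis=0)
        # SVD of the centred neighbourhood; rows of vt are principal axes.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        frames.append(vt[:k])  # rank-k frame at this base point
    return np.stack(frames)
```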

We consider the problem of mixed sparse linear regression with two components, where two real $k$-sparse signals $\beta_1, \beta_2$ are to be recovered from $n$ unlabelled noisy linear measurements. The sparsity is allowed to be sublinear in the dimension, and the additive noise is assumed to be independent Gaussian with variance $\sigma^2$. Prior work has shown that the problem suffers from a $\frac{k}{\mathrm{SNR}^2}$-to-$\frac{k^2}{\mathrm{SNR}^2}$ statistical-to-computational gap, resembling other computationally challenging high-dimensional inference problems such as Sparse PCA and Robust Sparse Mean Estimation; here $\mathrm{SNR}$ is the signal-to-noise ratio. We establish the existence of a more extensive computational barrier for this problem through the method of low-degree polynomials, but show that the problem is computationally hard only in a very narrow symmetric parameter regime. We identify a smooth information-computation tradeoff between the sample complexity $n$ and the runtime of any randomized algorithm in this hard regime. Via a simple reduction, this provides novel rigorous evidence for the existence of a computational barrier to solving exact support recovery in sparse phase retrieval with sample complexity $n = \tilde{o}(k^2)$. Our second contribution is to analyze a simple thresholding algorithm which, outside of the narrow regime where the problem is hard, solves the associated mixed regression detection problem in $O(np)$ time with a square-root factor fewer samples, matching the sample complexity required for (non-mixed) sparse linear regression; this allows the recovery problem to be subsequently solved by state-of-the-art techniques from the dense case. As a special case of our results, we show that this simple algorithm is order-optimal among a large family of algorithms in solving exact signed support recovery in sparse linear regression.
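
The measurement model described above can be written explicitly as follows (a schematic formulation in our own notation; the paper's precise scaling assumptions are omitted):

$$
y_i = \langle x_i, \beta_{z_i} \rangle + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2), \qquad i = 1, \dots, n,
$$

where each label $z_i \in \{1, 2\}$ selecting the active component is unobserved, and $\beta_1, \beta_2 \in \mathbb{R}^p$ are $k$-sparse.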

Many stochastic processes in the physical and biological sciences can be modelled using Brownian dynamics with multiplicative noise. However, numerical integrators for these processes can lose accuracy or even fail to converge when the diffusion term is configuration-dependent. One remedy is to construct a transform to a constant-diffusion process and sample the transformed process instead. In this work, we explain how coordinate-based and time-rescaling-based transforms can be used, either individually or in combination, to map a general class of variable-diffusion Brownian motion processes into constant-diffusion ones. The transforms are invertible, thus allowing recovery of the original dynamics. We motivate our methodology using examples in one dimension before considering multivariate diffusion processes. We illustrate the benefits of the transforms through numerical simulations, demonstrating how the right combination of integrator and transform can improve computational efficiency and the order of convergence to the invariant distribution. Notably, the transforms that we derive are applicable to a class of multibody, anisotropic Stokes-Einstein diffusion with applications in biophysical modelling.
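
In one dimension, the classical Lamperti transform illustrates the coordinate-based case (the paper's constructions, including the multivariate and time-rescaling variants, are more general). For an Itô SDE $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t$ with $\sigma > 0$, setting $Y_t = F(X_t)$ with $F'(x) = 1/\sigma(x)$ gives, by Itô's formula,

$$
dY_t = \left( \frac{\mu(X_t)}{\sigma(X_t)} - \frac{1}{2}\,\sigma'(X_t) \right) dt + dW_t,
$$

a unit-diffusion process; since $F$ is strictly increasing, the transform is invertible and the original dynamics can be recovered.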

Nurmuhammad et al. developed Sinc-Nystr\"{o}m methods for initial value problems in which the solutions exhibit exponentially decaying end behavior. In these methods, the Single-Exponential (SE) transformation or the Double-Exponential (DE) transformation is combined with the Sinc approximation. Hara and Okayama improved these transformations to attain a better convergence rate, which was later supported by theoretical error analyses. However, these methods have a computational drawback owing to the inclusion of a special function in the basis functions. To address this issue, Okayama and Hara proposed Sinc-collocation methods that do not include any special function in the basis functions. This study conducts error analyses of these methods.
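
At the core of all the methods above is the standard Sinc approximation on an equispaced grid of step size $h$ (shown here in its basic form; the SE or DE variable transformation is composed with it to map the problem interval onto the real line):

$$
f(x) \approx \sum_{j=-N}^{N} f(jh)\, S(j,h)(x), \qquad S(j,h)(x) = \frac{\sin\bigl(\pi(x - jh)/h\bigr)}{\pi(x - jh)/h}.
$$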

Interpreting objects with basic geometric primitives has long been studied in computer vision. Among geometric primitives, superquadrics are well known for their ability to represent a wide range of shapes with few parameters. However, as the first and foremost step, recovering superquadrics accurately and robustly from 3D data remains challenging. Existing methods are subject to local optima and are sensitive to noise and outliers in real-world scenarios, resulting in frequent failures to capture geometric shapes. In this paper, we propose the first probabilistic method to recover superquadrics from point clouds. Our method builds a Gaussian-uniform mixture model (GUM) on the parametric surface of a superquadric, which explicitly models the generation of outliers and noise. The superquadric recovery is formulated as a Maximum Likelihood Estimation (MLE) problem. We propose an algorithm, Expectation, Maximization, and Switching (EMS), to solve this problem, where: (1) outliers are predicted from the posterior perspective; (2) the superquadric parameter is optimized by the trust-region reflective algorithm; and (3) local optima are avoided by globally searching and switching among parameters encoding similar superquadrics. We show that our method can be extended to multi-superquadric recovery for complex objects. The proposed method outperforms the state-of-the-art in terms of accuracy, efficiency, and robustness on both synthetic and real-world datasets. The code is available at //github.com/bmlklwx/EMS-superquadric_fitting.git.
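
For reference, a superquadric in canonical pose is the surface defined by the standard implicit (inside-outside) function, with scale parameters $a_x, a_y, a_z$ and shape exponents $\epsilon_1, \epsilon_2$ (a rigid transformation supplies the remaining pose parameters):

$$
\left( \left(\frac{x}{a_x}\right)^{2/\epsilon_2} + \left(\frac{y}{a_y}\right)^{2/\epsilon_2} \right)^{\epsilon_2/\epsilon_1} + \left(\frac{z}{a_z}\right)^{2/\epsilon_1} = 1.
$$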

Observational studies are frequently used to estimate the effect of an exposure or treatment on an outcome. To obtain an unbiased estimate of the treatment effect, it is crucial to measure the exposure accurately. A common type of exposure misclassification is recall bias, which occurs in retrospective cohort studies when study subjects may inaccurately recall their past exposure. Specifically, differential recall bias can be problematic when examining the effect of a self-reported binary exposure since the magnitude of recall bias can differ between groups. In this paper, we provide the following contributions: 1) we derive bounds for the average treatment effect (ATE) in the presence of recall bias; 2) we develop several estimation approaches under different identification strategies; 3) we conduct simulation studies to evaluate their performance under several scenarios of model misspecification; 4) we propose a sensitivity analysis method that can examine the robustness of our results with respect to different assumptions; and 5) we apply the proposed framework to an observational study, estimating the effect of childhood physical abuse on adulthood mental health.
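
In potential-outcome notation (ours, not necessarily the paper's), the estimand is

$$
\mathrm{ATE} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)],
$$

where $Y(a)$ denotes the outcome under exposure level $a \in \{0,1\}$; recall bias means the analysis observes only a self-reported exposure $A^*$ that may differ from the true exposure $A$, with misclassification rates that can differ between comparison groups (differential recall bias).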

In this article, we present an overview of different a posteriori error analysis and post-processing methods proposed in the context of nonlinear eigenvalue problems, e.g. those arising in electronic structure calculations for the computation of the ground state, and compare them. We provide two equivalent error reconstructions, based either on a second-order Taylor expansion of the minimized energy or on a first-order expansion of the nonlinear eigenvalue equation. We then show how several a posteriori error estimations as well as post-processing methods can be formulated as specific applications of the derived reconstructed errors, and we compare their range of applicability as well as their numerical cost and precision.
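
Schematically, the nonlinear eigenvalue problems in question take the form (notation ours): find $(\lambda, u)$ with $\|u\| = 1$ such that

$$
A(u)\, u = \lambda\, u,
$$

where the operator $A$ depends on the sought eigenfunction itself, as in Kohn-Sham or Gross-Pitaevskii models; equivalently, $u$ is a constrained minimizer of the associated energy functional, which is what makes both the Taylor-expansion and eigenvalue-equation viewpoints on the error available.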

In this paper, we propose a new formulation and a suitable finite element method for the steady coupling of viscous flow in deformable porous media using divergence-conforming filtration fluxes. The proposed method is based on the use of parameter-weighted spaces, which allows for a more accurate and robust analysis of the continuous and discrete problems. Furthermore, we conduct a solvability analysis of the proposed method and derive optimal error estimates in appropriate norms. These error estimates are shown to be robust in the case of large Lam\'e parameters and small permeability and storativity coefficients. To illustrate the effectiveness of the proposed method, we provide a few representative numerical examples, including convergence verification, poroelastic channel flow simulation, and tests of the robustness of block-diagonal preconditioners with respect to model parameters.
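
As background, one common steady (or time-discretized, with the time step absorbed into the coefficients) three-field form of the poroelasticity subproblem reads, in notation of ours rather than the paper's:

$$
\begin{aligned}
-\nabla\cdot\bigl(2\mu\,\boldsymbol{\varepsilon}(\boldsymbol{u}) + \lambda\,(\nabla\cdot\boldsymbol{u})\,\mathbf{I} - \alpha\, p\,\mathbf{I}\bigr) &= \boldsymbol{f}, \\
\kappa^{-1}\boldsymbol{q} + \nabla p &= \boldsymbol{0}, \\
c_0\, p + \alpha\,\nabla\cdot\boldsymbol{u} + \nabla\cdot\boldsymbol{q} &= g,
\end{aligned}
$$

with Lam\'e parameters $\mu, \lambda$, Biot coefficient $\alpha$, storativity $c_0$, and permeability $\kappa$; the filtration flux $\boldsymbol{q}$ is the field discretized in a divergence-conforming space, and the regime of large $\lambda$ with small $\kappa$ and $c_0$ is precisely the one in which the paper's estimates are shown to be robust.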

Perception systems operate as subcomponents of the general autonomy stack, and perception system designers often need to optimize performance characteristics while maintaining safety with respect to the overall closed-loop system. For this reason, it is useful to distill high-level safety requirements into component-level requirements on the perception system. In this work, we focus on efficiently determining sets of safe perception system performance characteristics given a black-box simulator of the fully integrated, closed-loop system. We combine the advantages of common black-box estimation techniques such as Gaussian processes and threshold bandits to develop a new estimation method, which we call smoothing bandits. We demonstrate our method on a vision-based aircraft collision avoidance problem and show improvements in terms of both accuracy and efficiency over the Gaussian process and threshold bandit baselines.
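
The flavor of this estimation problem can be conveyed with a hedged sketch (not the authors' smoothing-bandits algorithm): fit a Gaussian process to noisy closed-loop safety estimates over candidate perception parameters, and repeatedly query the candidate whose classification against the safety threshold is most ambiguous. All names and the higher-is-safer convention are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def estimate_safe_set(simulate, candidates, threshold, n_queries=50):
    """Sketch: `simulate` returns a noisy scalar safety estimate for one
    parameter vector; `candidates` is an (m, d) array of perception
    performance characteristics. Returns a Boolean safe/unsafe mask."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
    X, y = [], []
    idx = np.random.randint(len(candidates))  # random seed query
    for _ in range(n_queries):
        X.append(candidates[idx])
        y.append(simulate(candidates[idx]))
        gp.fit(np.array(X), np.array(y))
        mu, sd = gp.predict(candidates, return_std=True)
        # Next query: smallest margin-to-uncertainty ratio, i.e. the
        # candidate whose safe/unsafe label is least certain.
        idx = int(np.argmin(np.abs(mu - threshold) / (sd + 1e-9)))
    return mu >= threshold
```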
