
In information theory, there has been recent interest in studying the variability of uncertainty measures. In this regard, the concept of varentropy has been introduced and studied by several authors in the recent past. In this communication, we study the weighted varentropy and the weighted residual varentropy. Several theoretical properties of these variability measures, such as their behaviour under monotone transformations and bounds, are investigated. The advantage of the weighted residual varentropy over the residual varentropy is demonstrated. Further, we study the weighted varentropy for coherent systems and the weighted residual varentropy for proportional hazard rate models. A kernel-based non-parametric estimator for the weighted residual varentropy is also proposed. The estimation method is illustrated using simulated data and two real data sets.
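
To make the kernel-based estimation idea concrete, the following Python sketch computes a plug-in estimate of the (unweighted) residual varentropy, i.e. the variance of $-\log f_t(X)$ with $f_t(x)=f(x)/S(t)$ for $x>t$, from a Gaussian kernel density estimate. It is only an illustration under these simplifying assumptions; the paper's weighted definition and its proposed estimator are not reproduced, and the function name is hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def residual_varentropy_kde(sample, t, grid_size=2000):
    """Plug-in estimate of the residual varentropy at threshold t.

    Residual varentropy is taken here as Var(-log f_t(X) | X > t), where
    f_t(x) = f(x) / S(t), x > t, is the density of X given X > t.
    Illustrative sketch only, not the estimator proposed in the paper.
    """
    sample = np.asarray(sample, dtype=float)
    kde = gaussian_kde(sample)                      # kernel density estimate of f
    hi = sample.max() + 3 * sample.std()
    x = np.linspace(t, hi, grid_size)               # integration grid on (t, hi)
    f = kde(x)
    surv = np.trapz(f, x)                           # estimate of S(t) = P(X > t)
    f_res = f / surv                                # residual density f_t
    info = -np.log(np.clip(f_res, 1e-300, None))    # residual information content
    m1 = np.trapz(info * f_res, x)                  # E[-log f_t(X) | X > t]
    m2 = np.trapz(info**2 * f_res, x)               # E[(-log f_t(X))^2 | X > t]
    return m2 - m1**2

# Example: exponential data, for which the residual varentropy is constant in t.
rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=5000)
print(residual_varentropy_kde(data, t=1.0))
```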

Related content

The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess important conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models has remained unexplored. This article fills this gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models beyond those relying on fully-observed, discretized, or censored Gaussians. The result leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to prove that this family is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advancement enlarges the set of models that enable conjugate Bayesian inference to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.
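
For intuition, the following LaTeX sketch recalls the Gaussian special case that the article generalizes: with a Gaussian prior, the probit posterior kernel is a Gaussian density times a multivariate Gaussian CDF evaluated at a linear function of the parameter, which is precisely the unified skew-normal form. This is only a schematic of the known SUN result, not the SUE derivation of the paper.

```latex
% Schematic of the Gaussian/probit special case of the conjugacy result.
\[
  y_i \mid \beta \sim \mathrm{Bern}\{\Phi(x_i^{\top}\beta)\}, \qquad
  \beta \sim \mathrm{N}_p(\xi, \Omega),
\]
\[
  p(\beta \mid y) \;\propto\;
  \phi_p(\beta - \xi;\, \Omega)\,
  \prod_{i=1}^{n}\Phi\big\{(2y_i-1)\,x_i^{\top}\beta\big\}
  \;=\;
  \phi_p(\beta - \xi;\, \Omega)\,\Phi_n(D\beta;\, I_n),
\]
% where D has rows (2y_i - 1) x_i^T. A Gaussian density multiplied by a
% Gaussian CDF of a linear function of beta is the kernel of a SUN density;
% the paper shows that the analogous closure holds for skew-elliptical
% (SUE) priors and likelihoods.
```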

The Retinex theory models an image as the product of an illumination component and a reflection component; it has received extensive attention and is widely used in image enhancement, segmentation, and color restoration. However, it has rarely been used for additive noise removal because the Retinex model of a noisy image involves both multiplication and addition. In this paper, we propose an exponential Retinex decomposition model based on hybrid non-convex regularization and weak-space oscillation modeling for image denoising. The proposed model uses non-convex first-order total variation (TV) and non-convex second-order TV to regularize the reflection component and the illumination component, respectively, and employs the weak $H^{-1}$ norm to measure the residual component. By using different regularizers, the model effectively decomposes the image into reflection, illumination, and noise components. An alternating direction method of multipliers (ADMM) combined with a Majorize-Minimization (MM) algorithm is developed to solve the proposed model, and we provide a detailed proof of its convergence. Numerical experiments validate both the model and the algorithm. Compared with several state-of-the-art denoising models, the proposed model exhibits superior performance in terms of peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM).
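
The non-convex TV terms and the $H^{-1}$ data term are specific to the paper, but the outer iteration is a standard ADMM splitting. As a minimal, purely illustrative Python sketch (with a convex TV penalty and a quadratic data term, not the paper's model), here is scaled-form ADMM for 1-D total-variation denoising:

```python
import numpy as np

def admm_tv_denoise_1d(y, lam=1.0, rho=1.0, iters=200):
    """Scaled-form ADMM for 1-D total-variation denoising:
        min_x 0.5 * ||x - y||^2 + lam * ||D x||_1,
    with the splitting z = D x.  Illustrative only: the paper's model uses
    non-convex TV-type penalties handled by inner MM steps and an H^{-1}
    data term, which are not reproduced here.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # forward-difference operator, (n-1) x n
    x = y.copy()
    z = D @ x
    u = np.zeros(n - 1)                      # scaled dual variable
    A = np.eye(n) + rho * D.T @ D            # x-update system matrix
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)  # soft threshold
        u = u + Dx - z                       # dual update
    return x

# Example: denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.3], 60)
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = admm_tv_denoise_1d(noisy, lam=0.5, rho=2.0)
```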

Many well-known logical identities are naturally written as equivalences between contextual formulas. A simple example is the Boole-Shannon expansion $c[p] \equiv (p \wedge c[\mathrm{true}] ) \vee (\neg\, p \wedge c[\mathrm{false}] )$, where $c$ denotes an arbitrary formula with possibly multiple occurrences of a "hole", called a context, and $c[\varphi]$ denotes the result of "filling" all holes of $c$ with the formula $\varphi$. Another example is the unfolding rule $\mu X. c[X] \equiv c[\mu X. c[X]]$ of the modal $\mu$-calculus. We consider the modal $\mu$-calculus as the overarching temporal logic and, as usual, reduce the problem of whether $\varphi_1 \equiv \varphi_2$ holds for contextual formulas $\varphi_1, \varphi_2$ to the problem of whether $\varphi_1 \leftrightarrow \varphi_2$ is valid. We show that the problem of whether a contextual formula of the $\mu$-calculus is valid for all contexts can be reduced to validity of ordinary formulas. Our first result constructs a canonical context such that a formula is valid for all contexts iff it is valid for this particular one. However, the resulting ordinary formula is exponential in the nesting depth of the context variables. In a second result we overcome this problem, thus proving that validity of contextual formulas is EXP-complete, as for ordinary equivalences. We also prove that both results hold for CTL and LTL as well. We conclude the paper with some experimental results. In particular, we use our implementation to automatically prove the correctness of a set of six contextual equivalences of LTL recently introduced by Esparza et al. for the normalization of LTL formulas. While Esparza et al. need several pages of manual proof, our tool only needs milliseconds to do the job and to compute counterexamples for incorrect variants of the equivalences.
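
For propositional contexts the Boole-Shannon expansion can be checked mechanically by enumerating assignments. The following Python sketch does exactly that; the encoding of contexts as Python functions and the helper name are ours and purely illustrative (the paper deals with $\mu$-calculus, CTL, and LTL contexts).

```python
from itertools import product

# Brute-force check of the Boole-Shannon expansion
#     c[p] == (p and c[True]) or (not p and c[False])
# for propositional contexts.  A "context" is modelled as a Python function
# taking the formula plugged into the hole (itself a function of the
# assignment) and the assignment, and returning a truth value.

def check_boole_shannon(context, num_vars):
    for bits in product([False, True], repeat=num_vars):
        p = bits[0]                                       # the variable being expanded
        lhs = context(lambda a: a[0], bits)               # c[p]
        rhs = (p and context(lambda a: True, bits)) or \
              (not p and context(lambda a: False, bits))  # Shannon expansion
        if lhs != rhs:
            return False
    return True

# Example context with two holes: c[.] = (. or x1) and (not . or not x1).
ctx = lambda hole, a: (hole(a) or a[1]) and (not hole(a) or not a[1])
print(check_boole_shannon(ctx, num_vars=2))   # True: the expansion holds
```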

We consider the following generalization of the bin packing problem. We are given a set of items, each associated with a rational size in the interval [0,1], and a monotone non-decreasing, non-negative cost function f defined over the cardinalities of subsets of items. A feasible solution is a partition of the set of items into bins subject to the constraint that the total size of the items in every bin is at most 1. Unlike bin packing, the objective is to minimize the total cost of the bins, where the cost of a bin is the value of f applied to the cardinality of the subset of items packed into it. We present an APTAS for this strongly NP-hard problem. We also provide a complete complexity classification of the problem with respect to the choice of f.
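
As a purely illustrative Python sketch of the objective (not the APTAS), the following packs items with the naive Next-Fit rule and evaluates the cardinality-dependent cost; the function and the example cost f are hypothetical.

```python
import math

def next_fit_cost(sizes, f):
    """Pack items with Next-Fit and evaluate the cardinality-dependent cost.

    sizes : list of rational item sizes in [0, 1].
    f     : monotone non-decreasing, non-negative cost function on bin
            cardinalities; a bin with k items costs f(k).
    Illustration of the objective only -- the paper gives an APTAS, which
    this greedy heuristic is not.
    """
    bins = [[]]
    load = 0.0
    for s in sizes:
        if load + s <= 1.0:
            bins[-1].append(s)
            load += s
        else:
            bins.append([s])      # open a new bin
            load = s
    return sum(f(len(b)) for b in bins), bins

# Example: cost grows with the square root of the bin cardinality.
items = [0.4, 0.3, 0.3, 0.6, 0.2, 0.5, 0.5]
cost, packing = next_fit_cost(items, f=lambda k: math.sqrt(k))
print(cost, packing)
```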

We propose a topological mapping and localization system able to operate on real human colonoscopies despite significant shape and illumination changes. The map is a graph in which each node encodes a colon location by a set of real images, while edges represent traversability between nodes. For close-in-time images, where scene changes are minor, place recognition can be successfully handled with recent transformer-based local feature matching algorithms. However, under long-term changes -- such as different colonoscopies of the same patient -- feature-based matching fails. To address this, we train a deep global descriptor on real colonoscopies that achieves high recall under significant scene changes. The addition of a Bayesian filter boosts the accuracy of long-term place recognition, enabling relocalization in a previously built map. Our experiments show that ColonMapper is able to autonomously build a map and localize against it in two important use cases: localization within the same colonoscopy or within different colonoscopies of the same patient. Code: //github.com/jmorlana/ColonMapper.
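
To illustrate how a Bayesian filter can stabilize place recognition over a topological map, here is a generic Python sketch of one predict-update step over the graph nodes; the transition model, softmax likelihood, and names are illustrative assumptions, not ColonMapper's exact formulation.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One step of a discrete Bayesian filter over map nodes.

    belief      : (N,) current probability of being at each node.
    transition  : (N, N) row-stochastic matrix; transition[i, j] is the
                  probability of moving from node i to node j (e.g. favouring
                  the same node and its graph neighbours).
    likelihood  : (N,) observation likelihood of the current frame at each
                  node, e.g. a softmax over global-descriptor similarities.
    Illustrative sketch only -- not the exact filter used by ColonMapper.
    """
    predicted = transition.T @ belief          # motion / prediction step
    posterior = predicted * likelihood         # measurement update
    return posterior / posterior.sum()         # normalize

# Toy example with 4 nodes on a chain.
N = 4
T = 0.6 * np.eye(N) + 0.2 * (np.eye(N, k=1) + np.eye(N, k=-1))
T = T / T.sum(axis=1, keepdims=True)           # make rows sum to 1
belief = np.full(N, 1.0 / N)
sims = np.array([0.1, 0.2, 2.0, 0.3])          # descriptor similarities
likelihood = np.exp(sims) / np.exp(sims).sum() # softmax over similarities
belief = bayes_filter_step(belief, T, likelihood)
print(belief)
```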

We discuss computing with hierarchies of families of (potentially weighted) semiclassical Jacobi polynomials which arise in the construction of multivariate orthogonal polynomials. In particular, we outline how to build connection and differentiation matrices with optimal complexity and compute analysis and synthesis operations in quasi-optimal complexity. We investigate a particular application of these results to constructing orthogonal polynomials in annuli, called the generalised Zernike annular polynomials, which lead to sparse discretisations of partial differential equations. We compare against a scaled-and-shifted Chebyshev--Fourier series showing that in general the annular polynomials converge faster when approximating smooth functions and have better conditioning. We also construct a sparse spectral element method by combining disk and annulus cells, which is highly effective for solving PDEs with radially discontinuous variable coefficients and data.
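
As a naive illustration of radially weighted orthogonality on an annulus (not the semiclassical Jacobi machinery or its optimal-complexity operators), the following Python sketch orthonormalizes monomials in $r$ on $[\rho,1]$ against the weight $r\,\mathrm{d}r$ via a weighted QR factorization; the function name and weight choice are assumptions made for illustration.

```python
import numpy as np

def annulus_radial_ops(rho, degree, quad_pts=400):
    """Numerically orthonormalize 1, r, r^2, ... on [rho, 1] with weight r.

    For a fixed Fourier mode, radial factors of orthogonal polynomials on an
    annulus are orthogonal against an r-weighted inner product on [rho, 1].
    This naive weighted QR is only an illustration; the paper builds such
    polynomials from semiclassical Jacobi hierarchies with optimal-complexity
    connection and differentiation matrices.
    """
    # Gauss-Legendre quadrature mapped to [rho, 1].
    x, w = np.polynomial.legendre.leggauss(quad_pts)
    r = 0.5 * (1 - rho) * x + 0.5 * (1 + rho)
    w = 0.5 * (1 - rho) * w * r                     # include the weight r dr
    V = np.vander(r, degree + 1, increasing=True)   # monomial Vandermonde
    # Weighted QR gives coefficients of orthonormal radial polynomials.
    Q, R = np.linalg.qr(np.sqrt(w)[:, None] * V)
    coeffs = np.linalg.solve(R, np.eye(degree + 1)) # columns: poly coefficients
    return coeffs

coeffs = annulus_radial_ops(rho=0.5, degree=5)
```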

A new approach based on censoring and a moment criterion is introduced for parameter estimation of count distributions for which the probability generating function is available, even when a closed form of the probability mass function and/or finite moments do not exist.
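
As a rough Python sketch of estimating through the probability generating function (without the censoring device or the paper's specific moment criterion), one can match the empirical PGF $\hat G(s)=n^{-1}\sum_i s^{X_i}$ to the model PGF on a small grid in $(0,1)$; the Poisson example and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def empirical_pgf(sample, s):
    """Empirical probability generating function at the point(s) s."""
    return np.mean(np.power.outer(np.asarray(s), sample), axis=-1)

def fit_poisson_by_pgf(sample, s_grid=(0.2, 0.4, 0.6, 0.8)):
    """Estimate a Poisson rate by matching PGFs on a small grid in (0, 1).

    The Poisson PGF is G(s; lam) = exp(lam * (s - 1)).  This least-squares
    match only illustrates estimation through the PGF; the paper's censored
    moment criterion is more general and is not reproduced here.
    """
    s = np.asarray(s_grid)
    g_hat = empirical_pgf(sample, s)
    obj = lambda lam: np.sum((np.exp(lam * (s - 1.0)) - g_hat) ** 2)
    res = minimize_scalar(obj, bounds=(1e-6, 50.0), method="bounded")
    return res.x

rng = np.random.default_rng(2)
data = rng.poisson(lam=3.0, size=2000)
print(fit_poisson_by_pgf(data))   # should be close to 3.0
```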

We prove the convergence of a damped Newton's method for the nonlinear system resulting from a discretization of the second boundary value problem for the Monge-Ampère equation. The boundary condition is enforced through the notion of the asymptotic cone. The differential operator is discretized based on a discrete analogue of the subdifferential.
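
For orientation, a generic damped Newton iteration with residual-based backtracking looks as follows in Python; this is only a sketch of the damping idea on a toy system, not the paper's discretization of the Monge-Ampère operator or its convergence argument.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Damped Newton's method for a nonlinear system F(x) = 0.

    The step x + t*d with Newton direction d = -J(x)^{-1} F(x) is damped by
    backtracking until the residual norm decreases.  Generic sketch only;
    the paper's analysis concerns the specific discretization of the second
    boundary value problem for the Monge-Ampere equation.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(J(x), -r)           # Newton direction
        t = 1.0
        while np.linalg.norm(F(x + t * d)) >= np.linalg.norm(r) and t > 1e-8:
            t *= 0.5                            # damping / backtracking
        x = x + t * d
    return x

# Toy example: intersect a circle and a parabola.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
J = lambda x: np.array([[2*x[0], 2*x[1]], [-2*x[0], 1.0]])
print(damped_newton(F, J, x0=[1.0, 1.0]))
```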

One of the most promising applications of machine learning (ML) in computational physics is to accelerate the solution of partial differential equations (PDEs). The key objective of ML-based PDE solvers is to output a sufficiently accurate solution faster than standard numerical methods, which are used as a baseline comparison. We first perform a systematic review of the ML-for-PDE solving literature. Of articles that use ML to solve a fluid-related PDE and claim to outperform a standard numerical method, we determine that 79% (60/76) compare to a weak baseline. Second, we find evidence that reporting biases, especially outcome reporting bias and publication bias, are widespread. We conclude that ML-for-PDE solving research is overoptimistic: weak baselines lead to overly positive results, while reporting biases lead to underreporting of negative results. To a large extent, these issues appear to be caused by factors similar to those of past reproducibility crises: researcher degrees of freedom and a bias towards positive results. We call for bottom-up cultural changes to minimize biased reporting as well as top-down structural reforms intended to reduce perverse incentives for doing so.

In the finite difference approximation of the fractional Laplacian, the stiffness matrix is typically dense and needs to be approximated numerically. The effect of the accuracy of this matrix approximation on the accuracy of the whole computation is analyzed and shown to be significant. Four such approximations are discussed. While all of them are shown to work well with the recently developed grid-overlay finite difference method (GoFD) for the numerical solution of boundary value problems of the fractional Laplacian, they differ in accuracy, cost of computation, preconditioning performance, and asymptotic decay away from the diagonal. In addition, two preconditioners, based on sparse and circulant matrices, are discussed for the iterative solution of the linear systems associated with the stiffness matrix. Numerical results in two and three dimensions are presented.
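
As a generic Python sketch of the circulant idea (not the paper's preconditioners for the GoFD stiffness matrix), the following builds Strang's circulant approximation of a symmetric Toeplitz matrix and applies its inverse with FFTs; the matrix used in the example is a made-up Toeplitz column with slow off-diagonal decay.

```python
import numpy as np

def strang_circulant_eigs(t_col):
    """Eigenvalues of the Strang circulant preconditioner for a symmetric
    Toeplitz matrix with first column t_col.  Illustrative only: the paper's
    preconditioners act on the dense stiffness matrix of the fractional
    Laplacian, which is not reproduced here.
    """
    n = len(t_col)
    c = np.array(t_col, dtype=float)
    # Strang's construction: keep the central band, wrap it around.
    c[n // 2 + 1:] = t_col[1:n - n // 2][::-1]
    return np.fft.fft(c).real                   # circulant eigenvalues via FFT

def apply_preconditioner(eigs, r):
    """Apply C^{-1} r using FFTs (C circulant with eigenvalues eigs)."""
    return np.fft.ifft(np.fft.fft(r) / eigs).real

# Toy symmetric Toeplitz column resembling slow off-diagonal decay.
n = 64
t_col = 1.0 / (1.0 + np.arange(n)) ** 1.5
t_col[0] = 4.0
eigs = strang_circulant_eigs(t_col)
r = np.random.default_rng(3).standard_normal(n)
z = apply_preconditioner(eigs, r)
```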
