
Defining a successful notion of a multivariate quantile has been an open problem for more than half a century, motivating a plethora of possible solutions. Of these, the approach of [8] and [25], which leads to M-quantiles, is very appealing for its mathematical elegance, combining elements of convex analysis and probability theory. The key idea is the description of a convex function (the K-function) whose gradient (the K-transform) defines a one-to-one correspondence between all of R^d and the unit ball in R^d. By analogy with the d=1 case, where the K-transform is a cumulative-distribution-function-like object (an M-distribution), the guaranteed existence of its inverse naturally provides the basis for defining a quantile function for all d>=1. Over the past twenty years the resulting M-quantiles have seen applications in a variety of fields, primarily for the purpose of detecting outliers in multidimensional spaces. In this article we prove that for odd d>=3, it is not the gradient but a poly-Laplacian of the K-function that is (almost everywhere) proportional to the density function. For d even, one cannot establish a differential equation connecting the K-function with the density. These results show that using the K-transform for outlier detection in higher odd dimensions is in principle flawed, as the K-transform does not originate from inversion of a true M-distribution. We demonstrate these conclusions in two dimensions through examples from non-standard asymmetric distributions. Our examples illustrate a feature of the K-transform whereby regions of the domain with higher density map to larger volumes in the co-domain, thereby producing a magnification effect that moves inliers closer to the boundary of the co-domain than outliers. This feature obviously disrupts any outlier detection mechanism that relies on the inverse K-transform.
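To make the mapping concrete, here is a minimal numerical sketch. It assumes, purely as a stand-in (the constructions of [8] and [25] are not reproduced here), that the gradient map takes the geometric-quantile form grad K(x) = E[(x - Y)/||x - Y||], one standard way to obtain a map from R^d into the unit ball. The sample distribution and function names are ours; the sketch only shows how such a gradient map sends points into the unit ball, not the paper's magnification result.

```python
import numpy as np

def empirical_k_gradient(x, sample, eps=1e-12):
    """Empirical gradient map x -> mean_i (x - y_i) / ||x - y_i||.

    Being an average of unit vectors, the result always lies in the
    closed unit ball, mimicking the role of the K-transform.
    """
    diffs = x[None, :] - sample                      # shape (n, d)
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    units = diffs / np.maximum(norms, eps)
    return units.mean(axis=0)

rng = np.random.default_rng(0)
# Asymmetric 2-d sample: independent exponential marginals.
sample = rng.exponential(scale=1.0, size=(5000, 2))

for label, pt in [("near the mode", np.array([0.5, 0.5])),
                  ("far in the tail", np.array([6.0, 6.0]))]:
    g = empirical_k_gradient(pt, sample)
    print(f"{label}: K-transform ~ {np.round(g, 3)}, norm = {np.linalg.norm(g):.3f}")
```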

Related Content

Randomized matrix algorithms have become workhorse tools in scientific computing and machine learning. To use these algorithms safely in applications, they should be coupled with posterior error estimates to assess the quality of the output. To meet this need, this paper proposes two diagnostics: a leave-one-out error estimator for randomized low-rank approximations and a jackknife resampling method to estimate the variance of the output of a randomized matrix computation. Both diagnostics are rapid to compute for randomized low-rank approximation algorithms such as the randomized SVD and randomized Nyström approximation, and they provide useful information for assessing the quality of the computed output and guiding algorithmic parameter choices.
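The sketch below illustrates the jackknife idea in its simplest, brute-force form: it recomputes a randomized SVD with each sketch column left out and reports the spread of the replicates next to the actual error. This is a didactic stand-in, not the paper's diagnostics, and all names and parameters are our own.

```python
import numpy as np

def randomized_svd(A, sketch, rank):
    """Rank-`rank` randomized SVD of A using the given Gaussian sketch."""
    Y = A @ sketch                         # range sketch
    Q, _ = np.linalg.qr(Y)
    B = Q.T @ A
    U, sig, Vt = np.linalg.svd(B, full_matrices=False)
    return ((Q @ U[:, :rank]) * sig[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
m, n, rank, s = 200, 150, 10, 30
A = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
A += 1e-2 * rng.standard_normal((m, n))    # low-rank matrix plus small noise

sketch = rng.standard_normal((n, s))
Ahat = randomized_svd(A, sketch, rank)

# Brute-force jackknife: recompute the approximation with each sketch column
# deleted and measure the spread of the replicates.
replicates = [randomized_svd(A, np.delete(sketch, j, axis=1), rank) for j in range(s)]
mean_rep = sum(replicates) / s
jack_var = (s - 1) / s * sum(np.linalg.norm(R - mean_rep, "fro") ** 2 for R in replicates)
print("actual error :", np.linalg.norm(A - Ahat, "fro"))
print("jackknife SE :", np.sqrt(jack_var))
```

The point of the paper's estimators is precisely to avoid the extra factorizations performed naively here, so this loop should be read as a definition of the target quantity rather than as an algorithm.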

We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
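As a point of reference for what elementary-function approximations to Bessel zeros look like, the sketch below compares SciPy's computed zeros of J_2 with McMahon's classical first-order asymptotic approximation j_{nu,k} ≈ beta - (4 nu^2 - 1)/(8 beta), beta = (k + nu/2 - 1/4) pi. This well-known expansion is used only for illustration; it is not one of the paper's bounds.

```python
import numpy as np
from scipy.special import jn_zeros

def mcmahon(nu, k):
    """McMahon's first-order asymptotic approximation to the k-th positive zero of J_nu."""
    beta = (k + nu / 2 - 0.25) * np.pi
    return beta - (4 * nu**2 - 1) / (8 * beta)

nu = 2
zeros = jn_zeros(nu, 5)                    # first five positive zeros of J_2
for k, z in enumerate(zeros, start=1):
    approx = mcmahon(nu, k)
    print(f"k={k}: scipy={z:.6f}  McMahon={approx:.6f}  diff={abs(z - approx):.2e}")
```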

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.

We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
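A rough sketch of the annealed Langevin sampling step is given below, on a symmetric continuous relaxation of the adjacency matrix and with a placeholder score function. In the proposed method the score would instead come from a graph neural network trained on a graph dataset, and the update would also incorporate the Gaussian likelihood; everything named here is hypothetical and only shows the structure of the iteration x <- x + eps * score(x) + sqrt(2 eps) * noise.

```python
import numpy as np

def placeholder_score(A_soft):
    """Stand-in for the learned GNN score of the graph prior (hypothetical).

    It simply pushes entries toward 0 or 1, mimicking a prior that favours
    sparse, near-binary adjacency matrices.
    """
    return (A_soft > 0.5).astype(float) - A_soft

def annealed_langevin(d, noise_levels, steps_per_level=50, step0=0.01, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(d, d))
    A = (A + A.T) / 2                                   # symmetric relaxation
    for sigma in noise_levels:                          # decreasing noise schedule
        step = step0 * (sigma / noise_levels[-1]) ** 2
        for _ in range(steps_per_level):
            noise = rng.standard_normal((d, d))
            noise = (noise + noise.T) / 2
            A = A + step * placeholder_score(A) + np.sqrt(2 * step) * noise
    return A

A_sample = annealed_langevin(d=8, noise_levels=[1.0, 0.5, 0.25, 0.1])
print((A_sample > 0.5).astype(int))                     # thresholded support estimate
```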

Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effects of modifiable risk factors on diseases from observational data. One of the major challenges in Mendelian randomization is that many genetic variants are only modestly or even weakly associated with the risk factor of interest, a setting known as many weak instruments. Many existing methods, such as the popular inverse-variance weighted (IVW) method, can be biased when the instrument strength is weak. To address this issue, the debiased IVW (dIVW) estimator, which is shown to be robust to many weak instruments, was recently proposed. However, this estimator still has non-ignorable bias when the effective sample size is small. In this paper, we propose a modified debiased IVW (mdIVW) estimator by multiplying the original dIVW estimator by a modification factor. After this simple correction, we show that the bias of the mdIVW estimator converges to zero at a faster rate than that of the dIVW estimator under some regularity conditions. Moreover, the mdIVW estimator has a smaller variance than the dIVW estimator. We further extend the proposed method to account for instrumental variable selection and balanced horizontal pleiotropy. We demonstrate the improvement of the mdIVW estimator over the dIVW estimator through extensive simulation studies and real data analysis.
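For orientation, the classical IVW and debiased IVW estimators based on summary statistics (beta_x, se_x) for variant-exposure effects and (beta_y, se_y) for variant-outcome effects can be sketched as follows. The modification factor of the proposed mdIVW estimator is not reproduced here, and the toy simulation parameters are our own.

```python
import numpy as np

def ivw(beta_x, beta_y, se_y):
    """Classical inverse-variance weighted (IVW) ratio estimator."""
    w = 1.0 / se_y**2
    return np.sum(w * beta_x * beta_y) / np.sum(w * beta_x**2)

def divw(beta_x, beta_y, se_x, se_y):
    """Debiased IVW: subtracts the variance of beta_x in the denominator."""
    w = 1.0 / se_y**2
    return np.sum(w * beta_x * beta_y) / np.sum(w * (beta_x**2 - se_x**2))

# Toy many-weak-instruments simulation with p variants and true effect theta.
rng = np.random.default_rng(1)
p, theta = 1000, 0.2
gamma = rng.normal(0, 0.02, size=p)        # weak true variant-exposure effects
se_x = np.full(p, 0.01)
se_y = np.full(p, 0.01)
beta_x = gamma + rng.normal(0, se_x)       # estimated variant-exposure effects
beta_y = theta * gamma + rng.normal(0, se_y)

print("IVW :", ivw(beta_x, beta_y, se_y))
print("dIVW:", divw(beta_x, beta_y, se_x, se_y))
print("true:", theta)
```

In this regime the IVW estimate is noticeably attenuated toward zero, while the debiased denominator removes most of that weak-instrument bias.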

Genome assembly is a prominent problem in bioinformatics, in which the source string is computed from a set of its overlapping substrings. Classically, genome assembly uses assembly graphs built from this set of substrings to compute the source string efficiently, trading off scalability against information loss. The scalable de Bruijn graphs come at the price of losing crucial overlap information, whereas the complete overlap information is stored in overlap graphs using quadratic space. Hierarchical overlap graphs (HOG) [IPL20] overcome these limitations, avoiding information loss while using only linear space. After a series of suboptimal improvements, Khan and Park et al. simultaneously presented two optimal algorithms [CPM2021], of which only the former was seemingly practical. We empirically analyze all the practical algorithms for computing HOG; the optimal algorithm [CPM2021] outperforms the previous algorithms as expected, though at the expense of extra memory. However, it uses a non-intuitive approach and non-trivial data structures. We present arguably the most intuitive algorithm, using only elementary arrays, which is also optimal. Empirically, our algorithm performs even better in both time and memory than all the other algorithms, highlighting its significance in both theory and practice. We further explore the applications of hierarchical overlap graphs to solve various forms of suffix-prefix queries on a set of strings. Loukides et al. [CPM2023] recently presented state-of-the-art algorithms for these queries; however, these algorithms require complex black-box data structures and are seemingly impractical. Our algorithms, despite not matching the state-of-the-art theoretically, answer the different queries in 0.01-100 milliseconds on a dataset of around a billion characters.
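To make the stored object concrete, the sketch below computes all-pairs longest suffix-prefix overlaps for a toy set of reads by brute force. This is exactly the information an overlap graph holds in quadratic space and a hierarchical overlap graph represents in linear space; the sketch has none of the optimal algorithms' guarantees and is purely illustrative.

```python
def longest_suffix_prefix(a: str, b: str) -> int:
    """Length of the longest proper suffix of `a` that is a prefix of `b`."""
    best = 0
    for k in range(1, min(len(a), len(b))):        # proper overlap: k < len(a), len(b)
        if a[-k:] == b[:k]:
            best = k
    return best

reads = ["aggtcg", "tcgatt", "attagg", "gattag"]
overlaps = {(i, j): longest_suffix_prefix(reads[i], reads[j])
            for i in range(len(reads)) for j in range(len(reads)) if i != j}
for (i, j), k in sorted(overlaps.items()):
    if k:
        print(f"{reads[i]} -> {reads[j]}: overlap {k} ({reads[j][:k]})")
```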

For the stochastic heat equation with multiplicative noise we consider the problem of estimating the diffusivity parameter in front of the Laplace operator. Based on local observations in space, we first study an estimator that was derived for additive noise. A stable central limit theorem shows that this estimator is consistent and asymptotically mixed normal. By taking into account the quadratic variation, we propose two new estimators. Their limiting distributions exhibit a smaller (conditional) variance, and the latter estimator also works for vanishing noise levels. The proofs are based on local approximation results to overcome the intricate nonlinearities and on a stable central limit theorem for stochastic integrals with respect to cylindrical Brownian motion. Simulation results illustrate the theoretical findings.
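For context, a minimal explicit Euler-Maruyama finite-difference simulation of a stochastic heat equation with a simple multiplicative noise coefficient is sketched below. The discretization, the noise coefficient sigma(X) = X, and all parameters are illustrative choices of ours rather than the paper's setup; such simulated paths are the kind of data on which local-observation estimators act.

```python
import numpy as np

def simulate_heat_spde(theta=0.5, n_space=100, n_time=20000, T=0.1, seed=0):
    """Explicit Euler-Maruyama scheme for dX = theta * X_xx dt + X dW on [0, 1],
    with zero Dirichlet boundary conditions and space-time white noise W."""
    rng = np.random.default_rng(seed)
    dx, dt = 1.0 / n_space, T / n_time
    assert dt <= dx**2 / (2 * theta), "explicit scheme needs dt <= dx^2 / (2 theta)"
    x = np.linspace(0, 1, n_space + 1)
    X = np.sin(np.pi * x)                                  # initial condition
    for _ in range(n_time):
        lap = np.zeros_like(X)
        lap[1:-1] = (X[2:] - 2 * X[1:-1] + X[:-2]) / dx**2
        noise = rng.standard_normal(X.shape) * np.sqrt(dt / dx)
        X = X + theta * lap * dt + X * noise               # multiplicative noise sigma(X) = X
        X[0] = X[-1] = 0.0
    return x, X

x, X_T = simulate_heat_spde()
print("field at final time, first few grid values:", np.round(X_T[:5], 4))
```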

In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, ...) are often limited to goals that lie exactly within some expected logical fragment. This very often prevents users from applying these tactics in other, even similar, contexts. This paper discusses the design and the implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how this significantly expands the power of automation engines.

By computing a feedback control via the linear quadratic regulator (LQR) approach and simulating a non-linear non-autonomous closed-loop system using this feedback, we combine two numerically challenging tasks. For the first task, the computation of the feedback control, we use the non-autonomous generalized differential Riccati equation (DRE), whose solution determines the time-varying feedback gain matrix. Regarding the second task, we want to be able to simulate non-linear closed-loop systems for which it is known that the regulator is only valid for sufficiently small perturbations. Thus, one easily runs into numerical issues in the integrators when the closed-loop control varies greatly. For these systems, e.g., the A-stable implicit Euler method fails.
On the one hand, we implement non-autonomous versions of splitting schemes and BDF methods for the solution of our non-autonomous DREs. These are well-established DRE solvers in the autonomous case. On the other hand, to tackle the numerical issues in the simulation of the non-linear closed-loop system, we apply a fractional-step-theta scheme with time-adaptivity tuned specifically to this kind of challenge. That is, we additionally base the time-adaptivity on the activity of the control. We compare this approach to the more classical error-based time-adaptivity.
We describe techniques to make these two tasks computable in a reasonable amount of time and are able to simulate closed-loop systems with strongly varying controls while avoiding numerical issues. Our time-adaptivity approach requires fewer time steps than the error-based alternative and is more reliable.
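The coupling of the two tasks can be illustrated on a small autonomous toy problem: first integrate the differential Riccati equation -P' = A^T P + P A - P B R^{-1} B^T P + Q backward in time, then simulate a mildly nonlinear closed-loop system with the time-varying feedback u = -R^{-1} B^T P(t) x. The sketch below uses a plain explicit Euler step for both tasks, purely for brevity; the splitting/BDF DRE solvers, the fractional-step-theta scheme, and the control-activity-based adaptivity of the paper are not reproduced.

```python
import numpy as np

# Small LTI data; a mild nonlinearity is added only to the closed-loop simulation.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
T, n_steps = 5.0, 5000
dt = T / n_steps

# 1) Solve the DRE backward in time (explicit Euler in reversed time, terminal value P(T) = 0).
P = np.zeros((n_steps + 1, 2, 2))
for k in range(n_steps, 0, -1):
    Pk = P[k]
    dP = A.T @ Pk + Pk @ A - Pk @ B @ np.linalg.solve(R, B.T) @ Pk + Q
    P[k - 1] = Pk + dt * dP                # step from t_k back to t_{k-1}

# 2) Simulate the nonlinear closed-loop system with the time-varying feedback gain.
def f_nl(x):                               # illustrative cubic perturbation
    return np.array([0.0, -0.1 * x[0] ** 3])

x = np.array([1.0, 0.0])
for k in range(n_steps):
    u = -np.linalg.solve(R, B.T @ P[k] @ x)
    x = x + dt * (A @ x + f_nl(x) + (B @ u).ravel())
print("state at final time:", x)
```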

Accurate triangulation of the domain plays a pivotal role in computing numerical approximations of differential operators, and a good triangulation is one that helps reduce discretization errors. In a standard collocation technique, a smooth curved domain is typically triangulated by taking points on the boundary and approximating the curved boundary by polygons. However, such an approach often leads to geometrical errors that directly affect the accuracy of the numerical approximation. To restrict such geometrical errors, isoparametric, subparametric, and isogeometric methods were introduced, which allow the approximation of curved surfaces (or curved line segments). In this paper, we present an efficient finite element method to approximate the solution to the elliptic boundary value problem (BVP) that governs the response of an elastic solid containing a v-notch and inclusions. The algebraically nonlinear constitutive equation, together with the balance of linear momentum, reduces to a second-order quasi-linear elliptic partial differential equation. Our approach allows us to represent complex curved boundaries by a smooth one-of-its-kind point transformation. The main idea is to obtain higher-order shape functions that enable us to compute the entries of the finite element matrices and vectors accurately. A Picard-type linearization is utilized to handle the nonlinearities in the governing differential equation. The numerical results for the test cases show a considerable improvement in accuracy.
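The Picard-type linearization can be illustrated on a one-dimensional model problem -(a(u) u')' = f with linear finite elements: freeze the coefficient at the previous iterate, assemble and solve the resulting linear system, and repeat until the update is small. The coefficient a(u) = 1 + u^2, the right-hand side, and the mesh below are illustrative choices of ours, not the paper's higher-order curved-boundary formulation.

```python
import numpy as np

def picard_1d(n_el=100, tol=1e-10, max_iter=50):
    """Picard iteration for -(a(u) u')' = f on (0, 1), u(0) = u(1) = 0,
    with a(u) = 1 + u^2, f = 1, and linear (P1) finite elements."""
    h = 1.0 / n_el
    nodes = np.linspace(0, 1, n_el + 1)
    u = np.zeros(n_el + 1)
    f = np.ones(n_el + 1)
    for it in range(max_iter):
        K = np.zeros((n_el + 1, n_el + 1))
        F = np.zeros(n_el + 1)
        for e in range(n_el):                        # element-by-element assembly
            u_mid = 0.5 * (u[e] + u[e + 1])          # freeze coefficient at old iterate
            a_mid = 1.0 + u_mid**2
            ke = a_mid / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
            fe = f[e:e + 2] * h / 2                  # lumped load vector
            K[e:e + 2, e:e + 2] += ke
            F[e:e + 2] += fe
        # Dirichlet boundary conditions u(0) = u(1) = 0.
        K[0, :], K[-1, :] = 0.0, 0.0
        K[0, 0] = K[-1, -1] = 1.0
        F[0] = F[-1] = 0.0
        u_new = np.linalg.solve(K, F)
        if np.linalg.norm(u_new - u, np.inf) < tol:
            return nodes, u_new, it + 1
        u = u_new
    return nodes, u, max_iter

nodes, u, iters = picard_1d()
print(f"converged in {iters} Picard iterations, max |u| = {u.max():.4f}")
```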
