
We present a correlated \textit{and} gate which may be used to propagate uncertainty and dependence through Boolean functions, since any Boolean function may be expressed as a combination of \textit{and} and \textit{not} operations. We argue that the \textit{and} gate is a bivariate copula family, which can be interpreted as constructing bivariate Bernoulli random variables with given marginal probabilities and Pearson correlation coefficient. We show how this copula family may be used to propagate uncertainty in the form of probabilities of events, probability intervals, and probability boxes, with only partial or no knowledge of the dependency between events, expressed as an interval for the correlation coefficient. These results generalise previous results by Fr\'echet on the conjunction of two events with unknown dependencies. We show an application propagating uncertainty through a fault tree for a pressure tank. This paper comes with an open-source Julia library for performing uncertainty logic.
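
As a minimal Python sketch of the underlying relation (illustrative only, not the paper's Julia library API): for Bernoulli events with marginals $p_A$, $p_B$ and Pearson correlation $\rho$, $P(A \wedge B) = p_A p_B + \rho\sqrt{p_A(1-p_A)\,p_B(1-p_B)}$, while a completely unknown dependence yields the Fr\'echet interval.

```python
import math

def and_gate(p_a, p_b, rho):
    """P(A and B) for Bernoulli events with marginals p_a, p_b and Pearson
    correlation rho (rho must keep all joint probabilities within [0, 1])."""
    return p_a * p_b + rho * math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

def frechet_interval(p_a, p_b):
    """Frechet bounds on P(A and B) when the dependence is completely unknown."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

print(and_gate(0.9, 0.8, 0.0))     # 0.72, independence recovers the product rule
print(frechet_interval(0.9, 0.8))  # approximately (0.7, 0.8)
```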

Related Content

In a completely randomized experiment, the variances of treatment effect estimators in the finite population are usually not identifiable and hence not estimable. Although some estimable bounds of the variances have been established in the literature, few of them are derived in the presence of covariates. In this paper, the difference-in-means estimator and the Wald estimator are considered in the completely randomized experiment with perfect compliance and noncompliance, respectively. Sharp bounds for the variances of these two estimators are established when covariates are available. Furthermore, consistent estimators for such bounds are obtained, which can be used to shorten the confidence intervals and improve the power of tests. Confidence intervals whose coverage rates are uniformly asymptotically guaranteed are constructed based on the consistent estimators of the upper bounds. Simulations are conducted to evaluate the proposed methods, which are also illustrated with two real data analyses.
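
For context, a minimal sketch of the classical covariate-free case that such bounds sharpen: the Neyman variance of the difference-in-means estimator contains an unidentifiable term, and dropping it gives the conservative upper bound $s_1^2/n_1 + s_0^2/n_0$. The function and variable names below are illustrative, not the paper's covariate-adjusted estimators.

```python
import numpy as np

def neyman_conservative_ci(y_treat, y_ctrl, z=1.96):
    """Difference-in-means estimate with the classical conservative
    (covariate-free) variance bound s1^2/n1 + s0^2/n0; z=1.96 gives a
    nominal 95% interval."""
    y_treat, y_ctrl = np.asarray(y_treat), np.asarray(y_ctrl)
    tau_hat = y_treat.mean() - y_ctrl.mean()
    v_upper = y_treat.var(ddof=1) / len(y_treat) + y_ctrl.var(ddof=1) / len(y_ctrl)
    half = z * np.sqrt(v_upper)
    return tau_hat, (tau_hat - half, tau_hat + half)
```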

With model trustworthiness being crucial for sensitive real-world applications, practitioners are putting more and more focus on improving the uncertainty calibration of deep neural networks. Calibration errors are designed to quantify the reliability of probabilistic predictions, but their estimators are usually biased and inconsistent. In this work, we introduce the framework of proper calibration errors, which relates every calibration error to a proper score and provides a respective upper bound with optimal estimation properties. This relationship can be used to reliably quantify the model calibration improvement. We theoretically and empirically demonstrate the shortcomings of commonly used estimators compared to our approach. Due to the wide applicability of proper scores, this gives a natural extension of recalibration beyond classification.
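
As an illustration of the kind of estimator the abstract criticises, here is the standard binned Expected Calibration Error in Python; the binning scheme and names are assumptions for illustration, not the authors' proposed proper calibration errors.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    """Standard binned Expected Calibration Error for top-label confidences;
    this widely used estimator is biased and inconsistent, which is the issue
    proper calibration errors address."""
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece
```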

We propose a symbolic execution method for programs that can draw random samples. In contrast to existing work, our method can verify randomized programs with unknown inputs and can prove probabilistic properties that universally quantify over all possible inputs. Our technique augments standard symbolic execution with a new class of \emph{probabilistic symbolic variables}, which represent the results of random draws, and computes symbolic expressions representing the probability of taking individual paths. We implement our method on top of the \textsc{KLEE} symbolic execution engine alongside multiple optimizations and use it to prove properties about probabilities and expected values for a range of challenging case studies written in C++, including Freivalds' algorithm, randomized quicksort, and a randomized property-testing algorithm for monotonicity. We evaluate our method against \textsc{Psi}, an exact probabilistic symbolic inference engine, and \textsc{Storm}, a probabilistic model checker, and show that our method significantly outperforms both tools.
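
To convey the flavour of path probabilities (though not the symbolic, all-inputs reasoning of the actual engine), here is a toy Python sketch that enumerates the resolutions of a program's fair-coin draws and sums the probability mass reaching each terminal label; the program and helper names are hypothetical.

```python
from fractions import Fraction
from itertools import product

def path_probabilities(program, n_flips):
    """Enumerate every resolution of a program's fair-coin draws and sum the
    probability mass reaching each terminal label."""
    probs = {}
    for draws in product([0, 1], repeat=n_flips):
        label = program(draws)
        probs[label] = probs.get(label, Fraction(0)) + Fraction(1, 2 ** n_flips)
    return probs

# Toy randomized check that wrongly rejects only when both coins land 1.
def toy_check(draws):
    return "reject" if draws == (1, 1) else "accept"

print(path_probabilities(toy_check, 2))  # accept: 3/4, reject: 1/4
```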

The asymptotic study of the partition function $p(n)$ began with the work of Hardy and Ramanujan. Later Rademacher obtained a convergent series for $p(n)$, and an error bound was given by Lehmer. Despite these results, a full asymptotic expansion for $p(n)$ with an explicit error bound is not known. Recently, O'Sullivan studied the asymptotic expansion of $p^{k}(n)$, the number of partitions into $k$th powers (a study initiated by Wright), and consequently obtained an asymptotic expansion for $p(n)$ along with a concise description of the coefficients involved in the expansion, but without any estimation of the error term. Here we carry out a detailed and comprehensive analysis of the error term obtained by truncating the asymptotic expansion for $p(n)$ at any positive integer $n$. This gives rise to an infinite family of inequalities for $p(n)$ which finally answers a question proposed by Chen. Our error-term estimate relies predominantly on applications of algorithmic methods from symbolic summation.
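
For orientation, the leading term that these expansions refine is the classical Hardy-Ramanujan asymptotic $p(n) \sim \frac{1}{4n\sqrt{3}}\, e^{\pi\sqrt{2n/3}}$ as $n \to \infty$; Rademacher's convergent series and the expansions above sharpen this with explicit lower-order terms.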

Point-interactive image colorization aims to colorize grayscale images when a user provides the colors for specific locations. It is essential for point-interactive colorization methods to appropriately propagate user-provided colors (i.e., user hints) across the entire image to obtain a reasonably colorized image with minimal user effort. However, existing approaches often produce partially colorized results due to the inefficient design of stacking convolutional layers to propagate hints to distant relevant regions. To address this problem, we present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions, leveraging the global receptive field of Transformers. The self-attention mechanism of Transformers enables iColoriT to selectively colorize relevant regions with only a few local hints. Our approach colorizes images in real time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture. Also, to mitigate the artifacts caused by pixel shuffling with large upsampling ratios, we present the local stabilizing layer. Extensive quantitative and qualitative results demonstrate that our approach substantially outperforms existing methods for point-interactive colorization, producing accurately colorized images with minimal user effort. Official code is available at //pmh9960.github.io/research/iColoriT
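
As a sketch of how pixel shuffling can replace a learned decoder (a minimal PyTorch example with an assumed feature dimension and upsampling ratio, not the authors' exact head):

```python
import torch
import torch.nn as nn

class PixelShuffleHead(nn.Module):
    """Hypothetical output head: a 1x1 projection to r*r*3 channels followed
    by PixelShuffle upsampling, in place of a learned convolutional decoder."""
    def __init__(self, dim=256, upscale=16):
        super().__init__()
        self.proj = nn.Conv2d(dim, 3 * upscale ** 2, kernel_size=1)
        self.shuffle = nn.PixelShuffle(upscale)

    def forward(self, feats):                   # feats: (B, dim, H/r, W/r)
        return self.shuffle(self.proj(feats))   # (B, 3, H, W)

head = PixelShuffleHead()
print(head(torch.randn(1, 256, 14, 14)).shape)  # torch.Size([1, 3, 224, 224])
```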

Variational Bayesian posterior inference often requires simplifying approximations such as mean-field parametrisation to ensure tractability. However, prior work has associated the variational mean-field approximation for Bayesian neural networks with underfitting in the case of small datasets or large model sizes. In this work, we show that invariances in the likelihood function of over-parametrised models contribute to this phenomenon because these invariances complicate the structure of the posterior by introducing discrete and/or continuous modes which cannot be well approximated by Gaussian mean-field distributions. In particular, we show that the mean-field approximation has an additional gap in the evidence lower bound compared to a purpose-built posterior that takes into account the known invariances. Importantly, this invariance gap is not constant; it vanishes as the approximation reverts to the prior. We proceed by first considering translation invariances in a linear model with a single data point in detail. We show that, while the true posterior can be constructed from a mean-field parametrisation, this is achieved only if the objective function takes into account the invariance gap. Then, we transfer our analysis of the linear model to neural networks. Our analysis provides a framework for future work to explore solutions to the invariance problem.
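
A concrete instance of such an invariance (a toy illustration of the phenomenon, not necessarily the paper's exact example): in the over-parametrised linear model $y = (w_1 + w_2)x + \varepsilon$ with a single data point, the likelihood is unchanged under $(w_1, w_2) \mapsto (w_1 + c, w_2 - c)$, so with an isotropic Gaussian prior the exact posterior is a correlated Gaussian elongated along this direction, which a factorized (mean-field) Gaussian cannot match without an additional gap in the evidence lower bound.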

Background: Instrumental variables (IVs) can be used to provide evidence as to whether a treatment X has a causal effect on an outcome Y. Even if the instrument Z satisfies the three core IV assumptions of relevance, independence and the exclusion restriction, further assumptions are required to identify the average causal effect (ACE) of X on Y. Sufficient assumptions for this include: homogeneity in the causal effect of X on Y; homogeneity in the association of Z with X; and no effect modification (NEM). Methods: We describe the NO Simultaneous Heterogeneity (NOSH) assumption, which requires the heterogeneity in the X-Y causal effect to be mean independent of (i.e., uncorrelated with) both Z and heterogeneity in the Z-X association. This happens, for example, if there are no common modifiers of the X-Y effect and the Z-X association, and the X-Y effect is additive linear. We illustrate NOSH using simulations and by re-examining selected published studies. Results: When NOSH holds, the Wald estimand equals the ACE even if both homogeneity assumptions and NEM (which we demonstrate to be special cases of - and therefore stronger than - NOSH) are violated. Conclusions: NOSH is sufficient for identifying the ACE using IVs. Since NOSH is weaker than existing assumptions for ACE identification, identifying the ACE using IVs may be more plausible than previously anticipated.
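
For reference, the Wald estimand discussed here is the ratio of the reduced-form to the first-stage contrast for a binary instrument $Z$: $\frac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[X \mid Z=1] - E[X \mid Z=0]}$.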

Model mismatches prevail in real-world applications. Hence it is important to design robust safe control algorithms for systems with uncertain dynamic models. The major challenge is that uncertainty makes it difficult to find a feasible safe control in real time. Existing methods usually simplify the problem, for example by restricting the type of uncertainty, ignoring control limits, or forgoing feasibility guarantees. In this work, we overcome these issues by proposing a robust safe control framework for bounded state-dependent uncertainties. We first guarantee the feasibility of safe control for uncertain dynamics by learning a control-limits-aware, uncertainty-robust safety index. Then we show that robust safe control can be formulated as convex problems (Convex Semi-Infinite Programming or Second-Order Cone Programming) and propose corresponding optimal solvers that can run in real time. In addition, we analyze when and how safety can be preserved under unmodeled uncertainties. Experimental results show that our method successfully finds robust safe control in real time for different uncertainties and is much less conservative than a strong baseline algorithm.
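
As a sketch of why robustified safe control can become a second-order cone program (under assumed dynamics and uncertainty bounds, not necessarily the paper's exact formulation): for control-affine dynamics $\dot{x} = (f(x)+\Delta_f) + (g(x)+\Delta_g)u$ with $\|\Delta_f\| \le \epsilon_f$ and $\|\Delta_g\| \le \epsilon_g$, a safety index $\phi$ decreases for every admissible uncertainty whenever $\nabla\phi(x)^{\top}\big(f(x)+g(x)u\big) + \epsilon_f\|\nabla\phi(x)\| + \epsilon_g\|\nabla\phi(x)\|\,\|u\| \le -\eta$, and this worst-case constraint is a second-order cone constraint in $u$.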

Due to the increasing spread of neural networks, confidence in their predictions has become more and more important. However, basic neural networks do not deliver certainty estimates and may suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation approaches is introduced, and different branches of these fields as well as the latest developments are discussed. For practical applications, we discuss different measures of uncertainty and approaches for calibrating neural networks, and give an overview of existing baselines and implementations. Different examples from the wide spectrum of challenges in different fields give an idea of the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
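
As one concrete example of the uncertainty measures discussed, a short NumPy sketch of the common ensemble-based decomposition of predictive uncertainty into a data (aleatoric) part and a model (epistemic) part; the variable names and toy ensemble are illustrative.

```python
import numpy as np

def entropy(p, axis=-1):
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

def decompose_uncertainty(member_probs):
    """member_probs: (n_members, n_classes) predictions for one input from an
    ensemble, MC dropout, or a Bayesian neural network."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                   # predictive entropy
    aleatoric = entropy(member_probs).mean()  # expected data uncertainty
    epistemic = total - aleatoric             # mutual information (model uncertainty)
    return total, aleatoric, epistemic

print(decompose_uncertainty(np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])))
```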

Embedding models for deterministic Knowledge Graphs (KG) have been extensively studied, with the purpose of capturing latent semantic relations between entities and incorporating the structured knowledge into machine learning. However, many KGs model uncertain knowledge, typically representing the inherent uncertainty of relation facts with a confidence score, and embedding such uncertain knowledge remains an unresolved challenge. Capturing uncertain knowledge will benefit many knowledge-driven applications such as question answering and semantic search by providing a more natural characterization of the knowledge. In this paper, we propose a novel uncertain KG embedding model, UKGE, which aims to preserve both structural and uncertainty information of relation facts in the embedding space. Unlike previous models that characterize relation facts with binary classification techniques, UKGE learns embeddings according to the confidence scores of uncertain relation facts. To further enhance the precision of UKGE, we also introduce probabilistic soft logic to infer confidence scores for unseen relation facts during training. We propose and evaluate two variants of UKGE based on different learning objectives. Experiments are conducted on three real-world uncertain KGs via three tasks, i.e., confidence prediction, relation fact ranking, and relation fact classification. UKGE shows effectiveness in capturing uncertain knowledge, achieving promising results and consistently outperforming baselines on these tasks.
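
A minimal sketch of the general idea of regressing an embedding-based plausibility score onto observed confidences; the scoring function, squashing, and names here are assumptions for illustration, and UKGE's actual mapping functions and PSL-based inference for unseen facts differ in detail.

```python
import numpy as np

def plausibility(h, r, t, w=1.0, b=0.0):
    """Hypothetical DistMult-style triple score squashed to [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(w * np.sum(h * r * t) + b)))

def confidence_loss(h, r, t, confidence):
    """Regress the predicted plausibility onto the observed confidence score."""
    return (plausibility(h, r, t) - confidence) ** 2

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=8) for _ in range(3))
print(confidence_loss(h, r, t, confidence=0.7))
```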
