
As neural networks become increasingly widespread, confidence in their predictions has become more and more important. However, basic neural networks do not deliver certainty estimates, or they suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of this field. A comprehensive introduction to the most crucial sources of uncertainty is given, and their separation into reducible model uncertainty and irreducible data uncertainty is presented. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for calibrating neural networks, and we give an overview of existing baselines and implementations. Examples from a wide spectrum of fields give an idea of the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
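As a hedged illustration of one family of approaches mentioned above (ensembles of neural networks), the following sketch estimates a predictive distribution and its entropy from a small ensemble; the toy dataset, network sizes, and ensemble size are placeholder choices, not part of the surveyed work.

```python
# Hedged sketch: estimating predictive uncertainty with an ensemble of neural networks.
# The dataset, network sizes, and ensemble size are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Train an ensemble of identical networks from different random initializations.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

# Average the member predictions to obtain the ensemble predictive distribution.
probs = np.stack([m.predict_proba(X) for m in ensemble])   # (members, samples, classes)
mean_probs = probs.mean(axis=0)

# Predictive entropy as a simple measure of total uncertainty.
entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
print("mean predictive entropy:", entropy.mean())
```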

Related content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes high-quality submissions that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analyses, to engineering and technology applications of systems that make significant use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological studies and helps foster the development of the interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, and engineering and applications. Official website:

Neural networks are ubiquitous in many tasks, but trusting their predictions is an open issue. Uncertainty quantification is required for many applications, and disentangled aleatoric and epistemic uncertainties are particularly desirable. In this paper, we generalize methods for producing disentangled uncertainties so that they work with different uncertainty quantification approaches, and we evaluate their capability to produce disentangled uncertainties. Our results show that there is an interaction between learning aleatoric and epistemic uncertainty, which is unexpected and violates assumptions about aleatoric uncertainty; that some methods, such as Flipout, produce zero epistemic uncertainty; that aleatoric uncertainty is unreliable in the out-of-distribution setting; and that ensembles provide the best disentangling quality overall. We also explore the error produced by the number-of-samples hyperparameter in the sampling softmax function, recommending N > 100 samples. We expect that our formulation and results will help practitioners and researchers choose uncertainty methods and expand the use of disentangled uncertainties, as well as motivate additional research into this topic.
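The sampling softmax referred to above can be sketched as follows: per-class logit means and variances are sampled N times and averaged after the softmax. The function name `sampling_softmax`, the Gaussian logit model, and the example values are illustrative assumptions; the abstract's recommendation of N > 100 motivates the default sample count.

```python
# Hedged sketch of a sampling softmax: logits with per-class mean and variance
# are sampled N times and pushed through the softmax before averaging.
# Shapes and values are illustrative; the abstract recommends N > 100 samples.
import numpy as np

def sampling_softmax(logit_mean, logit_var, n_samples=200, rng=None):
    rng = np.random.default_rng(rng)
    # Draw N Gaussian logit samples: (n_samples, n_classes)
    samples = rng.normal(loc=logit_mean, scale=np.sqrt(logit_var),
                         size=(n_samples,) + logit_mean.shape)
    # Numerically stable softmax per sample.
    samples -= samples.max(axis=-1, keepdims=True)
    probs = np.exp(samples)
    probs /= probs.sum(axis=-1, keepdims=True)
    # Monte Carlo estimate of the expected softmax output.
    return probs.mean(axis=0)

# Example: three-class logits with large variance on the last class.
p = sampling_softmax(np.array([2.0, 0.5, 0.0]), np.array([0.1, 0.1, 4.0]))
print(p, p.sum())
```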

Epistemic uncertainty is the part of out-of-sample prediction error due to the lack of knowledge of the learner. Whereas previous work focused on model variance, we propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty, i.e., intrinsic unpredictability. This estimator of epistemic uncertainty includes the effect of model bias (or misspecification) and is useful in interactive learning environments arising in active learning or reinforcement learning. In addition to discussing these properties of Direct Epistemic Uncertainty Prediction (DEUP), we illustrate its advantages over existing methods for uncertainty estimation on downstream tasks including sequential model optimization and reinforcement learning. We also evaluate the quality of uncertainty estimates from DEUP for probabilistic classification of images and for estimating uncertainty about synergistic drug combinations.
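A minimal sketch of the DEUP idea under simplifying assumptions: a main predictor is trained, a second model learns to predict its out-of-sample error, and an aleatoric noise estimate is subtracted. The random-forest models, toy data, and known noise level are illustrative choices, not the paper's setup.

```python
# Hedged sketch of the DEUP idea: estimate epistemic uncertainty as predicted
# generalization error minus an estimate of aleatoric (irreducible) noise.
# The models, data split, and constant noise estimate are illustrative choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=600)   # known noise std = 0.2

X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Main predictor f and its out-of-sample squared errors on held-out data.
f = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
sq_err = (f.predict(X_val) - y_val) ** 2

# Error predictor e(x) trained to predict the generalization error of f.
e = RandomForestRegressor(random_state=0).fit(X_val, sq_err)

# Aleatoric variance estimate (here taken as known for simplicity).
aleatoric = 0.2 ** 2

# DEUP-style epistemic estimate: predicted error minus aleatoric noise.
X_query = np.array([[0.0], [5.0]])                      # in- vs out-of-range query
epistemic = np.clip(e.predict(X_query) - aleatoric, 0.0, None)
print(epistemic)
```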

Computer models are widely used in decision support for energy systems operation, planning, and policy. A system of models is often employed, where model inputs themselves arise from other computer models, with each model being developed by different teams of experts. Gaussian Process emulators can be used to approximate the behaviour of complex, computationally intensive models and to generate predictions together with a measure of uncertainty about the predicted model output. This paper presents a computationally efficient framework for propagating uncertainty within a network of models with high-dimensional outputs used for energy planning. We present a case study from a UK county council considering low-carbon technologies to transform its infrastructure to reach a net-zero carbon target. The system model considered for this case study is simple; however, the framework can be applied to larger networks of more complex models.
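A hedged sketch of the emulator idea, not the paper's framework: a Gaussian Process is fitted to a few runs of an expensive model, and its predictive uncertainty is propagated to a downstream model by Monte Carlo sampling. The functions `expensive_model` and `downstream_model`, the design points, and the kernel are stand-ins.

```python
# Hedged sketch of a Gaussian Process emulator for an expensive model, with a
# simple Monte Carlo propagation of its predictive uncertainty through a
# downstream model. The "expensive" simulators here are stand-in functions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):          # placeholder for a costly simulator
    return np.sin(3 * x) + 0.5 * x

def downstream_model(z):         # placeholder for the next model in the network
    return z ** 2

# Fit the emulator on a small design of simulator runs.
X_design = np.linspace(0, 2, 12).reshape(-1, 1)
y_design = expensive_model(X_design).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True).fit(X_design, y_design)

# Emulator prediction with uncertainty at a new input.
X_new = np.array([[1.3]])
mean, std = gp.predict(X_new, return_std=True)

# Propagate uncertainty by sampling emulator outputs into the downstream model.
rng = np.random.default_rng(0)
z_samples = rng.normal(mean, std, size=1000)
out = downstream_model(z_samples)
print("downstream mean:", out.mean(), "downstream std:", out.std())
```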

Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most learning algorithms strongly rely on the i.i.d. assumption on source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Since it was first introduced in 2011, research in DG has made great progress. In particular, intensive research on this topic has led to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, just to name a few, and has covered various vision applications such as object recognition, segmentation, action recognition, and person re-identification. In this paper, a comprehensive literature review is provided for the first time to summarize the developments in DG for computer vision over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other research fields like domain adaptation and transfer learning. Second, we conduct a thorough review of existing methods and present a categorization based on their methodologies and motivations. Finally, we conclude this survey with insights and discussions on future research directions.

Deep learning methods are achieving ever-increasing performance on many artificial intelligence tasks. A major limitation of deep models is that they are not amenable to interpretability. This limitation can be circumvented by developing post hoc techniques to explain the predictions, giving rise to the area of explainability. Recently, explainability of deep models on images and texts has achieved significant progress. In the area of graph data, graph neural networks (GNNs) and their explainability are experiencing rapid developments. However, there is neither a unified treatment of GNN explainability methods, nor a standard benchmark and testbed for evaluations. In this survey, we provide a unified and taxonomic view of current GNN explainability methods. Our unified and taxonomic treatment of this subject sheds light on the commonalities and differences of existing methods and sets the stage for further methodological developments. To facilitate evaluations, we generate a set of benchmark graph datasets specifically for GNN explainability. We summarize current datasets and metrics for evaluating GNN explainability. Altogether, this work provides a unified methodological treatment of GNN explainability and a standardized testbed for evaluations.

A comprehensive artificial intelligence system needs to not only perceive the environment with different 'senses' (e.g., seeing and hearing) but also infer the world's conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks such as visual object recognition and speech recognition using deep learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian deep learning has emerged as a unified probabilistic framework to tightly integrate deep learning and Bayesian models. In this general framework, the perception of text or images using deep learning can boost the performance of higher-level inference and in turn, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian deep learning and reviews its recent applications on recommender systems, topic models, control, etc. Besides, we also discuss the relationship and differences between Bayesian deep learning and other related topics such as Bayesian treatment of neural networks.

Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter-efficient. In this paper, we design ensembles not only over weights but also over hyperparameters to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
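A rough sketch of the hyper-deep ensemble recipe described above, with the paper's greedy member-selection step omitted: hyperparameters are drawn at random, each draw is stratified across several random initializations, and the resulting members are combined by averaging. The models, data, and hyperparameter ranges are placeholder choices.

```python
# Hedged sketch of a hyper-deep-ensemble-style procedure: random search over
# hyperparameters, stratified across random initializations, combined by
# averaging predictions. Greedy member selection from the paper is omitted.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=800, n_features=20, random_state=0)
rng = np.random.default_rng(0)

members = []
for _ in range(4):                                   # random hyperparameter draws
    hp = {"alpha": 10 ** rng.uniform(-5, -2),        # L2 regularization strength
          "learning_rate_init": 10 ** rng.uniform(-4, -2)}
    for seed in range(3):                            # stratified random inits
        members.append(
            MLPClassifier(hidden_layer_sizes=(64,), max_iter=1500,
                          random_state=seed, **hp).fit(X, y)
        )

# Combine weight- and hyperparameter-diverse members into one predictor.
ensemble_probs = np.mean([m.predict_proba(X) for m in members], axis=0)
print("ensemble accuracy:", (ensemble_probs.argmax(axis=1) == y).mean())
```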

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts made so far at handling uncertainty in general and formalizing this distinction in particular.
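One standard way to formalize this distinction, written here in our own notation rather than the paper's, decomposes the total predictive entropy of a Bayesian model into an expected (aleatoric) entropy and a mutual-information (epistemic) term:

```latex
% Total predictive uncertainty = aleatoric part + epistemic part,
% for a predictive distribution p(y \mid x, \theta) and posterior p(\theta \mid \mathcal{D}).
\underbrace{\mathcal{H}\!\left[\,\mathbb{E}_{p(\theta \mid \mathcal{D})}\, p(y \mid x, \theta)\,\right]}_{\text{total}}
=
\underbrace{\mathbb{E}_{p(\theta \mid \mathcal{D})}\,\mathcal{H}\!\left[\,p(y \mid x, \theta)\,\right]}_{\text{aleatoric}}
+
\underbrace{\mathcal{I}\!\left[\,y ; \theta \mid x, \mathcal{D}\,\right]}_{\text{epistemic}}
```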

Reinforcement learning is one of the core components in designing an artificial intelligence system emphasizing real-time response. Reinforcement learning enables a system to take actions within an arbitrary environment, either with or without prior knowledge of the environment model. In this paper, we present a comprehensive study of reinforcement learning covering various dimensions, including challenges, recent developments in state-of-the-art techniques, and future directions. The fundamental objective of this paper is to provide a framework for presenting the available reinforcement learning methods that is informative enough and simple to follow for new researchers and academics in this domain, considering the latest concerns. First, we illustrate the core techniques of reinforcement learning in an easily understandable and comparable way. We then analyze and describe recent developments in reinforcement learning approaches. Our analysis indicates that most of the models focus on tuning policy values rather than on other aspects of a particular state of reasoning.

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics and the main datasets used for evaluating model performance, as well as recent benchmarking efforts. Finally, we conclude this paper and discuss remaining challenges and possible directions on this topic.
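As a hedged sketch of one of the four schemes, magnitude-based parameter pruning can be illustrated as follows; the layer shape, sparsity level, and helper function `prune_by_magnitude` are toy choices for illustration only.

```python
# Hedged sketch of magnitude-based parameter pruning: weights with small
# absolute value are zeroed out, shrinking the model with (ideally) little
# loss in accuracy. The layer shape and sparsity level are toy values.
import numpy as np

def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))               # toy dense-layer weight matrix
W_pruned, mask = prune_by_magnitude(W, sparsity=0.7)
print("remaining nonzero fraction:", mask.mean())
```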
