
Probabilistic deep learning is deep learning that accounts for uncertainty, both model uncertainty and data uncertainty. It is based on the use of probabilistic models and deep neural networks. We distinguish two approaches to probabilistic deep learning: probabilistic neural networks and deep probabilistic models. The former employs deep neural networks that utilize probabilistic layers which can represent and process uncertainty; the latter uses probabilistic models that incorporate deep neural network components which capture complex non-linear stochastic relationships between the random variables. We discuss some major examples of each approach, including Bayesian neural networks and mixture density networks (for probabilistic neural networks), and variational autoencoders, deep Gaussian processes and deep mixed effects models (for deep probabilistic models). TensorFlow Probability is a library for probabilistic modeling and inference which can be used for both approaches to probabilistic deep learning. We include code examples for illustration.
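As an illustration of the probabilistic-layer approach, the following minimal sketch (assuming TensorFlow and TensorFlow Probability are installed; the layer sizes and toy data are arbitrary choices) builds a small regression network whose final layer outputs a Normal distribution rather than a point estimate, so the network represents data (aleatoric) uncertainty and is trained by minimizing the negative log-likelihood.

```python
# Minimal sketch: a probabilistic neural network whose last layer outputs a
# distribution, trained by negative log-likelihood.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy heteroscedastic data: the noise level grows with |x|.
x = np.linspace(-1.0, 1.0, 256).astype(np.float32)[:, None]
y = (np.sin(3 * x) + np.random.normal(0.0, 0.1 + 0.2 * np.abs(x))).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),  # predicts mean and an unconstrained scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Negative log-likelihood of the targets under the predicted distribution.
nll = lambda y_true, y_dist: -y_dist.log_prob(y_true)
model.compile(optimizer=tf.keras.optimizers.Adam(0.01), loss=nll)
model.fit(x, y, epochs=200, verbose=0)

pred = model(x)               # a Normal distribution, not a point estimate
print(pred.mean().shape, pred.stddev().shape)
```

Replacing the Dense layers with tfp.layers.DenseVariational (with suitable prior and posterior functions) would additionally place distributions over the weights, giving a Bayesian neural network that also captures model uncertainty.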

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japan Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural network research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematics and computational analysis, and engineering and applications. Official website:

Learning the structure of a causal graphical model using both observational and interventional data is a fundamental problem in many scientific fields. A promising direction is continuous optimization for score-based methods, which efficiently learn the causal graph in a data-driven manner. However, to date, such methods either require constrained optimization to enforce acyclicity or lack convergence guarantees. In this paper, we present ENCO, an efficient structure learning method for directed, acyclic causal graphs leveraging observational and interventional data. ENCO formulates the graph search as an optimization of independent edge likelihoods, with the edge orientation being modeled as a separate parameter. Consequently, we can provide convergence guarantees for ENCO under mild conditions, without constraining the score function with respect to acyclicity. In experiments, we show that ENCO can efficiently recover graphs with hundreds of nodes, an order of magnitude larger than what was previously possible, while handling deterministic variables and latent confounders.
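The key idea in the abstract, keeping edge existence and edge orientation as separate parameters whose product gives a directed-edge probability, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the names gamma and theta and the sampling step are assumptions about how such a parameterization could look.

```python
# Illustrative sketch (not the ENCO reference code): edge existence and
# orientation are separate parameters, and the probability of a directed
# edge i -> j is p(edge exists) * p(edge oriented i -> j).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

num_vars = 4
rng = np.random.default_rng(0)

gamma = rng.normal(size=(num_vars, num_vars))  # existence logits, one per pair
theta = rng.normal(size=(num_vars, num_vars))  # orientation logits (conceptually theta[i, j] = -theta[j, i])

def edge_probabilities(gamma, theta):
    """P(i -> j) = sigma(gamma[i, j]) * sigma(theta[i, j])."""
    p = sigmoid(gamma) * sigmoid(theta)
    np.fill_diagonal(p, 0.0)  # no self-loops
    return p

p_edges = edge_probabilities(gamma, theta)
# Graph samples used inside a likelihood/score estimate would be drawn as:
sampled_adjacency = (rng.uniform(size=p_edges.shape) < p_edges).astype(int)
print(p_edges.round(2))
print(sampled_adjacency)
```

Because each edge is optimized through its own unconstrained logits, no explicit acyclicity constraint appears in the objective, which is the property the abstract highlights.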

Metric learning, especially deep metric learning, has been widely developed for large-scale image data. However, in many real-world applications we only have access to vectorized input data. Moreover, on the one hand, well-labeled data is usually limited due to the high annotation cost; on the other hand, real data commonly arrives as a stream and must be processed online. In these scenarios, fashionable deep metric learning is no longer suitable. To this end, we reconsider traditional shallow online metric learning and develop an online progressive deep metric learning (ODML) framework that constructs a metric-algorithm-based deep network. Specifically, we take an online metric learning algorithm as a metric-algorithm-based layer (i.e., a metric layer), follow it with a nonlinear layer, and then stack these layers in a fashion similar to deep learning. Different from shallow online metric learning, which can only learn one metric space (feature transformation), the proposed ODML is able to learn multiple hierarchical metric spaces. Furthermore, by learning progressively and nonlinearly, ODML has a stronger learning ability than traditional shallow online metric learning when the available training data is limited. To make the learning process more explainable and theoretically guaranteed, we also provide a theoretical analysis. The proposed ODML enjoys several nice properties, indeed learns a metric progressively, and performs better on benchmark datasets. Extensive experiments with different settings have been conducted to verify these properties of the proposed ODML.
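A minimal sketch of the stacking idea described above, metric layers interleaved with nonlinearities and updated online, is given below. The class name, the Mahalanobis-style parameterization, and the toy update rule are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch of stacking online metric layers with nonlinearities
# (the parameterization and update rule are assumptions, not ODML itself).
import numpy as np

class MetricLayer:
    """A linear map L; distances in its output space correspond to a
    Mahalanobis metric M = L^T L in its input space."""
    def __init__(self, dim_in, dim_out, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.L = rng.normal(scale=0.1, size=(dim_out, dim_in))
        self.lr = lr

    def forward(self, x):
        return self.L @ x

    def online_update(self, x1, x2, similar):
        """Toy online step: pull similar pairs together, push dissimilar apart."""
        d = self.L @ (x1 - x2)
        grad = np.outer(d, x1 - x2)          # gradient of 0.5 * ||L(x1 - x2)||^2
        self.L -= self.lr * grad if similar else -self.lr * grad

def relu(x):
    return np.maximum(x, 0.0)

layers = [MetricLayer(8, 8, seed=k) for k in range(3)]   # stacked metric spaces

def embed(x):
    for layer in layers:
        x = relu(layer.forward(x))
    return x

# One streaming (online) pair: each layer is updated on its own inputs,
# so the metric spaces are learned progressively, layer by layer.
x1, x2 = np.random.randn(8), np.random.randn(8)
h1, h2 = x1, x2
for layer in layers:
    layer.online_update(h1, h2, similar=True)
    h1, h2 = relu(layer.forward(h1)), relu(layer.forward(h2))
print(np.linalg.norm(embed(x1) - embed(x2)))
```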

In this paper, we study the properties of robust nonparametric estimation using deep neural networks for regression models with heavy-tailed error distributions. We establish non-asymptotic error bounds for a class of robust nonparametric regression estimators using deep neural networks with ReLU activation under suitable smoothness conditions on the regression function and mild conditions on the error term. In particular, we only assume that the error distribution has a finite p-th moment with p greater than one. We also show that the deep robust regression estimators are able to circumvent the curse of dimensionality when the distribution of the predictor is supported on an approximate lower-dimensional set. An important feature of our error bound is that, for ReLU neural networks with network width and network size (number of parameters) no more than the order of the square of the dimensionality d of the predictor, our excess risk bounds depend sub-linearly on d. Our assumption relaxes the exact manifold support assumption, which could be restrictive and unrealistic in practice. We also relax several crucial assumptions on the data distribution, the target regression function and the neural networks required in the recent literature. Our simulation studies demonstrate that the robust methods can significantly outperform the least squares method when the errors have heavy-tailed distributions and illustrate that the choice of loss function is important in the context of deep nonparametric regression.
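To make the role of the loss function concrete, the following sketch contrasts the squared loss with the Huber loss on heavy-tailed errors. The Huber loss is one common robust choice used here purely for illustration; it is not necessarily the specific loss analyzed in the paper.

```python
# Sketch: squared loss vs. a robust (Huber) loss under heavy-tailed errors.
import numpy as np

def squared_loss(residual):
    return 0.5 * residual ** 2

def huber_loss(residual, delta=1.0):
    abs_r = np.abs(residual)
    quadratic = 0.5 * abs_r ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)

rng = np.random.default_rng(0)
# Student-t errors with 2 degrees of freedom: the p-th moment is finite only
# for p < 2, so the variance does not exist and outliers are frequent.
errors = rng.standard_t(df=2, size=10_000)

print("mean squared loss:", squared_loss(errors).mean())   # dominated by outliers
print("mean Huber loss:  ", huber_loss(errors).mean())     # grows only linearly in the tails
```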

Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLNs is computationally intensive, making industrial-scale applications of MLNs very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLNs. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.

Knowledge graph reasoning, which aims at predicting the missing facts through reasoning with the observed facts, is critical to many applications. Such a problem has been widely explored by traditional logic rule-based approaches and recent knowledge graph embedding methods. A principled logic rule-based approach is the Markov Logic Network (MLN), which is able to leverage domain knowledge with first-order logic and meanwhile handle their uncertainty. However, the inference of MLNs is usually very difficult due to the complicated graph structures. Different from MLNs, knowledge graph embedding methods (e.g. TransE, DistMult) learn effective entity and relation embeddings for reasoning, which are much more effective and efficient. However, they are unable to leverage domain knowledge. In this paper, we propose the probabilistic Logic Neural Network (pLogicNet), which combines the advantages of both methods. A pLogicNet defines the joint distribution of all possible triplets by using a Markov logic network with first-order logic, which can be efficiently optimized with the variational EM algorithm. In the E-step, a knowledge graph embedding model is used for inferring the missing triplets, while in the M-step, the weights of logic rules are updated based on both the observed and predicted triplets. Experiments on multiple knowledge graphs prove the effectiveness of pLogicNet over many competitive baselines.
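Since the abstract names TransE and DistMult as examples of the embedding models used in the E-step, a minimal sketch of DistMult's triplet scoring is shown below; the embedding dimension, toy entities, and relation are made up for illustration.

```python
# Sketch of DistMult triplet scoring, the kind of embedding model a
# pLogicNet E-step could use to infer missing triplets (toy example).
import numpy as np

dim = 16
rng = np.random.default_rng(0)

entity_emb = {"Paris": rng.normal(size=dim), "France": rng.normal(size=dim)}
relation_emb = {"capital_of": rng.normal(size=dim)}

def distmult_score(head, relation, tail):
    """DistMult: score(h, r, t) = sum_k h_k * r_k * t_k (higher = more plausible)."""
    return float(np.sum(entity_emb[head] * relation_emb[relation] * entity_emb[tail]))

print(distmult_score("Paris", "capital_of", "France"))
# A triplet probability can then be obtained by passing the score through a sigmoid.
```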

Deep learning (DL) is a high-dimensional data reduction technique for constructing high-dimensional predictors in input-output models. DL is a form of machine learning that uses hierarchical layers of latent features. In this article, we review the state of the art of deep learning from a modeling and algorithmic perspective. We provide a list of successful areas of application in Artificial Intelligence (AI), Image Processing, Robotics and Automation. Deep learning is predictive in its nature rather than inferential and can be viewed as a black-box methodology for high-dimensional function estimation.

Deep learning is the mainstream technique for many machine learning tasks, including image recognition, machine translation, speech recognition, and so on. It has outperformed conventional methods in various fields and achieved great success. Unfortunately, our understanding of how it works remains unclear, and it is of central importance to lay down the theoretical foundations of deep learning. In this work, we give a geometric view to understand deep learning: we show that the fundamental principle behind its success is the manifold structure in data, namely that natural high-dimensional data concentrates close to a low-dimensional manifold, and deep learning learns the manifold and the probability distribution on it. We further introduce the concept of rectified linear complexity for a deep neural network, which measures its learning capability, and the rectified linear complexity of an embedding manifold, which describes the difficulty of learning it. We then show that for any deep neural network with a fixed architecture, there exists a manifold that cannot be learned by the network. Finally, we propose to apply optimal mass transportation theory to control the probability distribution in the latent space.

A fundamental computation for statistical inference and accurate decision-making is to compute the marginal probabilities or most probable states of task-relevant variables. Probabilistic graphical models can efficiently represent the structure of such complex data, but performing these inferences is generally difficult. Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops. Here we use Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves these inference tasks. We first show that the architecture of GNNs is well-matched to inference tasks. We then demonstrate the efficacy of this inference approach by training GNNs on a collection of graphical models and showing that they substantially outperform belief propagation on loopy graphs. Our message-passing algorithms generalize out of the training set to larger graphs and graphs with different structure.
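The message-passing step that such a GNN learns can be sketched as follows. The aggregation and update functions below are generic placeholders with random (untrained) weights standing in for the learned networks in the paper; a trained readout would map the final node states to marginal estimates.

```python
# Sketch of one learned message-passing step on a loopy graph: messages are
# computed from pairs of node states, summed at the receiver, and used to
# update the states. Weights are random stand-ins for trained networks.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
num_nodes, hidden = 5, 8
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]   # contains a loop 0-1-2-3-0

h = rng.normal(size=(num_nodes, hidden))            # node hidden states
W_msg = rng.normal(scale=0.1, size=(2 * hidden, hidden))   # "message" network
W_upd = rng.normal(scale=0.1, size=(2 * hidden, hidden))   # "update" network

def message_passing_step(h):
    incoming = np.zeros_like(h)
    for i, j in edges:                               # send messages both ways
        incoming[j] += relu(np.concatenate([h[i], h[j]]) @ W_msg)
        incoming[i] += relu(np.concatenate([h[j], h[i]]) @ W_msg)
    return relu(np.concatenate([h, incoming], axis=1) @ W_upd)

for _ in range(3):                                   # a few rounds of propagation
    h = message_passing_step(h)
print(h.shape)                                       # (num_nodes, hidden)
```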

Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation, which motivates our construction of a novel box lattice and accompanying probability measure to capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.
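A minimal sketch of the box-based probability measure described above: each concept is an axis-aligned box, its probability is its volume, and joint and conditional probabilities come from box intersections. The specific boxes below are hand-picked for illustration; the actual model learns box coordinates and uses smoothed, differentiable intersections so gradients flow even through disjoint boxes.

```python
# Sketch of the box-lattice idea: concepts are axis-aligned boxes in [0, 1]^d,
# P(concept) = volume of its box, P(A, B) = volume of the intersection.
import numpy as np

def volume(box):
    lo, hi = box
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def intersection(box_a, box_b):
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    return (np.maximum(lo_a, lo_b), np.minimum(hi_a, hi_b))

# Two overlapping concepts and one disjoint from the first (anticorrelation).
animal = (np.array([0.0, 0.0]), np.array([0.6, 0.8]))
pet    = (np.array([0.3, 0.2]), np.array([0.7, 0.9]))
rock   = (np.array([0.7, 0.0]), np.array([1.0, 0.3]))

p_animal = volume(animal)
p_joint = volume(intersection(animal, pet))
print("P(animal)       =", p_animal)
print("P(animal, pet)  =", p_joint)
print("P(pet | animal) =", p_joint / p_animal)
print("P(animal, rock) =", volume(intersection(animal, rock)))   # 0: disjoint concepts
```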

We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as TensorFlow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples.
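The compilation idea can be sketched as matrix operations over relation adjacency matrices: a chained rule such as uncle(X, Y) :- brother(X, Z), parent(Z, Y) roughly becomes a sequence of (sparse) matrix-vector products, which is why it maps naturally onto frameworks such as TensorFlow or Theano. The toy relations and entity indexing below are illustrative only, not TensorLog's actual API or data structures.

```python
# Sketch of compiling a chained logical query into matrix-vector products
# (illustrative only; not TensorLog's actual API).
import numpy as np

entities = ["ann", "bob", "carl", "dana"]
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

def relation_matrix(pairs):
    M = np.zeros((n, n))
    for h, t in pairs:
        M[idx[h], idx[t]] = 1.0
    return M

brother = relation_matrix([("bob", "ann")])                   # brother(bob, ann)
parent  = relation_matrix([("ann", "carl"), ("ann", "dana")])  # ann's children

def query_uncle(x):
    """uncle(X, Y) :- brother(X, Z), parent(Z, Y): follow brother, then parent."""
    v = np.zeros(n)
    v[idx[x]] = 1.0            # one-hot vector for the bound query argument
    scores = v @ brother @ parent
    return {e: s for e, s in zip(entities, scores) if s > 0}

print(query_uncle("bob"))      # {'carl': 1.0, 'dana': 1.0}
```

With weighted (rather than 0/1) relation matrices, the same chain of products yields query scores that are differentiable in the rule and fact weights, which is what allows gradient-based tuning of the probabilistic logic.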
