
Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks. To address these challenges, Bayesian neural networks (BNNs) have emerged as a compelling extension of conventional neural networks, integrating uncertainty estimation into their predictive capabilities. This comprehensive primer presents a systematic introduction to the fundamental concepts of neural networks and Bayesian inference, elucidating their synergistic integration for the development of BNNs. The target audience comprises statisticians who may have a background in Bayesian methods but lack deep learning expertise, as well as machine learning practitioners proficient in deep neural networks but with limited exposure to Bayesian statistics. We provide an overview of commonly employed priors, examining their impact on model behavior and performance. Additionally, we delve into the practical considerations associated with training and inference in BNNs. Furthermore, we explore advanced topics within the realm of BNN research, acknowledging ongoing debates and controversies. By offering insights into cutting-edge developments, this primer not only equips researchers and practitioners with a solid foundation in BNNs, but also illuminates the potential applications of this dynamic field.
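Since the primer covers priors and the practicalities of BNN training and inference, a small illustration may help fix ideas. Below is a minimal sketch of a mean-field variational (Bayes-by-backprop style) Bayesian linear layer in PyTorch; the class name, the Gaussian prior, and all hyperparameters are illustrative assumptions, not code from the primer.

```python
# Minimal sketch of a variational Bayesian linear layer (an assumption
# for illustration, not the primer's reference implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    def __init__(self, in_features, out_features, prior_std=1.0):
        super().__init__()
        # Mean-field Gaussian posterior q(w) = N(mu, sigma^2), one
        # (mu, rho) pair per weight; sigma = softplus(rho) > 0.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.prior_std = prior_std

    def forward(self, x):
        sigma = F.softplus(self.w_rho)
        w = self.w_mu + sigma * torch.randn_like(sigma)  # reparameterization trick
        # Closed-form KL(q || p) for diagonal Gaussians with a N(0, prior_std^2) prior
        self.kl = (torch.log(self.prior_std / sigma)
                   + (sigma**2 + self.w_mu**2) / (2 * self.prior_std**2)
                   - 0.5).sum()
        return x @ w.t()
```

Training such a layer would minimize the negative ELBO (the data negative log-likelihood plus the accumulated `kl` terms, suitably scaled per mini-batch), and predictive uncertainty is obtained by averaging several stochastic forward passes.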

Related Content

Neural Networks is the archival journal of the world's three oldest neural modeling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes submissions of high-quality papers that contribute to the full range of neural networks research, from behavioral and brain modeling and learning algorithms, through mathematical and computational analysis, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster the development of an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science; neuroscience; learning systems; mathematical and computational analysis; engineering and applications. Official website:

Socially aware robot navigation is gaining popularity with the increase in delivery and assistive robots. The research is further fueled by the need for socially aware navigation skills in autonomous vehicles, which must move safely and appropriately in spaces shared with humans. Although most of these are ground robots, drones are also entering the field. In this paper, we present a survey of the literature on socially aware robot navigation from the past 10 years. We propose four faceted taxonomies for navigating the literature, each examining the field from a different perspective. Through this taxonomic review, we discuss current research directions and the expanding scope of applications across various domains. Further, we put forward a list of current research opportunities and discuss the challenges likely to emerge in the field.

Deep neural networks (DNNs) are widely used in application domains such as image processing, speech recognition, and natural language processing. However, testing DNN models can be challenging due to the complexity and size of their input domain. In particular, testing DNN models often requires generating or exploring large unlabeled datasets, and DNN test oracles, which identify the correct outputs for given inputs, often require expensive manual labeling effort, possibly involving multiple experts to ensure correctness. In this paper, we propose DeepGD, a black-box multi-objective test selection approach for DNN models. It reduces labeling cost by prioritizing test inputs with high fault-revealing power from large unlabeled datasets. DeepGD not only selects test inputs with high uncertainty scores so as to trigger as many mispredicted inputs as possible, but also maximizes the probability of revealing distinct faults in the DNN model by selecting diverse mispredicted inputs. Experiments conducted on four widely used datasets and five DNN models show that, in terms of fault-revealing ability: (1) white-box, coverage-based approaches fare poorly, (2) DeepGD outperforms existing black-box test selection approaches, and (3) DeepGD also provides better guidance for retraining DNN models when the selected inputs are used to augment the training set.
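The abstract describes the two selection objectives (uncertainty and diversity) without fixing an implementation. The sketch below is a greedy, scalarized stand-in for the paper's multi-objective search, using Gini impurity as the uncertainty score and feature-space distance for diversity; all function names and choices are assumptions for illustration.

```python
# Hedged sketch of uncertainty-plus-diversity test selection; DeepGD's
# actual multi-objective search is not reproduced here.
import numpy as np

def gini_uncertainty(probs):
    """Gini impurity of softmax outputs: high when the model is unsure."""
    return 1.0 - np.sum(probs**2, axis=1)

def select_tests(probs, features, budget):
    """Greedily pick inputs that are uncertain AND far from earlier picks."""
    uncertainty = gini_uncertainty(probs)
    selected = [int(np.argmax(uncertainty))]
    while len(selected) < budget:
        # Distance of every candidate to its nearest already-selected input
        d = np.linalg.norm(features[:, None, :] - features[selected][None, :, :], axis=2)
        score = uncertainty * d.min(axis=1)   # naive scalarization of both objectives
        score[selected] = -np.inf             # never re-pick an input
        selected.append(int(np.argmax(score)))
    return selected
```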

Deep neural networks have achieved significant success in recent decades, but they are not well-calibrated and often produce unreliable predictions. A large body of literature relies on uncertainty quantification to evaluate the reliability of a learning model, which is particularly important for out-of-distribution (OOD) detection and misclassification detection. We are interested in uncertainty quantification for interdependent node-level classification. We base our analysis on graph posterior networks (GPNs), which optimize a loss based on the uncertainty cross-entropy (UCE), and describe the theoretical limitations of this widely used loss. To alleviate the identified drawbacks, we propose a distance-based regularization that encourages clustered OOD nodes to remain clustered in the latent space. We conduct extensive comparison experiments on eight standard datasets and demonstrate that the proposed regularization outperforms the state of the art in both OOD detection and misclassification detection.
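To make the quantities concrete, here is a hedged sketch of the UCE loss for a Dirichlet-based node classifier together with one plausible form of a distance-based regularizer (a Laplacian-style pull between linked nodes, so that clustered nodes stay clustered in the latent space). The regularizer's exact form in the paper may differ; everything below is an illustrative assumption.

```python
# Sketch of UCE loss + a distance-based latent-space regularizer.
import torch

def uce_loss(alpha, labels):
    """Expected cross-entropy under Dirichlet(alpha): psi(alpha_0) - psi(alpha_y)."""
    a0 = alpha.sum(dim=1)
    a_y = alpha.gather(1, labels.unsqueeze(1)).squeeze(1)
    return (torch.digamma(a0) - torch.digamma(a_y)).mean()

def distance_regularizer(z, edge_index):
    """Penalize latent distance between linked nodes (illustrative form)."""
    src, dst = edge_index
    return ((z[src] - z[dst]) ** 2).sum(dim=1).mean()

def total_loss(alpha, labels, z, edge_index, lam=0.1):
    return uce_loss(alpha, labels) + lam * distance_regularizer(z, edge_index)
```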

Transformer models have been widely adopted across domains over the last few years, and large language models in particular have advanced the field of AI significantly. As these networks have grown, their capability has increased tremendously, but at the cost of a significant increase in required compute. Quantization is one of the most effective ways to reduce the computational time and memory consumption of neural networks. Many studies have shown, however, that modern transformer models tend to learn strong outliers in their activations, making them difficult to quantize. To retain acceptable performance, these outliers force activations into higher bitwidths or require different numeric formats, extra fine-tuning, or other workarounds. We show that strong outliers are related to very specific behavior of attention heads that try to learn a "no-op", or merely a partial update, of the residual. To achieve the exact zeros needed in the attention matrix for a no-update, the input to the softmax is pushed larger and larger during training, causing outliers in other parts of the network. Based on these observations, we propose two simple (independent) modifications to the attention mechanism: clipped softmax and gated attention. We empirically show that models pre-trained with our methods learn significantly smaller outliers while maintaining, and sometimes even improving, floating-point task performance. This enables full INT8 quantization of the activations without any additional effort. We demonstrate the effectiveness of our methods on both language models (BERT, OPT) and vision transformers.
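The two modifications lend themselves to a compact sketch. The clipped-softmax form below follows the idea stated in the abstract (make exact zeros in the attention matrix reachable without extreme softmax inputs); the stretch parameters and the shape of the gating module are illustrative assumptions.

```python
# Sketch of clipped softmax and gated attention (parameter values and
# gate design are assumptions for illustration).
import torch
import torch.nn.functional as F

def clipped_softmax(x, zeta=1.003, gamma=-0.003, dim=-1):
    """Stretch softmax outputs to [gamma, zeta], then clip to [0, 1], so
    exact 0 (and 1) are reachable at finite logit magnitudes."""
    return torch.clamp((zeta - gamma) * F.softmax(x, dim=dim) + gamma, 0.0, 1.0)

class GatedAttention(torch.nn.Module):
    """Scale the attention output by a learned sigmoid gate, giving each
    head an explicit way to write (almost) nothing to the residual."""
    def __init__(self, attn, d_model):
        super().__init__()
        self.attn = attn                          # any attention module
        self.gate = torch.nn.Linear(d_model, d_model)

    def forward(self, x, **kwargs):
        return torch.sigmoid(self.gate(x)) * self.attn(x, **kwargs)
```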

Storing network traffic data is key to efficient network management; however, it is becoming more challenging and costly due to ever-increasing data transmission rates, traffic volumes, and numbers of connected devices. In this paper, we explore the use of neural architectures for network traffic compression. Specifically, we consider a network scenario with multiple measurement points in a network topology. Such measurements can be interpreted as multiple time series that exhibit spatial and temporal correlations induced by the network topology, routing, and user behavior. We present Atom, a neural traffic compression method that leverages the spatial and temporal correlations present in network traffic. Atom implements a customized spatio-temporal graph neural network design that effectively exploits both types of correlations simultaneously. Experimental results show that Atom outperforms GZIP's compression ratios by 50%-65% on three real-world networks.
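The abstract does not detail Atom's internals beyond the spatio-temporal GNN, but the predict-then-encode idea behind learned traffic compression can be illustrated with a deliberately tiny stand-in: predict each measurement from its temporal history and its spatial neighbors, then store only the residuals, which are small and entropy-code well. Everything below is an assumption for illustration, not Atom's architecture.

```python
# Toy predict-then-encode step for correlated network time series.
import numpy as np

def predict(history, adjacency):
    """Predict the next sample per node from its own last value (temporal)
    and the mean of its neighbors' last values (spatial)."""
    last = history[:, -1]
    spatial = adjacency @ last / np.maximum(adjacency.sum(axis=1), 1)
    return 0.5 * last + 0.5 * spatial

def compress_step(history, actual, adjacency):
    # Residuals concentrate near zero, so a downstream entropy coder
    # spends far fewer bits on them than on the raw measurements.
    return actual - predict(history, adjacency)
```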

Recommendation systems have become popular and effective tools that help users discover items of interest by modeling user preferences and item properties based on implicit interactions (e.g., purchases and clicks). Humans perceive the world by processing signals from multiple modalities (e.g., audio, text, and images), which has inspired researchers to build recommender systems that can understand and interpret data from different modalities. Such models can capture hidden relations between modalities and potentially recover complementary information that cannot be captured by a uni-modal approach and implicit interactions alone. The goal of this survey is to provide a comprehensive review of recent research efforts on multimodal recommendation. Specifically, it lays out a clear pipeline with the techniques commonly used in each step and classifies models by the methods they use. Additionally, we have designed a code framework that helps researchers new to this area understand the principles and techniques and easily run the SOTA models. Our framework is located at: //github.com/enoche/MMRec
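As a concrete (and intentionally simplistic) illustration of the multimodal idea, the sketch below fuses per-modality item features by concatenation and projection, then scores them against a user embedding. This is one common design choice and an assumption here, not a specific model from the survey or from MMRec.

```python
# Minimal multimodal late-fusion scorer (illustrative assumption).
import torch
import torch.nn as nn

class MultimodalScorer(nn.Module):
    def __init__(self, d_user, d_text, d_image, d):
        super().__init__()
        self.fuse = nn.Linear(d_text + d_image, d)  # fuse modality features
        self.user = nn.Linear(d_user, d)

    def forward(self, user_feat, text_feat, image_feat):
        item = self.fuse(torch.cat([text_feat, image_feat], dim=-1))
        return (self.user(user_feat) * item).sum(-1)  # dot-product preference score
```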

With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation of radio access technologies beyond 5G (B5G). Artificial Intelligence (AI) and Machine Learning (ML) are not just immensely popular in service-layer applications; they have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models while overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to identify the internal workings of AI/ML black-box models. The goal of using XAI in the B5G security domain is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, making the systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, along with the use cases that general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and future research directions, building on currently conducted projects involving XAI.

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their use in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Methods and techniques that can lift this efficiency bottleneck while preserving the high accuracy of DNNs are therefore in great demand for enabling numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
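Of the compression methods listed, quantization is the easiest to show end to end. The snippet below is a minimal sketch of symmetric post-training INT8 weight quantization; real pipelines add calibration, per-channel scales, and fused integer kernels.

```python
# Core idea of symmetric INT8 quantization (a sketch, not a full pipeline).
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())  # roughly scale / 2
```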

Since real-world objects and their interactions are often multi-modal and multi-typed, heterogeneous networks have been widely used as a more powerful, realistic, and generic superclass of traditional homogeneous networks (graphs). Meanwhile, representation learning (a.k.a. embedding) has recently been intensively studied and shown effective for various network mining and analytical tasks. In this work, we aim to provide a unified framework to deeply summarize and evaluate existing research on heterogeneous network embedding (HNE), which includes but goes beyond a normal survey. Since there is already a broad body of HNE algorithms, as the first contribution of this work we provide a generic paradigm for the systematic categorization and analysis of the merits of various existing HNE algorithms. Moreover, existing HNE algorithms, though mostly claimed to be generic, are often evaluated on different datasets. While understandable given the application-driven flavor of HNE, such indirect comparisons largely hinder the proper attribution of improved task performance to effective data preprocessing versus novel technical design, especially considering the many possible ways to construct a heterogeneous network from real-world application data. Therefore, as the second contribution, we create four benchmark datasets from different sources with various properties regarding scale, structure, attribute/label availability, etc., to enable handy and fair evaluations of HNE algorithms. As the third contribution, we carefully refactor and amend the implementations of 13 popular HNE algorithms, create friendly interfaces for them, and provide all-around comparisons among them over multiple tasks and experimental settings.

In many real-world network datasets, such as co-authorship, co-citation, and email communication, relationships are complex and go beyond pairwise. Hypergraphs provide a flexible and natural tool for modeling such complex relationships, and their prevalence in many real-world networks naturally motivates the problem of learning with hypergraphs. A popular learning paradigm is hypergraph-based semi-supervised learning (SSL), where the goal is to assign labels to initially unlabeled vertices in a hypergraph. Motivated by the fact that graph convolutional networks (GCNs) have been effective for graph-based SSL, we propose HyperGCN, a novel GCN for SSL on attributed hypergraphs. Additionally, we show how HyperGCN can be used as a learning-based approach for combinatorial optimization on NP-hard hypergraph problems. We demonstrate HyperGCN's effectiveness through detailed experimentation on real-world hypergraphs.
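A hedged sketch of HyperGCN's core trick: each hyperedge is collapsed to a single pairwise edge between the two member vertices whose current representations differ most, and an ordinary GCN layer then operates on the resulting graph (the paper additionally adds "mediator" vertices, omitted here for brevity).

```python
# Collapse hyperedges to max-distance pairs (mediators omitted).
import torch

def hyperedges_to_edges(h, hyperedges):
    """For each hyperedge (a list of node indices), keep the pair of
    member nodes whose representations in h are farthest apart."""
    edges = []
    for e in hyperedges:
        members = h[torch.tensor(e)]              # (|e|, d) member features
        d = torch.cdist(members, members)         # pairwise distances
        i, j = divmod(int(d.argmax()), len(e))
        edges.append((e[i], e[j]))
    return edges
```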
