
A hybrid model involves the cooperation of an interpretable model and a complex black box. At inference, any input of the hybrid model is assigned to either its interpretable or complex component based on a gating mechanism. The advantages of such models over classical ones are two-fold: 1) They grant users precise control over the level of transparency of the system, and 2) They can potentially perform better than a standalone black box since redirecting some of the inputs to an interpretable model implicitly acts as regularization. Still, despite their high potential, hybrid models remain under-studied in the interpretability/explainability literature. In this paper, we address this gap by presenting a thorough investigation of such models from three perspectives: Theory, Taxonomy, and Methods. First, we explore the theory behind the generalization of hybrid models from the Probably-Approximately-Correct (PAC) perspective. A consequence of our PAC guarantee is the existence of a sweet spot for the optimal transparency of the system. When such a sweet spot is attained, a hybrid model can potentially perform better than a standalone black box. Second, we provide a general taxonomy for the different ways of training hybrid models: the Post-Black-Box and Pre-Black-Box paradigms. These approaches differ in the order in which the interpretable and complex components are trained. We show where the state-of-the-art hybrid models Hybrid-Rule-Set and Companion-Rule-List fall in this taxonomy. Third, we implement the two paradigms in a single method: HybridCORELS, which extends the CORELS algorithm to hybrid modeling. By leveraging CORELS, HybridCORELS provides a certificate of optimality of its interpretable component and precise control over transparency. We finally show empirically that HybridCORELS is competitive with existing hybrid models, and performs just as well as a standalone black box (or even better) while being partly transparent.
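
As a concrete illustration of the gating idea described above, here is a minimal sketch of hybrid-model inference. The names `rule_list` and `black_box` are hypothetical stand-ins for the interpretable and complex components, not HybridCORELS itself.

```python
# A minimal sketch of hybrid-model inference (illustrative only, not the
# HybridCORELS implementation). The gating mechanism here is rule coverage:
# an input handled by no rule falls through to the black box.

def hybrid_predict(x, rule_list, black_box):
    """Route the input to the interpretable component when a rule covers it,
    otherwise fall back to the complex black box."""
    for condition, label in rule_list:        # interpretable component: ordered rules
        if condition(x):                      # gating: first matching rule wins
            return label, "interpretable"
    return black_box.predict([x])[0], "black-box"

# The transparency of the system is the fraction of inputs handled by
# the rule list rather than the black box.
```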

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate a taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both "vehicle" and "steel structures". For some, however, this merely means that "car" is part of several different taxonomies. A taxonomy might also simply organize things into groups, or be an alphabetical list; in that case, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
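
To make the tree structure concrete, here is a toy sketch of a hierarchical taxonomy in Python, using the "car" example above; the node names are illustrative only.

```python
# A toy taxonomy tree. Nested dicts stand in for parent-child edges; listing
# "car" under two parents mimics the network-style scheme described above.

taxonomy = {
    "thing": {                      # root node: applies to all objects
        "vehicle": {
            "car": {},              # nodes below the root are more specific
            "bicycle": {},
        },
        "steel structure": {
            "car": {},              # 'car' also appears under a second parent
        },
    }
}

def paths_to(tree, label, path=()):
    """Yield every root-to-node path ending at the given label."""
    for name, children in tree.items():
        if name == label:
            yield path + (name,)
        yield from paths_to(children, label, path + (name,))

print(list(paths_to(taxonomy, "car")))
# [('thing', 'vehicle', 'car'), ('thing', 'steel structure', 'car')]
```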

The field of machine learning (ML) has gained widespread adoption, leading to significant demand for adapting ML to specific scenarios, which remains expensive and non-trivial. The predominant approaches to automating the solution of ML tasks (e.g., AutoML) are often time-consuming and hard for human developers to understand. In contrast, though human engineers have an incredible ability to understand tasks and reason about solutions, their experience and knowledge are often sparse and difficult for quantitative approaches to utilize. In this paper, we aim to bridge the gap between machine intelligence and human knowledge by introducing a novel framework, MLCopilot, which leverages state-of-the-art LLMs to develop ML solutions for novel tasks. We showcase the possibility of extending the capability of LLMs to comprehend structured inputs and perform thorough reasoning for solving novel ML tasks. We find that, with some dedicated design, an LLM can (i) observe existing experiences of ML tasks and (ii) reason effectively to deliver promising results for new tasks. The generated solutions can be used directly to achieve high levels of competitiveness.
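
As a rough sketch of the observe-then-reason pattern the abstract describes: the code below is our own toy illustration, with a hypothetical `call_llm` helper, an invented prompt format, and a naive retrieval function, none of which are MLCopilot's actual implementation.

```python
# A hedged sketch of (i) observing past ML experiences and (ii) asking an
# LLM to reason about a new task. All helpers and field names are assumed.

def retrieve_similar(db, query, k=3):
    """Toy retrieval: rank stored task records by word overlap with the query."""
    overlap = lambda e: len(set(e["task"].split()) & set(query.split()))
    return sorted(db, key=overlap, reverse=True)[:k]

def suggest_ml_solution(task_description, experience_db, call_llm):
    # (i) observe: pull records of similar past ML tasks and their outcomes
    examples = retrieve_similar(experience_db, task_description)
    context = "\n".join(
        f"Task: {e['task']}\nSolution: {e['solution']}\nResult: {e['score']}"
        for e in examples
    )
    # (ii) reason: ask the LLM to adapt past experience to the new task
    prompt = (
        f"Past ML experiences:\n{context}\n\n"
        f"New task: {task_description}\n"
        "Propose a model and hyperparameter configuration, with reasoning."
    )
    return call_llm(prompt)
```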

Intelligent learning diagnosis is a critical engine of intelligent tutoring systems; it aims to estimate learners' current knowledge mastery and predict their future learning performance. The central challenge for traditional learning diagnosis methods is balancing diagnostic accuracy and interpretability. Although existing psychometric-based learning diagnosis methods provide some domain interpretation through cognitive parameters, their shallow structure offers insufficient modeling capability for large-scale learning data. Deep-learning-based learning diagnosis methods have improved the accuracy of learning performance prediction, but their inherent black-box properties lead to a lack of interpretability, making their results untrustworthy for educational applications. To address this problem, we propose a unified interpretable intelligent learning diagnosis framework that benefits from the powerful representation learning ability of deep learning and the interpretability of psychometrics. It achieves better learning-performance prediction and provides interpretability from three aspects: cognitive parameters, the learner-resource response network, and the weights of the self-attention mechanism. Within this framework, the paper presents a two-channel learning diagnosis mechanism, LDM-ID, as well as a three-channel mechanism, LDM-HMI. Experiments on two real-world datasets and a simulation dataset show that our method predicts learners' performance more accurately than state-of-the-art models, and can provide valuable educational interpretability for applications such as precise learning resource recommendation and personalized tutoring in intelligent tutoring systems.
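
To illustrate the general pattern of pairing deep representations with interpretable psychometric parameters, here is a toy IRT-style model. This is our own assumption-laden sketch of the pattern, not the paper's LDM-ID or LDM-HMI architecture.

```python
import torch
import torch.nn as nn

# A hedged illustration: learned embeddings give modeling capacity, while the
# final prediction goes through interpretable ability/difficulty parameters.

class ToyDiagnosisModel(nn.Module):
    def __init__(self, n_learners, n_items, dim=16):
        super().__init__()
        self.learner_emb = nn.Embedding(n_learners, dim)  # deep representations
        self.item_emb = nn.Embedding(n_items, dim)
        self.theta = nn.Linear(dim, 1)   # interpretable learner ability
        self.diff = nn.Linear(dim, 1)    # interpretable item difficulty

    def forward(self, learner_id, item_id):
        ability = self.theta(self.learner_emb(learner_id))      # cognitive params
        difficulty = self.diff(self.item_emb(item_id))          # stay inspectable
        return torch.sigmoid(ability - difficulty).squeeze(-1)  # P(correct answer)
```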

Value decomposition is widely used in cooperative multi-agent reinforcement learning; however, its implicit credit assignment mechanism is not yet fully understood due to black-box networks. In this work, we study an interpretable value decomposition framework via the family of generalized additive models. We present a novel method, named Neural Attention Additive Q-learning (N$\text{A}^\text{2}$Q), providing inherent intelligibility of collaboration behavior. N$\text{A}^\text{2}$Q can explicitly factorize the optimal joint policy into individual policies through enriched shape functions that model all possible coalitions of agents. Moreover, we construct identity semantics to promote estimating credits together with the global state and individual value functions, where local semantic masks help us diagnose whether each agent captures task-relevant information. Extensive experiments show that N$\text{A}^\text{2}$Q consistently achieves superior performance compared to state-of-the-art methods on all challenging tasks, while yielding human-like interpretability.
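
A minimal sketch of the generalized-additive family the abstract builds on, where the joint value is a sum of per-agent shape functions; this illustrates additive mixing only, not N$\text{A}^\text{2}$Q's attention or identity-semantics components.

```python
import torch
import torch.nn as nn

# Generalized-additive value decomposition sketch: Q_tot = sum_i g_i(Q_i).
# Each shape function's contribution can be inspected per agent, which is
# what makes the additive family intelligible.

class AdditiveMixer(nn.Module):
    def __init__(self, n_agents, hidden=32):
        super().__init__()
        # one small shape function g_i per agent, applied to its utility Q_i
        self.shape_fns = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(n_agents)
        )

    def forward(self, agent_qs):                 # agent_qs: (batch, n_agents)
        terms = [g(agent_qs[:, i:i + 1]) for i, g in enumerate(self.shape_fns)]
        return torch.stack(terms, dim=0).sum(dim=0)  # joint value: (batch, 1)
```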

In recent years, Graph Neural Networks have reported outstanding performance in tasks like community detection, molecule classification, and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the models' decisions is essential. Counterfactual Explanations (CE) provide this understanding through examples. Moreover, the literature on CE is flourishing with novel explanation methods tailored to graph learning. In this survey, we analyse existing Graph Counterfactual Explanation methods, providing the reader with an organisation of the literature according to a uniform formal notation for definitions, datasets, and metrics, thus simplifying comparisons of the methods' advantages and disadvantages. We discuss seven methods and sixteen synthetic and real datasets, providing details on the possible generation strategies. We highlight the most common evaluation strategies and formalise nine of the metrics used in the literature. We also introduce the evaluation framework GRETEL and show how it can be extended and used, adding a further dimension of comparison that encompasses reproducibility aspects. Finally, we discuss how counterfactual explanation interplays with privacy and fairness, before delving into open challenges and future work.
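
For intuition, a graph counterfactual can be found in the simplest possible way by brute-force search over edge edits. Here `classify` is a hypothetical black-box graph classifier; the methods the survey covers are far more efficient than this sketch.

```python
import itertools

# A hedged sketch of a graph counterfactual explanation: the smallest set of
# edge additions/removals that flips the classifier's prediction.

def graph_counterfactual(edges, nodes, classify, max_edits=2):
    original = classify(set(edges))
    candidates = list(itertools.combinations(sorted(nodes), 2))  # all node pairs
    for k in range(1, max_edits + 1):                 # grow the edit budget
        for toggles in itertools.combinations(candidates, k):
            edited = set(edges).symmetric_difference(toggles)  # add or remove
            if classify(edited) != original:
                return toggles    # minimal edit set that changes the prediction
    return None                   # no counterfactual within the budget
```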

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems and applications. We start by introducing popular model compression methods, including pruning, factorization, quantization as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce the efficient deep learning system design from both software and hardware perspectives.
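
As a concrete example of one compression method the overview covers, here is a minimal global magnitude-pruning sketch; real pipelines typically fine-tune after pruning, which is omitted here.

```python
import torch

# Global magnitude pruning: zero out the weights with the smallest absolute
# values across the whole model, leaving a sparse network behind.

def magnitude_prune(model, sparsity=0.5):
    weights = torch.cat([p.abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(weights, sparsity)    # cut-off magnitude
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())    # keep only large weights
```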

Graph Neural Networks (GNNs) have received considerable attention for graph-structured data learning across a wide variety of tasks. The well-designed propagation mechanism, which has been demonstrated to be effective, is the most fundamental part of GNNs. Although most GNNs basically follow a message-passing scheme, little effort has been made to discover and analyze their essential relations. In this paper, we establish a surprising connection between different propagation mechanisms and a unified optimization problem, showing that despite the proliferation of various GNNs, their proposed propagation mechanisms are in fact the optimal solutions to a feature fitting function over a wide class of graph kernels with a graph regularization term. Our proposed unified optimization framework, summarizing the commonalities between several of the most representative GNNs, not only provides a macroscopic view for surveying the relations between different GNNs, but also opens up new opportunities for flexibly designing new GNNs. With the proposed framework, we discover that existing works usually utilize naive graph convolutional kernels for the feature fitting function, and we further develop two novel objective functions considering adjustable graph kernels with low-pass or high-pass filtering capabilities, respectively. Moreover, we provide convergence proofs and expressive power comparisons for the proposed models. Extensive experiments on benchmark datasets clearly show that the proposed GNNs not only outperform state-of-the-art methods but also effectively alleviate over-smoothing, further verifying the feasibility of designing GNNs with our unified optimization framework.
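
As an illustration (in our own notation, following the common graph-signal-denoising formulation rather than the paper's exact equations), many propagation rules can be read as solving a feature-fitting term plus a graph regularizer, where $H$ is the input feature matrix and $\tilde{L}$ a normalized graph Laplacian:

```latex
\min_{Z}\; \underbrace{\|Z - H\|_F^2}_{\text{feature fitting}}
\;+\; \lambda\, \underbrace{\operatorname{tr}\!\left(Z^\top \tilde{L} Z\right)}_{\text{graph regularization}},
\qquad
Z^\star = \left(I + \lambda \tilde{L}\right)^{-1} H .
```

The first term keeps the propagated features $Z$ close to the inputs, the second smooths them over the graph, and a first-order approximation of $Z^\star$ recovers a GCN-like propagation step.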

Machine learning plays a role in many deployed decision systems, often in ways that are difficult or impossible for human stakeholders to understand. Explaining, in a human-understandable way, the relationship between the input and output of machine learning models is essential to the development of trustworthy machine-learning-based systems. A burgeoning body of research seeks to define the goals and methods of explainability in machine learning. In this paper, we review and categorize research on counterfactual explanations, a specific class of explanation that describes what could have happened had the input to a model been changed in a particular way. Modern approaches to counterfactual explainability in machine learning draw connections to established legal doctrine in many countries, making them appealing for fielded systems in high-impact areas such as finance and healthcare. Thus, we design a rubric with desirable properties of counterfactual explanation algorithms and comprehensively evaluate all currently proposed algorithms against that rubric. Our rubric provides easy comparison and comprehension of the advantages and disadvantages of different approaches and serves as an introduction to major research themes in this field. We also identify gaps and discuss promising research directions in the space of counterfactual explainability.
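
In the spirit of the well-known gradient-based formulation of Wachter et al. (a sketch under our own assumptions, not any specific algorithm evaluated in the paper), a counterfactual can be searched for as a nearby input the model assigns to a target class:

```python
import torch

# Gradient-based counterfactual search: minimize prediction loss toward the
# target class plus an L1 distance penalty that keeps x' close to x.

def counterfactual(model, x, target, steps=500, lam=0.1, lr=0.05):
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred_loss = torch.nn.functional.cross_entropy(
            model(x_cf.unsqueeze(0)), torch.tensor([target]))
        dist = torch.norm(x_cf - x, p=1)      # stay close to the original input
        (pred_loss + lam * dist).backward()
        opt.step()
    return x_cf.detach()                       # candidate counterfactual x'
```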

Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related, and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the Predictive, Descriptive, Relevant (PDR) framework for discussing interpretations. The PDR framework provides three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post-hoc categories, with sub-groups including sparsity, modularity and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often under-appreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.

Many learning tasks require dealing with graph data, which contains rich relational information among elements. Modeling physics systems, learning molecular fingerprints, predicting protein interfaces, and classifying diseases all require a model that learns from graph inputs. In other domains, such as learning from non-structural data like texts and images, reasoning over extracted structures, like the dependency trees of sentences and the scene graphs of images, is an important research topic that also needs graph reasoning models. Graph neural networks (GNNs) are connectionist models that capture the dependence of graphs via message passing between the nodes of graphs. Unlike standard neural networks, graph neural networks retain a state that can represent information from their neighborhood with arbitrary depth. Although the primitive graph neural networks were found difficult to train to a fixed point, recent advances in network architectures, optimization techniques, and parallel computation have enabled successful learning with them. In recent years, systems based on graph convolutional networks (GCN) and gated graph neural networks (GGNN) have demonstrated ground-breaking performance on many of the tasks mentioned above. In this survey, we provide a detailed review of existing graph neural network models, systematically categorize the applications, and propose four open problems for future research.
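
A bare sketch of one message-passing round, the mechanism these models share: each node aggregates its neighbors' states and updates its own. This is plain mean aggregation, not GCN or GGNN specifically.

```python
import torch

# One message-passing step over a dense adjacency matrix.

def message_passing_step(h, adj, W):
    # h: (n_nodes, dim) node states; adj: (n_nodes, n_nodes) 0/1 adjacency
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    messages = adj @ h / deg                # mean of neighbor states
    return torch.relu((h + messages) @ W)   # combine with own state, transform

h = torch.randn(5, 8)                       # 5 nodes, 8-dim states
adj = (torch.rand(5, 5) > 0.5).float()      # random toy graph
W = torch.randn(8, 8)
h_next = message_passing_step(h, adj, W)    # updated node states
```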

In structure learning, the output is generally a structure that is used as supervision information to achieve good performance. Given that the interpretation of deep learning models has attracted increasing attention in recent years, it would be beneficial if we could learn an interpretable structure from deep learning models. In this paper, we focus on Recurrent Neural Networks (RNNs), whose inner mechanism is still not clearly understood. We find that the Finite State Automaton (FSA), which processes sequential data, has a more interpretable inner mechanism and can be learned from RNNs as such an interpretable structure. We propose two methods for learning an FSA from an RNN, based on two different clustering methods. We first give a graphical illustration of the FSA for humans to follow, which demonstrates its interpretability. From the FSA's point of view, we then analyze how the performance of RNNs is affected by the number of gates, as well as the semantic meaning behind the transitions of numerical hidden states. Our results suggest that RNNs with a simple gated structure, such as the Minimal Gated Unit (MGU), are more desirable, and that the transitions in the FSA leading to a specific classification result are associated with corresponding words that are understandable by human beings.
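
A hedged sketch of the hidden-state-clustering idea: cluster RNN hidden states into discrete FSA states and record the observed transitions between them. K-means stands in here purely for illustration; the paper's two clustering methods differ from this.

```python
import numpy as np
from sklearn.cluster import KMeans

# Extract an automaton from an RNN: cluster ids become FSA states, and each
# observed (state, input symbol) -> state move becomes a transition.

def extract_fsa(hidden_states, symbols, n_states=5):
    # hidden_states: list of (T, dim) arrays, one per input sequence
    # symbols:       list of length-T symbol sequences aligned with them
    all_h = np.concatenate(hidden_states)
    km = KMeans(n_clusters=n_states, n_init=10).fit(all_h)
    transitions = {}
    for h_seq, s_seq in zip(hidden_states, symbols):
        ids = km.predict(h_seq)
        for t in range(1, len(ids)):
            transitions[(ids[t - 1], s_seq[t])] = ids[t]  # state, input -> state
    return transitions
```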
