
Hybrid human-ML systems increasingly make consequential decisions in a wide range of domains. These systems are often introduced with the expectation that the combined human-ML system will achieve complementary performance, that is, the combined decision-making system will be an improvement compared with either decision-making agent in isolation. However, empirical results have been mixed, and existing research rarely articulates the sources and mechanisms by which complementary performance is expected to arise. Our goal in this work is to provide conceptual tools to advance the way researchers reason and communicate about human-ML complementarity. Drawing upon prior literature in human psychology, machine learning, and human-computer interaction, we propose a taxonomy characterizing distinct ways in which human and ML-based decision-making can differ. In doing so, we conceptually map potential mechanisms by which combining human and ML decision-making may yield complementary performance, developing a language for the research community to reason about the design of hybrid systems in any decision-making domain. To illustrate how our taxonomy can be used to investigate complementarity, we provide a mathematical aggregation framework to examine enabling conditions for complementarity. Through synthetic simulations, we demonstrate how this framework can be used to explore specific aspects of our taxonomy and shed light on the optimal mechanisms for combining human-ML judgments.
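The abstract does not spell out the aggregation framework, but a minimal sketch of one natural instance, a convex combination of a human judgment and a model score with a weight tuned on simulated data, shows how complementarity can be probed. The function names, error model, and the choice of a simple weighted average are illustrative assumptions, not the authors' actual framework.

```python
import numpy as np

def aggregate(human_scores, model_scores, alpha):
    """Convex combination of human and model judgments (illustrative only)."""
    return alpha * human_scores + (1.0 - alpha) * model_scores

rng = np.random.default_rng(0)
truth = rng.normal(size=5000)
# Simulate complementary error structures: the human and the model make
# noisy judgments whose errors are independent of one another.
human = truth + rng.normal(scale=0.8, size=truth.shape)
model = truth + rng.normal(scale=0.8, size=truth.shape)

def mse(pred):
    return float(np.mean((pred - truth) ** 2))

# Sweep the mixing weight and keep the best combined predictor.
alphas = np.linspace(0.0, 1.0, 21)
errors = [mse(aggregate(human, model, a)) for a in alphas]
best = alphas[int(np.argmin(errors))]

print(f"human alone : {mse(human):.3f}")
print(f"model alone : {mse(model):.3f}")
print(f"combined (alpha={best:.2f}): {min(errors):.3f}")
```

With independent errors of equal variance, the best weight lands near 0.5 and the combined error is roughly half of either agent's alone, a simple illustration of the kind of enabling condition the framework is meant to examine.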

Related Content

Taxonomy is the practice and science of classification. Wikipedia categories illustrate one kind of taxonomy, and a full taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that a manually constructed taxonomy, such as that of a computational lexicon like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "Car" might appear with both "Vehicle" and "Steel Structures" as parents, although to some this simply means that "Car" is part of several different taxonomies. A taxonomy might also simply organize things into groups, or take the form of an alphabetical list, though in that case the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification, the root node, which applies to all objects. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning thus progresses from the general to the more specific.
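As a concrete illustration of the tree structure described above, the following sketch (with hypothetical category names) builds a small hierarchical taxonomy with a root node and increasingly specific child classifications.

```python
from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """A classification that applies to a subset of the objects under its parent."""
    name: str
    children: list["TaxonomyNode"] = field(default_factory=list)

    def add(self, name: str) -> "TaxonomyNode":
        child = TaxonomyNode(name)
        self.children.append(child)
        return child

    def show(self, depth: int = 0) -> None:
        print("  " * depth + self.name)
        for child in self.children:
            child.show(depth + 1)

# Root node: the single classification that applies to all objects.
root = TaxonomyNode("Object")
vehicle = root.add("Vehicle")
vehicle.add("Car")
vehicle.add("Bicycle")
root.add("Building")
root.show()
```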


Deep reinforcement learning (RL) algorithms enable the development of fully autonomous agents that can interact with the environment. Brain-computer interface (BCI) systems decipher human implicit brain signals regardless of the explicit environment. In this study, we integrated deep RL and BCI to improve beneficial human interventions in autonomous systems and the performance in decoding brain activities by considering environmental factors. Shared autonomy was allowed between the action command decoded from the electroencephalography (EEG) of the human agent and the action generated from the twin delayed DDPG (TD3) agent for a given environment. Our proposed copilot control scheme with a full blocker (Co-FB) significantly outperformed the individual EEG (EEG-NB) or TD3 control. The Co-FB model achieved a higher target approaching score, lower failure rate, and lower human workload than the EEG-NB model. The Co-FB control scheme had a higher invisible target score and level of allowed human intervention than the TD3 model. We also proposed a disparity d-index to evaluate the effect of contradicting agent decisions on the control accuracy and authority of the copilot model. We found a significant correlation between the control authority of the TD3 agent and the performance improvement of human EEG classification with respect to the d-index. We also observed that shifting control authority to the TD3 agent improved performance when BCI decoding was not optimal. These findings indicate that the copilot system can effectively handle complex environments and that BCI performance can be improved by considering environmental factors. Future work should employ continuous action space and different multi-agent approaches to evaluate copilot performance.
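The abstract does not give implementation details of the copilot scheme; the sketch below shows one plausible reading of a "full blocker" arbitration rule for a discrete action space, in which the TD3 agent's action overrides the EEG-decoded action whenever the two disagree and the decoder's confidence is low. The confidence threshold, function names, and the simple disagreement statistic (loosely in the spirit of the d-index) are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def copilot_full_blocker(eeg_action, eeg_confidence, td3_action, threshold=0.6):
    """Illustrative arbitration: trust the EEG-decoded action only when the
    decoder is confident; otherwise defer to (block with) the TD3 action."""
    if eeg_action == td3_action or eeg_confidence >= threshold:
        return eeg_action
    return td3_action

# Toy episode: count how often the two agents disagree (a disparity measure).
rng = np.random.default_rng(1)
disagreements, steps = 0, 200
for _ in range(steps):
    eeg_a = int(rng.integers(0, 4))      # action decoded from EEG
    conf = float(rng.uniform())          # decoder confidence
    td3_a = int(rng.integers(0, 4))      # action proposed by the TD3 agent
    chosen = copilot_full_blocker(eeg_a, conf, td3_a)
    disagreements += int(eeg_a != td3_a)
print(f"disparity (fraction of disagreeing steps): {disagreements / steps:.2f}")
```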

Social recommendations have been widely adopted in many domains. Recently, graph neural networks (GNNs) have been employed in recommender systems due to their success in graph representation learning. However, dealing with the dynamic property of social network data is a challenge. This research presents a novel method that provides social recommendations by incorporating the dynamic property of social network data in a heterogeneous graph. The model aims to capture user preference over time without going through the complexities of a dynamic graph, by adding period nodes to define users' long-term and short-term preferences and aggregating assigned edge weights. The model is applied to real-world data to validate its superior performance, and promising results demonstrate its effectiveness.
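The idea of encoding temporal preference with period nodes rather than a fully dynamic graph can be sketched as below; the node types, the short-term/long-term granularity, and the edge-weight scheme are assumptions made only for illustration, not the paper's exact construction.

```python
# Minimal sketch: a heterogeneous graph stored as an edge list, where each
# user-item interaction is routed through a "period" node (e.g. short-term
# vs. long-term) and repeated interactions accumulate edge weight.
from collections import defaultdict

edges = defaultdict(float)  # (src, dst) -> aggregated edge weight

def add_interaction(user, item, period, weight=1.0):
    period_node = f"period:{period}"
    edges[(f"user:{user}", period_node)] += weight
    edges[(period_node, f"item:{item}")] += weight

add_interaction("u1", "i9", "short_term")
add_interaction("u1", "i9", "short_term")          # repeated: weight aggregates
add_interaction("u1", "i3", "long_term", weight=0.5)

for (src, dst), w in edges.items():
    print(f"{src} -> {dst}  weight={w}")
```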

In this study, we establish that deep neural networks employing ReLU and ReLU$^2$ activation functions are capable of representing Lagrange finite element functions of any order on simplicial meshes across arbitrary dimensions. We introduce a novel global formulation of the basis functions for Lagrange elements, grounded in a geometric decomposition of these elements and leveraging two essential properties of high-dimensional simplicial meshes and barycentric coordinate functions. This representation theory facilitates a natural approximation result for such deep neural networks. Our findings present the first demonstration of how deep neural networks can systematically generate general continuous piecewise polynomial functions.
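The simplest instance of the stated result is one space dimension and polynomial order one: the piecewise linear "hat" basis function on nodes a < b < c is exactly a combination of three ReLU units. The check below is an illustrative numerical verification of that special case only, not the paper's general construction for higher orders and dimensions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x, a, b, c):
    """P1 Lagrange 'hat' basis on nodes a < b < c, written with three ReLUs."""
    return (relu(x - a) / (b - a)
            - (1.0 / (b - a) + 1.0 / (c - b)) * relu(x - b)
            + relu(x - c) / (c - b))

def hat_reference(x, a, b, c):
    """Direct piecewise-linear definition of the same basis function."""
    return np.where((x >= a) & (x <= b), (x - a) / (b - a),
           np.where((x > b) & (x <= c), (c - x) / (c - b), 0.0))

x = np.linspace(-1.0, 3.0, 2001)
a, b, c = 0.0, 0.7, 2.0   # non-uniform nodes, chosen arbitrarily
print("max |difference| =", np.max(np.abs(hat(x, a, b, c) - hat_reference(x, a, b, c))))
```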

The fusion of causal models with deep learning, which introduces increasingly intricate data sets such as the causal associations within images or between textual components, has surfaced as a focal research area. Nonetheless, extending original causal concepts and theories to such complex, non-statistical data has been met with serious challenges. In response, our study proposes redefinitions of causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research sphere that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences manifested in datasets, resolution pathways, and the development of research. We summarize key tasks and achievements pertaining to definite and semi-definite data from myriad research undertakings, and present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

Face recognition technology has advanced significantly in recent years due largely to the availability of large and increasingly complex training datasets for use in deep learning models. These datasets, however, typically comprise images scraped from news sites or social media platforms and, therefore, have limited utility in more advanced security, forensics, and military applications. These applications require lower resolution, longer ranges, and elevated viewpoints. To meet these critical needs, we collected and curated the first and second subsets of a large multi-modal biometric dataset designed for use in the research and development (R&D) of biometric recognition technologies under extremely challenging conditions. Thus far, the dataset includes more than 350,000 still images and over 1,300 hours of video footage of approximately 1,000 subjects. To collect this data, we used Nikon DSLR cameras, a variety of commercial surveillance cameras, specialized long-range R&D cameras, and Group 1 and Group 2 UAV platforms. The goal is to support the development of algorithms capable of accurately recognizing people at ranges up to 1,000 m and from high angles of elevation. These advances will include improvements to the state of the art in face recognition and will support new research in the area of whole-body recognition using methods based on gait and anthropometry. This paper describes methods used to collect and curate the dataset, and the dataset's characteristics at the current stage.

Autonomic computing investigates how systems can achieve (user) specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to a resource ensemble (e.g., multiple resources within a data center), research into integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale continues to be a fundamental challenge. The integration of AI/ML to achieve such autonomic and self-management of systems can be achieved at different levels of granularity, from full to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing join to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next generation computing for emerging computing paradigms, including cloud, fog, edge, serverless and quantum computing environments.

Leveraging available datasets to learn a model with high generalization ability to unseen domains is important for computer vision, especially when the unseen domain's annotated data are unavailable. We study a novel and practical problem of Open Domain Generalization (OpenDG), which learns from different source domains to achieve high performance on an unknown target domain, where the distributions and label sets of each individual source domain and the target domain can be different. The problem setting applies to diverse source domains and is widely applicable to real-world scenarios. We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations. We augment domains at both the feature level, via a new Dirichlet mixup, and the label level, via distilled soft-labeling, which complements each domain with missing classes and other domain knowledge. We conduct meta-learning over domains by designing new meta-learning tasks and losses to preserve domain unique knowledge and generalize knowledge across domains simultaneously. Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
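One concrete ingredient above, feature-level domain augmentation via a Dirichlet mixup, can be sketched as follows: features and soft labels drawn from several source domains are mixed with weights sampled from a Dirichlet distribution. The concentration parameter and tensor shapes are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def dirichlet_mixup(features, soft_labels, alpha=1.0, rng=None):
    """Mix one sample from each of K source domains with Dirichlet weights.

    features:    array of shape (K, D) -- one feature vector per domain
    soft_labels: array of shape (K, C) -- one soft label per domain
    """
    rng = rng or np.random.default_rng()
    k = features.shape[0]
    w = rng.dirichlet(alpha * np.ones(k))           # mixing weights, sum to 1
    mixed_feature = (w[:, None] * features).sum(axis=0)
    mixed_label = (w[:, None] * soft_labels).sum(axis=0)
    return mixed_feature, mixed_label

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 16))                    # 3 source domains, 16-dim features
labels = rng.dirichlet(np.ones(5), size=3)          # soft labels over 5 classes
f_aug, y_aug = dirichlet_mixup(feats, labels, alpha=0.5, rng=rng)
print(f_aug.shape, y_aug.shape, y_aug.sum())        # (16,) (5,) 1.0
```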

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems, where data are collected from non-Euclidean domains and represented as the graph-structured data with high dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to the existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks for graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is proposed. Specifically, several classical paradigms of GNNs structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, main issues and some research trends about the applications of GNNs in power systems are discussed.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy, computation and memory intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically in regards to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
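To make the first category above concrete, the sketch below shows its two simplest forms: uniform 8-bit post-training quantization of a weight tensor and global magnitude pruning. Both are generic illustrative baselines under assumed settings, not any specific method surveyed in the paper.

```python
import numpy as np

def quantize_uint8(w):
    """Uniform affine quantization of a float tensor to 8 bits (and back)."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0                       # avoid zero scale
    q = np.round((w - lo) / scale).astype(np.uint8)        # stored as 1 byte per weight
    return q * scale + lo                                  # dequantized view for inference

def magnitude_prune(w, sparsity=0.5):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
w_q = quantize_uint8(w)
w_p = magnitude_prune(w, sparsity=0.9)
print("quantization error (max abs):", np.max(np.abs(w - w_q)))
print("pruned fraction:", np.mean(w_p == 0.0))
```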

User engagement is a critical metric for evaluating the quality of open-domain dialogue systems. Prior work has focused on conversation-level engagement by using heuristically constructed features such as the number of turns and the total time of the conversation. In this paper, we investigate the possibility and efficacy of estimating utterance-level engagement and define a novel metric, {\em predictive engagement}, for automatic evaluation of open-domain dialogue systems. Our experiments demonstrate that (1) human annotators have high agreement on assessing utterance-level engagement scores; (2) conversation-level engagement scores can be predicted from properly aggregated utterance-level engagement scores. Furthermore, we show that the utterance-level engagement scores can be learned from data. These scores can improve automatic evaluation metrics for open-domain dialogue systems, as shown by correlation with human judgements. This suggests that predictive engagement can be used as real-time feedback for training better dialogue models.
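A minimal sketch of the aggregation step described above: per-utterance engagement scores are combined into a conversation-level score. The mean aggregator and the example score values are assumptions for illustration; the mean is only one of several plausible aggregation choices.

```python
from statistics import mean

def conversation_engagement(utterance_scores, aggregator=mean):
    """Aggregate per-utterance engagement scores into a conversation-level score."""
    return aggregator(utterance_scores)

# Hypothetical per-utterance engagement scores in [0, 1] for one conversation.
scores = [0.2, 0.7, 0.9, 0.6]
print("conversation-level engagement:", round(conversation_engagement(scores), 3))
print("max-aggregated variant       :", conversation_engagement(scores, aggregator=max))
```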
