
Topic: Directions for Explainable Knowledge-Enabled Systems

Abstract: Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.

Related Content

Topic: Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

Abstract: Deep neural networks are now widely used in mission-critical systems such as healthcare, self-driving vehicles, and the military, where they have a direct impact on human lives. However, the black-box nature of deep neural networks challenges their use in these mission-critical applications, raising ethical and judicial concerns that lead to a lack of trust. Explainable Artificial Intelligence (XAI) is a field of AI that promotes a set of tools, techniques, and algorithms capable of generating high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In addition to providing a holistic view of the current XAI landscape in deep learning, this paper provides mathematical summaries of seminal work. It begins by proposing a taxonomy and categorizing XAI techniques by their scope of explanation, the methodology behind the algorithms, and their explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models. It then describes the main principles used in XAI research and presents a historical timeline of landmark XAI studies from 2007 to 2020. After explaining each category of algorithms and approaches in detail, the paper evaluates the explanation maps generated by eight XAI algorithms on image data, discusses the limitations of this approach, and offers potential future directions for improving XAI evaluation.
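
As a rough illustration of the kind of technique such surveys cover, below is a minimal sketch of a gradient-based saliency map, one of the simplest explanation-map methods for image classifiers; the PyTorch model and input here are stand-ins for illustration, not the paper's own setup.

```python
# Minimal gradient-based saliency sketch (illustrative model and input).
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in for a trained CNN classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input
logits = model(image)
logits[0, logits.argmax()].backward()  # gradient of the top-class score

# Saliency: per-pixel gradient magnitude, max over color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # (32, 32) heat map over the input pixels
```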

Title: Knowledge Graph Embeddings and Explainable AI

Abstract: Knowledge graph embeddings are a widely adopted approach to knowledge representation in which entities and relations are embedded in vector spaces. In this chapter, we introduce the reader to the concept of knowledge graph embeddings by explaining what they are, how they can be generated, and how they can be evaluated. We summarize the state of the art in this field and introduce approaches to representing knowledge in vector spaces. In connection with knowledge representation, we consider the problem of explainability and discuss models and methods for explaining predictions made with knowledge graph embeddings.
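
As a concrete illustration of the embedding idea described above, here is a minimal sketch of TransE-style scoring, one well-known knowledge graph embedding approach (the chapter covers many); the entities, relation, and vectors below are invented and untrained.

```python
# TransE sketch: a triple (head, relation, tail) is plausible when
# head + relation is close to tail in the vector space.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entities = {e: rng.normal(size=dim) for e in ["Paris", "France", "Tokyo"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """Lower score = more plausible triple under TransE."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

# Vectors are untrained, so scores are arbitrary; training would minimize
# the score of true triples like (Paris, capital_of, France) against
# corrupted ones such as (Tokyo, capital_of, France).
print(transe_score("Paris", "capital_of", "France"))
print(transe_score("Tokyo", "capital_of", "France"))
```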

Overview: Interest in explainable artificial intelligence has been growing in recent years. As AI models become more complex and more opaque, explainability becomes increasingly important. Recently, researchers have been studying and addressing explainability with a user-centric focus, looking for explanations that are trustworthy, comprehensible, explicitly sourced, and context-aware. In this paper, we survey the explanation literature in AI and related fields and draw on this past work to generate a set of explanation types. We define each type and provide an example question that illustrates the need for that style of explanation. We believe this set of explanation types will help future system designers elicit and prioritize requirements, and further help generate explanations better aligned with users' and situational needs.

Introduction

The field of Artificial Intelligence (AI) has evolved from purely symbolic, logic-based expert systems to hybrid systems that employ statistical as well as logical reasoning techniques. Progress in explainable AI has been closely tied to the evolution of AI methods, such as the categories covered in our earlier paper, "Foundations of Explainable Knowledge-Enabled Systems," which spans expert systems, Semantic Web approaches, cognitive assistants, and machine learning methods. We note that these approaches mainly address particular aspects of explainability. For example, explanations produced by expert systems were chiefly meant to expose reasoning traces, provenance, and justifications; explanations provided by cognitive assistants can adapt their form to users' needs; and in both the machine learning and expert systems domains, explanations provide an "intuition" for how a model functions.
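
To make the notion of a reasoning trace concrete, here is a toy forward-chaining rule engine that records which rules fired; the rules and facts are invented, and this is only a sketch of the style of explanation classic expert systems offered, not any particular system.

```python
# Toy forward-chaining inference with a recorded trace (invented rules).
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]
facts = {"has_fever", "has_cough"}
trace = []

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            trace.append(f"{sorted(premises)} => {conclusion}")
            changed = True

print("\n".join(trace))  # the fired-rule trace itself is the explanation
```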

Title: Foundations of Explainable Knowledge-Enabled Systems

Abstract:

Explainability has been an important goal since the early days of artificial intelligence, and several approaches for producing explanations have been proposed over the years. However, many of these approaches were tightly coupled to the capabilities of the AI systems of their time. With the proliferation of AI-enabled systems, sometimes operating in critical settings, there is a need for them to be explainable to end users and decision makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying the gaps needed to make explanations user- and context-centric, we propose new definitions for explanations and explainable knowledge-enabled systems.

Title: A Survey on Knowledge Graph-Based Recommender Systems

Abstract:

To solve the information explosion problem and enhance user experience in various online applications, recommender systems have been developed to model users' preferences. Although numerous efforts have been made toward more personalized recommendations, recommender systems still suffer from several challenges, such as data sparsity and cold start. In recent years, generating recommendations with the knowledge graph as side information has attracted considerable interest. Such an approach can not only alleviate the abovementioned issues for a more accurate recommendation, but also provide explanations for recommended items. In this paper, we conduct a systematic survey of knowledge graph-based recommender systems. We collect recently published papers in this field and summarize them from two perspectives. On the one hand, we investigate the proposed algorithms by focusing on how the papers utilize the knowledge graph for accurate and explainable recommendation. On the other hand, we introduce the datasets used in these works. Finally, we propose several potential research directions in this field.
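
As a minimal sketch of the general pattern the surveyed systems share, the following blends a collaborative-filtering score with a knowledge-graph item embedding as side information; all names, dimensions, and the blending weight `alpha` are illustrative assumptions, not a method from the survey.

```python
# KG embeddings as side information in a recommendation score (sketch).
import numpy as np

rng = np.random.default_rng(1)
dim = 16
user_vec = rng.normal(size=dim)      # learned from user interactions
item_cf_vec = rng.normal(size=dim)   # collaborative-filtering item factor
item_kg_vec = rng.normal(size=dim)   # pretrained KG embedding of the item

def score(user, item_cf, item_kg, alpha=0.5):
    """Blend the interaction-based and knowledge-based signals."""
    return user @ item_cf + alpha * (user @ item_kg)

print(score(user_vec, item_cf_vec, item_kg_vec))
# Because item_kg_vec comes from a knowledge graph, the entities and
# relations linking the user's history to this item can double as a
# path-style explanation for the recommendation.
```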

Explainable recommendation attempts to develop models that generate not only high-quality recommendations but also intuitive explanations. The explanations may be post-hoc or may come directly from an explainable model (also called an interpretable or transparent model in some contexts). Explainable recommendation addresses the problem of why: by providing explanations, it helps humans, whether end users or system designers, understand why certain items are recommended by the algorithm. Explainable recommendation helps improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommender systems.

In this survey, we review work on explainable recommendation published in or before 2019. We first position explainable recommendation within recommender systems research by categorizing recommendation problems into the 5Ws: what, when, who, where, and why. We then survey explainable recommendation from three perspectives: 1) we provide a chronological research timeline of explainable recommendation, from early user-study approaches to more recent model-based approaches; 2) we provide a two-dimensional taxonomy for classifying existing explainable recommendation research, where one dimension is the information source (or display style) of the explanations and the other is the algorithmic mechanism for generating them; 3) we summarize how explainable recommendation applies to different recommendation tasks, such as product recommendation, social recommendation, and POI recommendation. We also devote a section to the explanation perspectives in broader IR and AI/ML research. Finally, we discuss potential future directions for the explainable recommendation research area.
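
To make the post-hoc category concrete, here is a minimal sketch of a template-based explanation that justifies a recommendation via the most similar item in the user's history; the items, vectors, and template are invented for illustration, not taken from the survey.

```python
# Post-hoc, template-based explanation sketch (invented data).
import numpy as np

item_vecs = {
    "wireless_mouse": np.array([0.9, 0.1, 0.0]),
    "usb_keyboard":  np.array([0.8, 0.2, 0.1]),
    "desk_lamp":     np.array([0.0, 0.1, 0.9]),
}
history = ["usb_keyboard", "desk_lamp"]
recommended = "wireless_mouse"

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Anchor the explanation on the history item most similar to the
# recommended one, then fill a fixed natural-language template.
anchor = max(history, key=lambda h: cosine(item_vecs[h], item_vecs[recommended]))
print(f"Recommended {recommended} because you bought {anchor}.")
```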

Images account for a significant part of user decisions in many application scenarios, such as product images in e-commerce or user image posts in social networks. It is intuitive that user preferences on the visual patterns of an image (e.g., hue, texture, and color) can be highly personalized, and this provides highly discriminative features for making personalized recommendations. Previous work that takes advantage of images for recommendation usually transforms the images into latent representation vectors, which are adopted by a recommendation component to assist personalized user/item profiling and recommendation. However, such vectors are hardly useful for providing visual explanations to users about why a particular item is recommended, which weakens the explainability of recommendation systems. As a step toward explainable recommendation models, we propose visually explainable recommendation based on attentive neural networks to model user attention on images, under the supervision of both implicit feedback and textual reviews. In this way, we can not only provide recommendation results to users, but also tell them why an item is recommended, by providing intuitive visual highlights in a personalized manner. Experimental results show that our models not only improve recommendation performance but also provide persuasive visual explanations for users to adopt the recommendations.
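
The following is a minimal sketch of the user-conditioned attention over image regions that this style of model builds on; shapes and names are illustrative, and the actual model is also supervised by textual reviews, which this sketch omits.

```python
# User-conditioned attention over image regions (illustrative shapes).
import torch
import torch.nn.functional as F

regions = torch.rand(49, 256)  # 7x7 CNN feature map, flattened to 49 regions
user = torch.rand(256)         # user embedding

attn_logits = regions @ user             # relevance of each region to the user
weights = F.softmax(attn_logits, dim=0)  # attention over the 49 regions
image_repr = weights @ regions           # user-specific image representation

score = image_repr @ user                # preference score for this image/item
# `weights`, reshaped to 7x7, gives the personalized visual highlight
# that serves as the explanation.
print(score.item(), weights.argmax().item())
```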
