The Observation--Hypothesis--Prediction--Experimentation loop has guided scientific research toward discovery for centuries. However, with the explosion of data in both mega-scale and milli-scale scientific research, it has become increasingly difficult to manually analyze the data and propose new hypotheses to drive the cycle of scientific discovery. In this paper, we discuss the role of Explainable AI in the scientific discovery process by demonstrating an Explainable AI-based paradigm for scientific discovery. The key is to use Explainable AI to help derive data or model interpretations as well as scientific discoveries or insights. We show how computational and data-intensive methodology -- together with experimental and theoretical methodology -- can be seamlessly integrated for scientific research. To demonstrate the AI-based scientific discovery process, and to pay our respect to some of the greatest minds in human history, we show how Kepler's laws of planetary motion and Newton's law of universal gravitation can be rediscovered by (Explainable) AI from Tycho Brahe's astronomical observation data; the work of these scientists led the Scientific Revolution of the 16th and 17th centuries. This work also highlights the important role of Explainable AI (as compared to Blackbox AI) in scientific discovery, helping humans prevent, or better prepare for, a possible future technological singularity.
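
To make the rediscovery idea concrete, the following minimal sketch (our illustration, not the paper's actual pipeline) fits the exponent of Kepler's third law from approximate modern orbital elements for the six planets known in Kepler's time; a log-log least-squares fit recovers an exponent close to 3/2, i.e. $T^2 \propto a^3$. The data values and the plain regression are assumptions made purely for illustration.

```python
# Minimal sketch: recovering the exponent of Kepler's third law (T^2 ∝ a^3)
# from planetary data via a log-log least-squares fit. The orbital elements
# below are approximate modern values used purely for illustration; the
# paper's actual pipeline works from Tycho Brahe's observations with
# explainable AI rather than a hand-chosen regression.
import numpy as np

# Semi-major axis a (AU) and orbital period T (years) for the six planets
# known in Kepler's time (approximate values).
a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
T = np.array([0.241, 0.615, 1.000, 1.881, 11.862, 29.457])

# Fit log T = k * log a + c; Kepler's third law predicts k ≈ 1.5.
k, c = np.polyfit(np.log(a), np.log(T), 1)
print(f"fitted exponent k = {k:.3f}  (Kepler's third law: 1.5)")
```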

Related content

We show that density models describing multiple observables with (i) hard boundaries and (ii) dependence on external parameters may be created using an auto-regressive Gaussian mixture model. The model is designed to capture how observable spectra are deformed by hypothesis variations, and is made more expressive by projecting data onto a configurable latent space. It may be used as a statistical model for scientific discovery in interpreting experimental observations, for example when constraining the parameters of a physical model or tuning simulation parameters according to calibration data. The model may also be sampled for use within a Monte Carlo simulation chain, or used to estimate likelihood ratios for event classification. The method is demonstrated on simulated high-energy particle physics data considering the anomalous electroweak production of a $Z$ boson in association with a dijet system at the Large Hadron Collider, and the accuracy of inference is tested using a realistic toy example. The developed methods are domain agnostic; they may be used within any field to perform simulation or inference where a dataset consisting of many real-valued observables has conditional dependence on external parameters.
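
As an illustration of a conditional mixture density model of this general kind, the sketch below (an assumed architecture, not the authors' implementation) conditions a Gaussian mixture over one observable on external parameters and on previously modeled observables, which is the building block of an auto-regressive factorization; the network sizes, clamping bounds, and names are hypothetical.

```python
# Minimal sketch (assumed architecture, not the paper's code): a mixture
# density network whose Gaussian mixture over one observable x is conditioned
# on external parameters theta and on previously modeled observables x_prev,
# giving one auto-regressive factor p(x | x_prev, theta).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGMM(nn.Module):
    def __init__(self, cond_dim, n_components=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_components),  # mixture logits, means, log-scales
        )

    def forward(self, cond):
        logits, mu, log_sigma = self.net(cond).chunk(3, dim=-1)
        return logits, mu, log_sigma.clamp(-7.0, 7.0)

    def log_prob(self, x, cond):
        logits, mu, log_sigma = self.forward(cond)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_w = F.log_softmax(logits, dim=-1)
        return torch.logsumexp(log_w + comp.log_prob(x.unsqueeze(-1)), dim=-1)

# Usage: condition on [theta, x_prev] and maximize the log-likelihood.
model = ConditionalGMM(cond_dim=3)
cond = torch.randn(128, 3)   # e.g. two model parameters + one prior observable
x = torch.randn(128)         # the observable being modeled
loss = -model.log_prob(x, cond).mean()
loss.backward()
```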

Researchers increasingly explore deploying brain-computer interfaces (BCIs) for able-bodied users, motivated by accessing mental states more directly than existing body-mediated interaction allows. This motivation seems to contradict the long-standing HCI emphasis on embodiment, namely the general claim that the body is crucial for cognition. This paper addresses this apparent contradiction through a review of insights from embodied cognition and interaction. We first critically examine the recent interest in BCIs and identify the extent to which cognition in the brain is integrated with the wider body as a central concern for research. We then define the implications of an integrated view of cognition for interface design and evaluation. A counterintuitive conclusion we draw is that embodiment per se should not imply a preference for body movements over brain signals. Yet it can nevertheless guide research by 1) providing body-grounded explanations for BCI performance, 2) proposing evaluation considerations that are neglected in modular views of cognition, and 3) directly transferring its design insights to BCIs. We finally reflect on HCI's understanding of embodiment and identify the neural dimension of embodiment as hitherto overlooked.

Nash equilibrium (NE) is one of the most important solution concepts in game theory and has broad applications in machine learning research. While tremendous empirical success has been reported on learning approximate NE, whether NE is Probably Approximately Correct (PAC) learnable has not been addressed yet. In this paper, we make the first effort to study the PAC learnability of NE in normal-form games. We prove that NE is agnostic PAC learnable if the hypothesis class for Nash predictors has bounded covering numbers. Theoretically, we derive such a result by constructing a self-supervised PAC learning framework based on a commonly-used objective for Nash approximation. Empirically, we provide numerical evidence showing that a Nash predictor trained under our PAC learning framework, even with the most straightforward neural architecture, can efficiently learn the NE for games sampled from the same unknown distribution. Our results justify the feasibility of approximating NE through purely data-driven approaches, which benefits both game theorists and machine learning practitioners.
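
To illustrate what a self-supervised Nash predictor of this kind might look like, the sketch below (an assumed setup, not the paper's exact framework) trains a network that maps two-player payoff matrices to mixed strategies by minimizing the Nash-approximation gap, i.e. the sum of both players' best-response improvements, which requires no labeled equilibria; the game size, architecture, and training loop are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): a self-supervised Nash
# predictor for 2-player normal-form games. The network maps payoff matrices
# to mixed strategies, and the training signal is the Nash-approximation gap
# (sum of both players' best-response improvements), so no labels are needed.
import torch
import torch.nn as nn

N = 5  # actions per player (illustrative)

class NashPredictor(nn.Module):
    def __init__(self, n=N, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n * n, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * n),
        )

    def forward(self, A, B):
        z = self.net(torch.cat([A.flatten(1), B.flatten(1)], dim=1))
        x, y = z.chunk(2, dim=1)
        return x.softmax(dim=1), y.softmax(dim=1)  # predicted mixed strategies

def nash_gap(A, B, x, y):
    # Player 1 payoff x^T A y; player 2 payoff x^T B y.
    u1 = torch.einsum('bi,bij,bj->b', x, A, y)
    u2 = torch.einsum('bi,bij,bj->b', x, B, y)
    br1 = torch.einsum('bij,bj->bi', A, y).max(dim=1).values  # best response to y
    br2 = torch.einsum('bi,bij->bj', x, B).max(dim=1).values  # best response to x
    return (br1 - u1).clamp(min=0) + (br2 - u2).clamp(min=0)

model = NashPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    A, B = torch.randn(64, N, N), torch.randn(64, N, N)  # games from an (unknown) distribution
    x, y = model(A, B)
    loss = nash_gap(A, B, x, y).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```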

Recently, blockchain has gained momentum as a novel technology that gives rise to a plethora of new decentralized applications (e.g., the Internet of Things (IoT)). However, its integration with the IoT still faces several problems (e.g., scalability, flexibility). Provisioning resources to support a large number of connected IoT devices requires a scalable and flexible blockchain. To address these issues, we propose a scalable and trustworthy blockchain (STB) architecture suitable for the IoT, which uses blockchain sharding and oracles to establish trust among unreliable IoT devices in a fully distributed and trustworthy manner. In particular, we design a peer-to-peer oracle network that ensures data reliability, scalability, flexibility, and trustworthiness. Furthermore, we introduce a new lightweight consensus algorithm that scales the blockchain dramatically while ensuring interoperability among blockchain participants. The results show that our proposed STB architecture achieves flexibility, efficiency, and scalability, making it a promising solution for the IoT context.

Autonomous driving has achieved significant milestones in research and development over the last decade. There is increasing interest in the field, as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology from being socially acceptable. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are constructed in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on developing explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we present a taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for the architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency towards achieving public approval by regulators, manufacturers, and all engaged stakeholders.

This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized, and fitting the training data to zero error? In the first part of the thesis, we will empirically study how training deep networks via stochastic gradient descent implicitly controls the networks' capacity. Subsequently, to show how this leads to better generalization, we will derive {\em data-dependent} {\em uniform-convergence-based} generalization bounds with improved dependencies on the parameter count. Uniform convergence has in fact been the most widely used tool in the deep learning literature, thanks to its simplicity and generality. Given its popularity, in this thesis, we will also take a step back to identify the fundamental limits of uniform convergence as a tool to explain generalization. In particular, we will show that in some example overparameterized settings, {\em any} uniform convergence bound will provide only a vacuous generalization bound. With this realization in mind, in the last part of the thesis, we will change course and introduce an {\em empirical} technique to estimate generalization using unlabeled data. Our technique does not rely on any notion of uniform-convergence-based complexity and is remarkably precise. We will theoretically show why our technique enjoys such precision. We will conclude by discussing how future work could explore novel ways to incorporate distributional assumptions in generalization bounds (such as in the form of unlabeled data) and explore other tools to derive bounds, perhaps by modifying uniform convergence or by developing completely new tools altogether.
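
As a concrete illustration of estimating generalization from unlabeled data, the sketch below uses the disagreement rate between two independently trained copies of a model as a proxy for test error. This is a well-known style of unlabeled-data estimator shown only to convey the general idea; it is not claimed to be the thesis's specific technique, and the function and loader names are hypothetical.

```python
# Illustrative sketch only: estimating test error from unlabeled data via the
# disagreement rate of two independently trained copies of the same model.
# Shown as one example of the general idea of unlabeled-data estimators; it is
# not claimed to be the technique developed in the dissertation.
import torch

@torch.no_grad()
def disagreement_rate(model_a, model_b, unlabeled_loader, device="cpu"):
    """Fraction of unlabeled inputs on which two trained classifiers disagree."""
    disagree, total = 0, 0
    for x in unlabeled_loader:          # batches of inputs only, no labels
        x = x.to(device)
        pred_a = model_a(x).argmax(dim=1)
        pred_b = model_b(x).argmax(dim=1)
        disagree += (pred_a != pred_b).sum().item()
        total += x.size(0)
    return disagree / total             # used as a proxy for test error
```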

Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientific domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientific settings. But a significant challenge to this is simply knowing where to start. The sheer breadth and diversity of different deep learning techniques makes it difficult to determine what scientific problems might be most amenable to these methods, or which specific combination of methods might offer the most promising first approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph structured data, associated tasks and different training methods, along with techniques to use deep learning with less data and better interpret these complex models --- two central considerations for many scientific use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries and open-sourced deep learning pipelines and pretrained models, developed by the community. We hope that this survey will help accelerate the use of deep learning across different scientific domains.

Explainable recommendation attempts to develop models that generate not only high-quality recommendations but also intuitive explanations. The explanations may either be post-hoc or come directly from an explainable model (also called an interpretable or transparent model in some contexts). Explainable recommendation addresses the problem of why: by providing explanations to users or system designers, it helps humans understand why certain items are recommended by the algorithm. Explainable recommendation helps to improve the transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction of recommendation systems. In this survey, we review work on explainable recommendation published in or before 2019. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation from three perspectives: 1) we provide a chronological research timeline of explainable recommendation, including user study approaches in the early years and more recent model-based approaches; 2) we provide a two-dimensional taxonomy to classify existing explainable recommendation research, where one dimension is the information source (or display style) of the explanations and the other is the algorithmic mechanism to generate explainable recommendations; and 3) we summarize how explainable recommendation applies to different recommendation tasks, such as product recommendation, social recommendation, and POI recommendation. We also devote a section to discussing explanation perspectives in broader IR and AI/ML research. We end the survey by discussing potential future directions to promote the explainable recommendation research area and beyond.

Objects are made of parts, each with distinct geometry, physics, functionality, and affordances. Developing such a distributed, physical, interpretable representation of objects will facilitate intelligent agents to better explore and interact with the world. In this paper, we study physical primitive decomposition---understanding an object through its components, each with physical and geometric attributes. As annotated data for object parts and physics are rare, we propose a novel formulation that learns physical primitives by explaining both an object's appearance and its behaviors in physical events. Our model performs well on block towers and tools in both synthetic and real scenarios; we also demonstrate that visual and physical observations often provide complementary signals. We further present ablation and behavioral studies to better understand our model and contrast it with human performance.

Images account for a significant part of user decisions in many application scenarios, such as product images in e-commerce or user image posts in social networks. It is intuitive that user preferences for the visual patterns of an image (e.g., hue, texture, color) can be highly personalized, and this provides highly discriminative features for making personalized recommendations. Previous work that takes advantage of images for recommendation usually transforms the images into latent representation vectors, which are adopted by a recommendation component to assist personalized user/item profiling and recommendation. However, such vectors are hardly useful for providing visual explanations to users about why a particular item is recommended, which weakens the explainability of recommendation systems. As a step towards explainable recommendation models, we propose visually explainable recommendation based on attentive neural networks that model user attention on images, under the supervision of both implicit feedback and textual reviews. In this way, we can not only provide recommendation results to users, but also tell them why an item is recommended by providing intuitive visual highlights in a personalized manner. Experimental results show that our models not only improve recommendation performance but also provide persuasive visual explanations for users to accept the recommendations.
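
The sketch below illustrates one way such user-conditioned attention over image regions could be wired up (an assumed architecture, not the paper's exact model): attention weights over pre-extracted region features serve both to pool a visual representation for scoring and as the per-region highlights shown to the user. All dimensions, the dot-product attention form, and the class and parameter names are assumptions for illustration.

```python
# Minimal sketch (assumed architecture): user-conditioned attention over image
# region features. The attention weights can be rendered as heatmaps over the
# regions, serving as the "visual highlight" explanation; the pooled vector
# contributes to a preference score. Not the paper's exact model.
import torch
import torch.nn as nn

class VisualAttentionRecommender(nn.Module):
    def __init__(self, n_users, n_items, region_dim=512, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.proj = nn.Linear(region_dim, dim)   # project CNN region features

    def forward(self, user, item, regions):
        # regions: (batch, n_regions, region_dim) pre-extracted image features
        u = self.user_emb(user)                                          # (batch, dim)
        r = self.proj(regions)                                           # (batch, n_regions, dim)
        attn = torch.softmax((r @ u.unsqueeze(-1)).squeeze(-1), dim=1)   # (batch, n_regions)
        visual = (attn.unsqueeze(-1) * r).sum(dim=1)                     # attention-pooled image vector
        score = ((u + visual) * self.item_emb(item)).sum(dim=1)          # preference score
        return score, attn                    # attn doubles as the visual explanation

model = VisualAttentionRecommender(n_users=1000, n_items=5000)
score, attn = model(torch.tensor([3]), torch.tensor([42]),
                    torch.randn(1, 49, 512))  # e.g. a 7x7 CNN feature map as 49 regions
```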
