
Machine Learning (ML) research publications commonly provide open-source implementations on GitHub, allowing their audience to replicate, validate, or even extend machine learning algorithms, data sets, and metadata. However, thus far little is known about the degree of collaboration activity happening on such ML research repositories, in particular regarding (1) the degree to which such repositories receive contributions from forks, (2) the nature of such contributions (i.e., the types of changes), and (3) the nature of changes that are not contributed back by forks, which might represent missed opportunities. In this paper, we empirically study contributions to 1,346 ML research repositories and their 67,369 forks, both quantitatively and qualitatively (by building on Hindle et al.'s seminal taxonomy of code changes). We found that while ML research repositories are heavily forked, only 9% of the forks made modifications to the forked repository. Of these, 42% sent changes to the parent repositories, about half of which (52%) were accepted by the parent repositories. Our qualitative analysis of 539 contributed and 378 local (fork-only) changes extends Hindle et al.'s taxonomy with one new ML-related top-level change category (Data) and 15 new sub-categories, including nine ML-specific ones (input data, output data, program data, sharing, change evaluation, parameter tuning, performance, pre-processing, model training). While the changes that are not contributed back by the forks mostly concern domain-specific customizations and local experimentation (e.g., parameter tuning), the parent ML repositories do miss out on a non-negligible 15.4% of Documentation changes, 13.6% of Feature changes, and 11.4% of Bug fix changes. The findings in this paper will be useful for practitioners, researchers, toolsmiths, and educators.
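
As a hedged illustration of how such fork activity can be measured, the sketch below uses the public GitHub REST API to list a repository's forks and report those whose default branch is ahead of the parent, i.e., forks that made local modifications. This is an assumed reconstruction, not the authors' actual mining pipeline; GITHUB_TOKEN is an assumed environment variable holding a personal access token, octocat/Hello-World is a placeholder repository, and both default branches are assumed to be named main.

```python
import os
import requests

# Hedged sketch (not the authors' actual pipeline): list a repository's forks
# via the GitHub REST API and report the forks whose default branch is ahead
# of the parent, i.e. the forks that made local modifications.
# Assumptions: GITHUB_TOKEN holds a personal access token, both default
# branches are named "main", and one page of 100 forks is enough.
API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def forks_with_local_changes(owner: str, repo: str):
    """Yield (fork_full_name, commits_ahead) for forks ahead of the parent."""
    forks = requests.get(f"{API}/repos/{owner}/{repo}/forks",
                         headers=HEADERS, params={"per_page": 100}).json()
    for fork in forks:
        head = f"{fork['owner']['login']}:main"
        cmp = requests.get(f"{API}/repos/{owner}/{repo}/compare/main...{head}",
                           headers=HEADERS).json()
        if cmp.get("ahead_by", 0) > 0:
            yield fork["full_name"], cmp["ahead_by"]

# Placeholder repository for illustration:
for name, ahead in forks_with_local_changes("octocat", "Hello-World"):
    print(f"{name} is ahead by {ahead} commits")
```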

Related Content

Taxonomy is the practice and science of classification. Wikipedia's categories illustrate one such taxonomy, and a complete taxonomy of Wikipedia categories can be extracted by automatic means. As of 2009, it had been shown that manually constructed taxonomies, such as those of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy. In a broader sense, taxonomy also applies to relationship schemes other than parent-child hierarchies, such as network structures. A taxonomy may then include a single child with multiple parents; for example, "car" might appear under both "vehicles" and "steel structures". To some, however, this merely means that "car" is part of several different taxonomies. A taxonomy may also simply organize things into groups, or be an alphabetical list; here, though, the term "vocabulary" is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a wider variety of relation types. Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. At the top of this structure is a single classification that applies to all objects: the root node. Nodes below the root are more specific classifications that apply to subsets of the total set of classified objects. Reasoning proceeds from the general to the more specific.
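
To make the tree-versus-network distinction concrete, here is a minimal Python sketch of a taxonomy in which a node may have several parents (the "car" example above), so that all classifications applying to a node can be recovered by walking upward toward the root. All node names are illustrative.

```python
from collections import defaultdict

# Minimal sketch of the structure described above: a taxonomy whose nodes may
# have several parents ("car" under both "vehicles" and "steel structures"),
# generalizing a strict tree into a network. All node names are illustrative.
class Taxonomy:
    def __init__(self):
        self.parents = defaultdict(set)  # child -> set of parent categories

    def add(self, child: str, parent: str):
        self.parents[child].add(parent)

    def ancestors(self, node: str):
        """Every classification that applies to `node`, up to the root(s)."""
        seen, stack = set(), [node]
        while stack:
            for parent in self.parents[stack.pop()]:
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

t = Taxonomy()
t.add("car", "vehicles")
t.add("car", "steel structures")
t.add("vehicles", "things")           # "things" plays the role of the root
t.add("steel structures", "things")
print(t.ancestors("car"))             # {'vehicles', 'steel structures', 'things'}
```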


Recent advances in data-driven machine learning have revolutionized fields like computer vision, reinforcement learning, and many scientific and engineering domains. In many real-world and scientific problems, the systems that generate data are governed by physical laws. Recent work shows that machine learning models can benefit from incorporating such physical priors alongside the collected data, making the intersection of machine learning and physics a prevailing paradigm. In this survey, we present this learning paradigm, called Physics-Informed Machine Learning (PIML), which builds models that leverage both empirical data and available physical prior knowledge to improve performance on a set of tasks that involve a physical mechanism. We systematically review the recent development of physics-informed machine learning from three perspectives: machine learning tasks, representation of the physical prior, and methods for incorporating the physical prior. We also propose several important open research problems based on current trends in the field. We argue that encoding different forms of physical priors into model architectures, optimizers, inference algorithms, and significant domain-specific applications like inverse engineering design and robotic control is far from fully explored in the field of physics-informed machine learning. We believe that this study will encourage researchers in the machine learning community to actively participate in the interdisciplinary research of physics-informed machine learning.
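
As a minimal sketch of the core PIML idea, assuming a PyTorch setup and a toy ordinary differential equation du/dx = -u as the physical prior, the snippet below trains a network on a data-fit loss plus a physics-residual penalty evaluated at random collocation points. It is an illustrative example, not a method from the survey.

```python
import torch

# Toy PINN-style sketch (assumed example, not a method from the survey):
# fit u(x) to data while penalizing the residual of a known physical law,
# here the ODE du/dx = -u, evaluated at random collocation points.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_data = torch.linspace(0, 2, 20).reshape(-1, 1)
u_data = torch.exp(-x_data)                       # stand-in measurements
x_phys = torch.rand(100, 1, requires_grad=True)   # collocation points

for step in range(2000):
    opt.zero_grad()
    data_loss = ((net(x_data) - u_data) ** 2).mean()
    u = net(x_phys)
    du_dx = torch.autograd.grad(u.sum(), x_phys, create_graph=True)[0]
    phys_loss = ((du_dx + u) ** 2).mean()         # residual of du/dx = -u
    (data_loss + phys_loss).backward()
    opt.step()
```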

Multimodal machine learning is a vibrant multi-disciplinary research field that aims to design computer agents with intelligent capabilities such as understanding, reasoning, and learning through integrating multiple communicative modalities, including linguistic, acoustic, visual, tactile, and physiological messages. With the recent interest in video understanding, embodied autonomous agents, text-to-image generation, and multisensor fusion in application domains such as healthcare and robotics, multimodal machine learning has brought unique computational and theoretical challenges to the machine learning community, given the heterogeneity of data sources and the interconnections often found between modalities. However, the breadth of progress in multimodal research has made it difficult to identify the common themes and open questions in the field. By synthesizing a broad range of application domains and theoretical frameworks from both historical and recent perspectives, this paper is designed to provide an overview of the computational and theoretical foundations of multimodal machine learning. We start by defining two key principles, modality heterogeneity and interconnections, that have driven subsequent innovations, and propose a taxonomy of six core technical challenges covering historical and recent trends: representation, alignment, reasoning, generation, transference, and quantification. Recent technical achievements are presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches. We end by motivating several open problems for future research as identified by our taxonomy.
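
To give one concrete flavor of the alignment challenge, the toy sketch below contrastively aligns paired image and text embeddings with a symmetric InfoNCE loss, in the spirit of CLIP-style training. The random tensors stand in for real encoder outputs, and the temperature 0.07 is an assumed value.

```python
import torch
import torch.nn.functional as F

# Toy sketch of the "alignment" challenge (assumed example, not from the
# survey): contrastively align paired image and text embeddings with a
# symmetric InfoNCE loss, in the spirit of CLIP-style training. The random
# tensors stand in for encoder outputs; 0.07 is an assumed temperature.
img_emb = F.normalize(torch.randn(16, 64), dim=1)  # stand-in image features
txt_emb = F.normalize(torch.randn(16, 64), dim=1)  # stand-in text features

logits = img_emb @ txt_emb.T / 0.07                # pairwise similarities
targets = torch.arange(16)                         # i-th image pairs with i-th text
loss = (F.cross_entropy(logits, targets) +
        F.cross_entropy(logits.T, targets)) / 2
```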

Neural architecture-based recommender systems have achieved tremendous success in recent years. However, when dealing with highly sparse data, they still fall short of expectations. Self-supervised learning (SSL), an emerging technique for learning from unlabeled data, has recently drawn considerable attention in many fields. A growing body of research is also applying SSL to recommendation to mitigate the data sparsity issue. In this survey, we present a timely and systematic review of the research efforts on self-supervised recommendation (SSR). Specifically, we propose an exclusive definition of SSR, on top of which we build a comprehensive taxonomy that divides existing SSR methods into four categories: contrastive, generative, predictive, and hybrid. For each category, the narrative unfolds along its concept and formulation, the methods involved, and its pros and cons. Meanwhile, to facilitate the development and evaluation of SSR models, we release an open-source library, SELFRec, which incorporates multiple benchmark datasets and evaluation metrics and implements a number of state-of-the-art SSR models for empirical comparison. Finally, we shed light on the limitations of current research and outline future research directions.
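
As a toy illustration of the predictive SSR category (not SELFRec's actual API), the sketch below masks one item in each user's interaction sequence and trains a model to recover it, a self-supervised signal that requires no labels beyond the interactions themselves. All sizes and the reserved mask id are assumptions.

```python
import torch
import torch.nn as nn

# Toy sketch of the "predictive" SSR category (assumed example; this is not
# SELFRec's API): mask one item in each user's interaction sequence and train
# a model to recover it, a self-supervised signal that needs no labels beyond
# the interactions themselves. Sizes and the reserved mask id are assumptions.
NUM_ITEMS, SEQ_LEN, MASK = 1000, 5, 0    # item id 0 reserved as [MASK]

model = nn.Sequential(nn.Embedding(NUM_ITEMS, 32), nn.Flatten(),
                      nn.Linear(SEQ_LEN * 32, NUM_ITEMS))

seq = torch.randint(1, NUM_ITEMS, (8, SEQ_LEN))   # 8 users' interactions
pos = torch.randint(0, SEQ_LEN, (8,))             # one position per user
target = seq[torch.arange(8), pos]                # the hidden interaction
masked = seq.clone()
masked[torch.arange(8), pos] = MASK

loss = nn.functional.cross_entropy(model(masked), target)
```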

Recent years have witnessed rapid development in machine learning systems, especially in reinforcement learning, natural language processing, computer and robot vision, image processing, speech, and emotional processing and understanding. In tune with the increasing importance and relevance of machine learning models, algorithms, and their applications, and with the emergence of more innovative use cases of deep learning and artificial intelligence, the current volume presents a few innovative research works and their applications in the real world, such as stock trading, medical and healthcare systems, and software automation. The chapters in the book illustrate how machine learning and deep learning algorithms and models are designed, optimized, and deployed. The volume will be useful for advanced graduate and doctoral students, researchers, faculty members of universities, practicing data scientists and data engineers, professionals, and consultants working in the broad areas of machine learning, deep learning, and artificial intelligence.

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations for AI, methods, and machine learning are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of content at our own university.

Despite its great success, machine learning can have its limits when dealing with insufficient training data. A potential solution is the additional integration of prior knowledge into the training process which leads to the notion of informed machine learning. In this paper, we present a structured overview of various approaches in this field. We provide a definition and propose a concept for informed machine learning which illustrates its building blocks and distinguishes it from conventional machine learning. We introduce a taxonomy that serves as a classification framework for informed machine learning approaches. It considers the source of knowledge, its representation, and its integration into the machine learning pipeline. Based on this taxonomy, we survey related research and describe how different knowledge representations such as algebraic equations, logic rules, or simulation results can be used in learning systems. This evaluation of numerous papers on the basis of our taxonomy uncovers key methods in the field of informed machine learning.
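
As one hedged example of such integration, the sketch below encodes a simple rule-like prior, "the prediction must not decrease as feature 0 grows," as a soft penalty added to the usual data loss. The rule, network, and data are illustrative rather than drawn from the surveyed papers.

```python
import torch

# Hedged sketch of one integration route from the taxonomy (illustrative, not
# drawn from a surveyed paper): encode a prior rule, "the prediction must not
# decrease as feature 0 grows", as a soft penalty added to the data loss.
net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(),
                          torch.nn.Linear(16, 1))
x = torch.randn(64, 4)
y = x[:, :1] + 0.1 * torch.randn(64, 1)      # synthetic targets

x_hi = x.clone()
x_hi[:, 0] += 0.1                            # nudge feature 0 upward
violation = torch.relu(net(x) - net(x_hi))   # positive where the rule breaks
loss = ((net(x) - y) ** 2).mean() + violation.mean()
```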

This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), which are probably the most prominent machine learning models for large-scale data in the community over the past decade. We generalize the formulation of classification margins from classical research to the latest DNNs, summarize theoretical connections between the margin, network generalization, and robustness, and comprehensively introduce recent efforts to enlarge the margins of DNNs. Since different methods take discrepant viewpoints, we categorize them into groups for ease of comparison and discussion in the paper. We hope that our discussion and overview inspire new research in the community that aims to improve the performance of DNNs, and we also point to directions where the large margin principle could be verified, providing theoretical evidence for why certain regularizations of DNNs function well in practice. We have kept the paper concise so that the crucial ideas of large margin learning and related methods stand out.
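
For concreteness, the multiclass classification margin discussed above can be computed directly from network logits: for a sample (x, y), the margin is f_y(x) minus the largest score among the other classes, and positive values mean correct classification with room to spare. The snippet below is a small illustrative sketch with random stand-in logits.

```python
import torch

# Illustrative sketch with random stand-in logits: the multiclass margin of a
# sample (x, y) is f_y(x) - max_{k != y} f_k(x); positive margins mean correct
# classification with room to spare.
logits = torch.randn(8, 10)                  # stand-in network outputs
y = torch.randint(0, 10, (8,))

true_score = logits[torch.arange(8), y]
others = logits.clone()
others[torch.arange(8), y] = float("-inf")   # exclude the true class
margin = true_score - others.max(dim=1).values
```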

Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems and let ML transform the way computer architecture and systems are designed. This has a twofold meaning: improving designers' productivity, and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predictions of performance metrics or some other criterion of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies based on their target level of system, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a vision of future opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.
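
As a hedged sketch of ML-based modelling in this sense, the example below fits a regression model that predicts a performance metric (here, a synthetic "latency") from design-space features such as cache size, core count, and clock frequency, so that candidate configurations can be scored without running a slow simulator. The features, target formula, and model choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch of ML-based modelling (assumed example): learn to predict a
# performance metric, here a synthetic "latency", from design-space features
# such as cache size, core count, and clock frequency, so that candidate
# configurations can be scored without running a slow simulator.
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                         # [cache, cores, clock]
latency = 1.0 / (0.2 + X @ np.array([0.5, 1.0, 0.8]))  # synthetic ground truth

model = RandomForestRegressor(n_estimators=100).fit(X[:150], latency[:150])
print("held-out R^2:", model.score(X[150:], latency[150:]))
```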

The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computational power of computing machinery, there is a need to distribute the machine learning workload across multiple machines, turning centralized training into a distributed system. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the systems that are available.
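
A minimal sketch of the central parallelization pattern, synchronous data parallelism, is shown below: each simulated worker computes a gradient on its own data shard, and the gradients are averaged (as an all-reduce would do in a real cluster) before a single shared update. The linear model and random shards are illustrative.

```python
import numpy as np

# Minimal sketch of synchronous data parallelism (assumed example): each
# simulated worker computes a gradient of a least-squares loss on its own
# shard, and the gradients are averaged, as an all-reduce would do in a real
# cluster, before a single shared weight update.
rng = np.random.default_rng(0)
w = np.zeros(5)                                   # shared model weights
shards = [(rng.standard_normal((50, 5)), rng.standard_normal(50))
          for _ in range(4)]                      # one (X, y) shard per worker

for step in range(100):
    grads = [2 * X.T @ (X @ w - y) / len(y) for X, y in shards]
    w -= 0.01 * np.mean(grads, axis=0)            # averaged, synchronous update
```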

Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced, and a research problem is characterized as multimodal when it includes multiple such modalities. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research.
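
To ground the early-versus-late fusion categorization the paper goes beyond, the toy sketch below shows those two classic baselines side by side: early fusion concatenates modality features before a single joint model, while late fusion averages per-modality predictions. Feature sizes and modules are assumed for illustration.

```python
import torch
import torch.nn as nn

# Toy sketch of the two classic fusion baselines the taxonomy moves beyond
# (assumed example): early fusion concatenates modality features before one
# joint model; late fusion averages per-modality predictions. Feature sizes
# and modules are illustrative.
audio = torch.randn(8, 20)                        # stand-in audio features
video = torch.randn(8, 30)                        # stand-in visual features

early = nn.Linear(20 + 30, 2)                     # one model on the joint input
early_out = early(torch.cat([audio, video], dim=1))

audio_head, video_head = nn.Linear(20, 2), nn.Linear(30, 2)
late_out = (audio_head(audio) + video_head(video)) / 2
```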
