
With the increasing number of prediction models being created and deployed, and the growing complexity of machine learning workflows, so-called model management systems are needed to support data scientists in their tasks. In this work we describe our technological concept for such a model management system. This concept includes versioned storage of data, support for different machine learning algorithms, fine-tuning of models, subsequent deployment of models, and monitoring of model performance after deployment. We describe this concept with a close focus on model lifecycle requirements stemming from our industrial application cases, but generalize key features that are relevant for all applications of machine learning.
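To make the described lifecycle concrete, here is a minimal sketch of the registry interface such a system might expose, covering versioned registration, promotion to deployment, and post-deployment monitoring. The `ModelRegistry` class, its method names, and the staging/production stages are hypothetical illustrations, not the authors' implementation.

```python
import hashlib
import time
from dataclasses import dataclass, field


@dataclass
class ModelVersion:
    """One immutable registry entry: a model plus its training context."""
    name: str
    version: int
    data_hash: str          # ties the model to a versioned dataset snapshot
    params: dict            # hyperparameters used for training / fine-tuning
    metrics: dict = field(default_factory=dict)  # filled in by monitoring
    stage: str = "staging"  # staging -> production -> archived


class ModelRegistry:
    """Tracks versions, promotion to deployment, and live metrics (sketch)."""

    def __init__(self):
        self._store: dict[tuple[str, int], ModelVersion] = {}
        self._latest: dict[str, int] = {}

    def register(self, name: str, data: bytes, params: dict) -> ModelVersion:
        version = self._latest.get(name, 0) + 1
        entry = ModelVersion(name, version,
                             hashlib.sha256(data).hexdigest(), params)
        self._store[(name, version)] = entry
        self._latest[name] = version
        return entry

    def promote(self, name: str, version: int) -> None:
        """Deploy one version and archive the previous production model."""
        for (n, _), entry in self._store.items():
            if n == name and entry.stage == "production":
                entry.stage = "archived"
        self._store[(name, version)].stage = "production"

    def log_metric(self, name: str, version: int, metric: str, value: float):
        """Monitoring hook: record live performance after deployment."""
        self._store[(name, version)].metrics.setdefault(metric, []).append(
            (time.time(), value))


reg = ModelRegistry()
v1 = reg.register("churn", data=b"<training-set-bytes>", params={"lr": 0.01})
reg.promote("churn", v1.version)
reg.log_metric("churn", v1.version, "auc", 0.91)
```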

Related Content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. MODELS attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum for participants to exchange cutting-edge research results and innovative practical experiences around modeling and model-driven software and systems. This year's edition will give the modeling community further opportunities to advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
September 29, 2021

Disruptive changes in vehicles and transportation have been triggered by automated, connected, electrified, and shared mobility. Autonomous vehicles, like Internet data packets, are transported from one address to another through the road network. The Internet has become a general paradigm for network transmission, and the Energy Internet is a successful application of this paradigm to the field of energy. By introducing the Internet paradigm to the field of transportation, this paper is the first to propose the Transportation Internet. Based on this concept, fundamental models, such as switching, routing, and hierarchical models, are established to form basic theories; new architectures, such as transportation routers and software-defined transportation, are proposed to make transportation interconnected and open; and system verifications, such as prototyping and simulation, are carried out to demonstrate feasibility and advancement. The Transportation Internet, which is of far-reaching significance for science and industry, brings systematic breakthroughs in theory, architecture, and technology, opens up innovative research directions, and provides an Internet-like solution for the new generation of transportation.
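The paper's routing model is not reproduced here, but the Internet analogy can be illustrated: a vehicle is forwarded hop by hop across a road graph, with the next hop chosen by shortest-path cost, much like a router consulting a routing table. The following is only an illustrative sketch under that analogy; the `ROADS` graph and `next_hop` function are hypothetical, not the paper's models.

```python
import heapq

# Road network as an adjacency list: intersection -> [(neighbor, travel_cost)].
ROADS = {
    "A": [("B", 4), ("C", 2)],
    "B": [("A", 4), ("D", 5)],
    "C": [("A", 2), ("D", 8)],
    "D": [("B", 5), ("C", 8)],
}

def next_hop(src: str, dst: str):
    """Dijkstra over the road graph; returns the first intersection to
    forward a 'vehicle packet' toward, mimicking a routing-table lookup."""
    frontier = [(0, src, src)]  # (cost so far, node, first hop taken)
    best = {}                   # node -> first hop on the cheapest path
    while frontier:
        cost, node, first = heapq.heappop(frontier)
        if node in best:
            continue
        best[node] = first
        for nbr, w in ROADS[node]:
            if nbr not in best:
                heapq.heappush(frontier, (cost + w, nbr,
                                          nbr if node == src else first))
    return best.get(dst)

print(next_hop("A", "D"))  # -> 'B' (A->B->D costs 9, A->C->D costs 10)
```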

The technical debt (TD) metaphor describes actions taken during various stages of software development that lead to higher future costs of system maintenance and evolution. According to recent studies, on average 25% of development effort in software development organizations is spent, i.e., wasted, on issues caused by TD. However, further research is needed to investigate the relations between various software development activities and TD. The objective of this study is twofold. First, to gain empirical insight into the understanding and use of the TD concept in the IT industry. Second, to contribute towards a precise conceptualization of the TD concept through an analysis of its causes and effects. To address this objective, a family of surveys was designed as part of an international initiative that congregates researchers from 12 countries -- InsighTD. At the country level, national teams ran survey replications with industry practitioners from their respective countries. In total, 653 valid responses were collected from 6 countries. Regarding the prevalence of the TD concept, 22% of practitioners have only theoretical knowledge of it, and 47% have some practical experience with TD identification or management. Further analysis indicated that senior practitioners who work in larger organizations, larger teams, and on larger systems are more likely to be experienced with TD management. Time pressure or deadline was the single most cited cause of TD. Regarding the effects of TD, delivery delay, low maintainability, and rework were the most cited. InsighTD is the first family of surveys on technical debt in software engineering. It provides a methodological framework that allows multiple replication teams to conduct research activities and contribute to a single dataset. Future work will focus on more specific aspects of TD management.

Human-technology interaction research treats trust as an essential requirement for user acceptance. As applications of artificial intelligence (AI) and robotics emerge, and given their ever-growing socio-economic influence across fields of research and practice, there is a pressing need to study trust in such systems. Given the opaque working mechanisms of AI-based systems and the prospect of intelligent robots as workers' companions, context-specific interdisciplinary studies of trust are key to increasing their adoption. Through a thorough systematic literature review on (1) trust in AI and robotics (AIR) and (2) AIR applications in the architecture, engineering, and construction (AEC) industry, this study identifies common trust dimensions in the literature and uses them to organize the paper. Furthermore, the connections of the identified dimensions to existing and potential AEC applications are determined and discussed. Finally, major future directions for trustworthy AI and robotics in AEC research and practice are outlined.

Transformer-based pretrained language models (T-PTLMs) have achieved great success in almost every NLP task. The evolution of these models started with GPT and BERT. These models are built on top of transformers, self-supervised learning, and transfer learning. T-PTLMs learn universal language representations from large volumes of text data using self-supervised learning and transfer this knowledge to downstream tasks. These models provide good background knowledge for downstream tasks, which avoids training downstream models from scratch. In this comprehensive survey paper, we initially give a brief overview of self-supervised learning. Next, we explain various core concepts like pretraining, pretraining methods, pretraining tasks, embeddings, and downstream adaptation methods. Next, we present a new taxonomy of T-PTLMs and then give a brief overview of various benchmarks, both intrinsic and extrinsic. We present a summary of various useful libraries for working with T-PTLMs. Finally, we highlight some future research directions that will further improve these models. We strongly believe that this comprehensive survey paper will serve as a good reference to learn the core concepts as well as to stay updated with recent developments in T-PTLMs.
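The pretrain-then-adapt pattern the survey describes can be sketched in a few lines with the Hugging Face `transformers` library: a pretrained encoder supplies the universal language representation, and only a small task head is trained downstream. The model name, toy batch, and single training step below are placeholder assumptions for illustration, not code from the survey.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # pretrained encoder + fresh task head

batch = tokenizer(["great movie", "terrible plot"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

# One downstream adaptation step: only this loop is task-specific; the
# universal representation comes from self-supervised pretraining.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()
```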

The attention model has become an important concept in neural networks and has been researched within diverse application domains. This survey provides a structured and comprehensive overview of developments in modeling attention. In particular, we propose a taxonomy that groups existing techniques into coherent categories. We review salient neural architectures in which attention has been incorporated and discuss applications in which modeling attention has had a significant impact. Finally, we describe how attention has been used to improve the interpretability of neural networks. We hope this survey will provide a succinct introduction to attention models and guide practitioners in developing approaches for their applications.
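As a common reference point for such a taxonomy, the scaled dot-product attention of the Transformer is the formulation most surveyed techniques build on. Below is a generic NumPy rendering of that textbook computation (not code from the survey); the returned weights are also the quantity typically inspected for interpretability.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Canonical attention: softmax(Q K^T / sqrt(d_k)) V.
    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # alignment of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    # The weights double as an interpretability signal: they show which
    # inputs each query attended to.
    return weights @ V, weights

Q = np.random.randn(2, 8)
K = np.random.randn(5, 8)
V = np.random.randn(5, 16)
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)  # (2, 16) (2, 5)
```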

In recent years, the fields of natural language processing (NLP) and information retrieval (IR) have made tremendous progress thanks to deep learning models like Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks, and Transformer-based models like Bidirectional Encoder Representations from Transformers (BERT). But these models are enormous. Real-world applications, on the other hand, demand small model sizes, low response times, and low power consumption. In this survey, we discuss six types of methods (Pruning, Quantization, Knowledge Distillation, Parameter Sharing, Tensor Decomposition, and Linear-Transformer-based methods) for compressing such models to enable their deployment in real industry NLP projects. Given the critical need for building applications with efficient and small models, and the large amount of recently published work in this area, we believe that this survey organizes the plethora of work done by the 'deep learning for NLP' community in the past few years and presents it as a coherent story.
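Two of the six method families are simple enough to show in a few lines. The sketch below illustrates magnitude pruning and post-training int8 quantization in NumPy; it is a generic illustration of the usual textbook formulations, not code from the survey.

```python
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Pruning: zero out the smallest-magnitude weights (a 'sparsity'
    fraction of them), shrinking the effective model."""
    k = int(w.size * sparsity)
    threshold = np.partition(np.abs(w).ravel(), k)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w: np.ndarray):
    """Quantization: map float32 weights to int8 plus one scale factor,
    cutting storage roughly 4x at some cost in precision."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(4, 4).astype(np.float32)
sparse_w = magnitude_prune(w, sparsity=0.5)
q, scale = quantize_int8(w)
print((sparse_w == 0).mean())       # ~0.5 of the weights removed
print(np.abs(w - q * scale).max())  # quantization error, at most ~scale/2
```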

In recent years, mobile devices have developed rapidly, gaining stronger computation capability and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and to preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, instead of original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is reduced bandwidth, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies on mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe that this survey gives a clear overview of mobile distributed machine learning and provides guidelines for applying it to real applications.
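The upload-results-not-data pattern is easiest to see in a federated-averaging-style round: each device runs a few local optimization steps on its private data and sends back only model weights, which the server averages. The toy linear-regression setup below is a minimal sketch of that pattern; the function names, data, and hyperparameters are assumptions for illustration.

```python
import numpy as np

def local_update(global_w, X, y, lr=0.1, steps=10):
    """One device: a few SGD steps of linear regression on *local* data.
    Only the resulting weights leave the device, never X or y."""
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """Server: aggregate device results weighted by local dataset size
    (the FedAvg-style averaging step)."""
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    updates = np.stack([local_update(global_w, X, y) for X, y in devices])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five phones, each with its own private data shard
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, devices)
print(w)  # approaches [2, -1] without raw data ever leaving a device
```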

Deep reinforcement learning is the combination of reinforcement learning (RL) and deep learning. This field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. Thus, deep RL opens up many new applications in domains such as healthcare, robotics, smart grids, finance, and many more. This manuscript provides an introduction to deep reinforcement learning models, algorithms and techniques. Particular focus is on the aspects related to generalization and how deep RL can be used for practical applications. We assume the reader is familiar with basic machine learning concepts.
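Since the manuscript assumes familiarity with basic machine learning rather than RL, the core update is worth making explicit. Tabular Q-learning below shows the Bellman backup on a toy corridor environment; deep RL replaces the table `Q[s, a]` with a neural network trained by SGD on the same target. The environment and hyperparameters are illustrative assumptions, not taken from the manuscript.

```python
import numpy as np

# A 5-state corridor: move left/right, reward 1 on reaching the right end.
N_STATES, GAMMA, ALPHA, EPS = 5, 0.9, 0.5, 0.1
Q = np.zeros((N_STATES, 2))  # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)

for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy exploration, then one Bellman backup.
        a = rng.integers(2) if rng.random() < EPS else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # This is the update deep RL approximates with a neural network.
        Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])
        s = s2

# Learned policy: "go right" in every non-terminal state.
print(Q[:N_STATES - 1].argmax(axis=1))  # [1 1 1 1]
```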

Machine learning models are becoming increasingly proficient at complex tasks. However, even for experts in the field, it can be difficult to understand what a model has learned. This hampers trust and acceptance, and it obstructs the possibility of correcting the model. There is therefore a need for transparency in machine learning models. The development of transparent classification models has received much attention, but there are few developments toward transparent Reinforcement Learning (RL) models. In this study, we propose a method that enables an RL agent to explain its behavior in terms of the expected consequences of state transitions and outcomes. First, we define a translation of states and actions to a description that is easier for human users to understand. Second, we develop a procedure that enables the agent to obtain the consequences of a single action, as well as of its entire policy. The method calculates contrasts between the consequences of a policy derived from a user query and those of the agent's learned policy. Third, we construct a format for generating explanations. A pilot survey study was conducted to explore users' preferences for different explanation properties. Results indicate that human users tend to favor explanations about the policy rather than about single actions.
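The contrastive step can be sketched as follows: roll out both the learned policy and a policy implied by the user's query in the same environment, translate the outcomes into human-readable descriptions, and report where and how they diverge. Everything below (the environment, the description table, and the two policies) is a hypothetical illustration of that idea, not the authors' implementation.

```python
DESCRIBE = {0: "stay in the safe corridor", 1: "cut through the hazard zone"}

def rollout(policy, env_step, s0, horizon=20):
    """Collect the consequences (total reward, outcome labels) of
    following `policy` from state s0."""
    s, total, outcomes = s0, 0.0, []
    for _ in range(horizon):
        a = policy(s)
        s, r, outcome = env_step(s, a)
        total += r
        outcomes.append(outcome)
    return total, outcomes

def explain_contrast(agent_policy, user_policy, env_step, s0):
    """Contrast the learned policy with the user-queried policy."""
    agent_ret, agent_out = rollout(agent_policy, env_step, s0)
    user_ret, user_out = rollout(user_policy, env_step, s0)
    first_diff = next((i for i, (a, u) in enumerate(zip(agent_out, user_out))
                       if a != u), None)
    return (f"My policy yields return {agent_ret:.1f} vs {user_ret:.1f} for "
            f"yours; they first diverge at step {first_diff}, where I "
            f"{DESCRIBE[agent_policy(s0)]} instead of "
            f"{DESCRIBE[user_policy(s0)]}.")

def env_step(s, a):
    # Action 1 is a risky shortcut: it advances faster but is penalized.
    if a == 1:
        return s + 2, -1.0, "hazard"
    return s + 1, 0.5, "safe"

agent = lambda s: 0  # learned policy: avoid the hazard
user = lambda s: 1   # user query: "why not always take the shortcut?"
print(explain_contrast(agent, user, env_step, s0=0))
```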

Multi-Agent Systems (MAS) are an active area of research within Artificial Intelligence, with an increasingly important impact on industrial and other real-world applications. Within a MAS, autonomous agents interact to pursue personal interests and/or to achieve common objectives. Distributed Constraint Optimization Problems (DCOPs) have emerged as one of the prominent agent architectures for governing agents' autonomous behavior, where both algorithms and communication models are driven by the structure of the specific problem. During the last decade, several extensions to the DCOP model have enabled it to support MAS in complex, real-time, and uncertain environments. This survey provides an overview of the DCOP model, gives a classification of its multiple extensions, and addresses both resolution methods and applications that find a natural mapping within each class of DCOPs. The proposed classification suggests several future perspectives for DCOP extensions and identifies challenges in the design of efficient resolution algorithms, possibly through the adaptation of strategies from different areas.
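To ground the model: a DCOP assigns a variable to each agent and a cost to each constraint over neighboring agents' variables, and agents coordinate locally to minimize total cost. The toy below runs a DSA-style (Distributed Stochastic Algorithm) local search on a three-agent "not equal" chain; the problem data, move probability, and sequential update order are simplifying assumptions for illustration.

```python
import random

AGENTS = ["a1", "a2", "a3"]
NEIGHBORS = {"a1": ["a2"], "a2": ["a1", "a3"], "a3": ["a2"]}

def cost(x, y):
    return 1 if x == y else 0  # graph-coloring-style "not equal" constraint

def local_cost(agent, value, assignment):
    """Cost of this agent's constraints given its neighbors' values."""
    return sum(cost(value, assignment[n]) for n in NEIGHBORS[agent])

random.seed(0)
assignment = {a: random.randint(0, 1) for a in AGENTS}
for _ in range(20):  # each round, agents react to neighbors' current values
    for a in AGENTS:
        best = min((0, 1), key=lambda v: local_cost(a, v, assignment))
        if (local_cost(a, best, assignment)
                < local_cost(a, assignment[a], assignment)
                and random.random() < 0.7):  # DSA: move with probability p
            assignment[a] = best

total = sum(local_cost(a, assignment[a], assignment) for a in AGENTS) // 2
print(assignment, total)  # total cost 0 once the chain is solved
```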
