
Software citation has accelerated in astrophysics in the past decade, resulting in the field now having multiple trackable ways to cite computational methods. Yet most software authors do not specify how they would like their code to be cited, while others specify a citation method that is not easily tracked (or tracked at all) by most indexers. Two metadata file formats, codemeta.json and CITATION.cff, developed in 2016 and 2017 respectively, are useful for specifying how software should be cited. In 2020, the Astrophysics Source Code Library (ASCL, ascl.net) undertook a year-long effort to generate and send these software metadata files, specific to each computational method, to code authors for editing and inclusion on their code sites. We wanted to answer the question, "Would sending these files to software authors increase adoption of one, the other, or both of these metadata files?" The answer in this case was no. Furthermore, only 41% of the 135 code sites examined for use of these files had citation information in any form available. The lack of such information creates an obstacle for article authors to provide credit to software creators, thus hindering citation of and recognition for computational contributions to research and the scientists who develop and maintain software.
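
For readers unfamiliar with the two formats, the sketch below writes minimal examples of both a codemeta.json and a CITATION.cff file for a hypothetical code; every name, URL, and value is a placeholder, not taken from the ASCL effort or any real project.

```python
# Minimal sketch: emit skeleton codemeta.json and CITATION.cff files for a
# hypothetical code called "ExampleCode"; all field values are placeholders.
import json

codemeta = {
    "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
    "@type": "SoftwareSourceCode",
    "name": "ExampleCode",
    "author": [{"@type": "Person",
                "givenName": "Jane", "familyName": "Doe"}],
    "codeRepository": "https://example.org/ExampleCode",
    "version": "1.0.0",
}

citation_cff = """\
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "ExampleCode"
version: "1.0.0"
authors:
  - family-names: "Doe"
    given-names: "Jane"
repository-code: "https://example.org/ExampleCode"
"""

with open("codemeta.json", "w") as f:
    json.dump(codemeta, f, indent=2)
with open("CITATION.cff", "w") as f:
    f.write(citation_cff)
```

Either file, placed in the root of a code repository, gives article authors and indexers a machine-readable statement of how the software should be cited.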

Related content

Non-volatile memory (NVM), also known as persistent memory, is an emerging paradigm for memory that preserves its contents even after power loss. NVM is widely expected to become ubiquitous, and hardware architectures are already providing support for NVM programming. This has stimulated interest in the design of novel concepts ensuring correctness of concurrent programming abstractions in the face of persistency and in the development of associated verification approaches. Software transactional memory (STM) is a key programming abstraction that supports concurrent access to shared state. In a fashion similar to linearizability as the correctness condition for concurrent data structures, there is an established notion of correctness for STMs known as opacity. We have recently proposed durable opacity as the natural extension of opacity to a setting with non-volatile memory. Together with this novel correctness condition, we designed a verification technique based on refinement. In this paper, we extend this work in two directions. First, we develop a durably opaque version of NOrec (no ownership records), an existing STM algorithm proven to be opaque. Second, we modularise our existing verification approach by separating the proof of durability of memory accesses from the proof of opacity. For NOrec, this allows us to re-use an existing opacity proof and complement it with a proof of the durability of accesses to shared state.
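
To make the setting more concrete, here is a simplified Python sketch of the NOrec idea the paper builds on: a single global sequence lock, buffered writes, and value-based read-set validation. It deliberately omits the durability (persistency) machinery, glosses over some concurrency subtleties, and is not the durably opaque NOrec developed in the paper.

```python
# Simplified, volatile-memory sketch of a NOrec-style STM: one global
# sequence number guarded by a commit lock, buffered writes, and value-based
# read-set validation. Durability (persistent-memory flushes) is omitted.
import threading

memory = {}                      # shared heap: address -> value
glock = threading.Lock()         # serializes commits
gversion = 0                     # global sequence number, bumped per commit

class Abort(Exception):
    pass

class Transaction:
    def __init__(self):
        self.start = gversion
        self.reads = {}          # address -> value observed
        self.writes = {}         # address -> value to publish at commit

    def validate(self):
        # Value-based validation: every address read must still hold the
        # value we observed, otherwise the transaction must restart.
        for addr, val in self.reads.items():
            if memory.get(addr) != val:
                raise Abort()
        self.start = gversion

    def read(self, addr):
        if addr in self.writes:          # read-your-own-writes
            return self.writes[addr]
        val = memory.get(addr)
        if gversion != self.start:       # someone committed in the meantime
            self.validate()
            val = memory.get(addr)
        self.reads[addr] = val
        return val

    def write(self, addr, val):
        self.writes[addr] = val          # buffer until commit

    def commit(self):
        global gversion
        if not self.writes:              # read-only transactions commit free
            return
        with glock:
            if gversion != self.start:
                self.validate()
            memory.update(self.writes)   # write-back
            gversion += 1

def atomically(fn):
    # Retry loop: run fn inside fresh transactions until one commits.
    while True:
        tx = Transaction()
        try:
            result = fn(tx)
            tx.commit()
            return result
        except Abort:
            continue
```

The paper's contribution is, roughly, to add and verify the persistency obligations that such an algorithm must additionally satisfy to be durably opaque.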

Various fields of science face a reproducibility crisis. For quantum software engineering as an emerging field, it is therefore imperative to focus on proper reproducibility engineering from the start. Yet the provision of reproduction packages is almost universally lacking, and actionable advice on how to build such packages is rare, which is particularly unfortunate in a field with many contributions from researchers whose backgrounds lie outside computer science. In this article, we argue how to rectify this deficiency by proposing a 1-2-3 approach to reproducibility engineering for quantum software experiments: using a meta-generation mechanism, we generate DOI-safe, long-term functioning and dependency-free reproduction packages. They are designed to satisfy the requirements of professional and learned societies solely on the basis of project-specific research artefacts (source code, measurement and configuration data), and require little time investment from researchers. Our scheme ascertains long-term traceability even when the quantum processor itself is no longer accessible. By drastically lowering the technical bar, we foster the proliferation of reproduction packages in quantum software experiments and ease the entry of non-CS researchers into the field.
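
As an illustration of the kind of artefact bundling involved (not the authors' meta-generation toolchain), the following sketch packs hypothetical source, configuration, and measurement directories into a single archive with a SHA-256 manifest, so the package can be deposited under a DOI and later checked for integrity.

```python
# Illustrative sketch only: bundle project-specific artefacts (source code,
# configuration, measurement data) into one self-contained archive with a
# hash manifest. The directory layout is a hypothetical assumption.
import hashlib
import json
import tarfile
from pathlib import Path

ARTEFACTS = ["src/", "config/", "measurements/"]   # hypothetical layout

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def build_package(out="reproduction-package.tar.gz"):
    manifest = {}
    with tarfile.open(out, "w:gz") as tar:
        for root in ARTEFACTS:
            for path in Path(root).rglob("*"):
                if path.is_file():
                    manifest[str(path)] = sha256(path)
                    tar.add(path)
        # Store the manifest inside the archive for later integrity checks.
        Path("MANIFEST.json").write_text(json.dumps(manifest, indent=2))
        tar.add("MANIFEST.json")
    return out

if __name__ == "__main__":
    print("wrote", build_package())
```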

Cloud computing has become a major approach to reproducing computational experiments because it supports on-demand hardware and software resource provisioning. Yet two main difficulties remain in reproducing big data applications in the cloud. The first is how to automate end-to-end execution of analytics, including environment provisioning, analytics pipeline description, pipeline execution, and resource termination. The second is that an application developed for one cloud is difficult to reproduce in another cloud, the so-called vendor lock-in problem. To tackle these problems, we leverage serverless computing and containerization techniques for automatic, scalable execution and reproducibility, and use the adapter design pattern to enable application portability and reproducibility across different clouds. We propose and develop an open-source toolkit that supports 1) fully automated end-to-end execution and reproduction via a single command, 2) automatic data and configuration storage for each execution, 3) flexible client modes based on user preferences, 4) execution history queries, and 5) simple reproduction of existing executions in the same environment or a different one. We conducted extensive experiments on both AWS and Azure using three big data analytics applications that run on a virtual CPU/GPU cluster. The experiments show that our toolkit achieves good execution performance, scalability, and efficient reproducibility for cloud-based big data analytics.
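
The adapter design pattern mentioned above can be sketched as follows; the class and method names are illustrative assumptions, not the toolkit's actual API.

```python
# Sketch of the adapter idea for cloud portability: a common interface for
# provisioning, running, and tearing down an analytics pipeline, with one
# adapter per provider. Names here are illustrative placeholders.
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    @abstractmethod
    def provision(self, cluster_spec: dict) -> str: ...
    @abstractmethod
    def run_pipeline(self, cluster_id: str, pipeline: dict) -> str: ...
    @abstractmethod
    def terminate(self, cluster_id: str) -> None: ...

class AWSAdapter(CloudAdapter):
    def provision(self, cluster_spec):
        # provider-specific provisioning calls would go here
        return "aws-cluster-001"
    def run_pipeline(self, cluster_id, pipeline):
        return "aws-execution-001"
    def terminate(self, cluster_id):
        pass

class AzureAdapter(CloudAdapter):
    def provision(self, cluster_spec):
        # provider-specific provisioning calls would go here
        return "azure-cluster-001"
    def run_pipeline(self, cluster_id, pipeline):
        return "azure-execution-001"
    def terminate(self, cluster_id):
        pass

def reproduce(adapter: CloudAdapter, cluster_spec: dict, pipeline: dict):
    """End-to-end run: provision, execute, then release resources."""
    cluster = adapter.provision(cluster_spec)
    try:
        return adapter.run_pipeline(cluster, pipeline)
    finally:
        adapter.terminate(cluster)
```

Because the driver code only depends on the abstract interface, the same pipeline description can be replayed on either provider by swapping the adapter.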

Blockchain has recently gained momentum as a novel technology that gives rise to a plethora of new decentralized applications, for example in the Internet of Things (IoT). However, its integration with the IoT still faces several problems, such as scalability and flexibility. Provisioning resources to support a large number of connected IoT devices requires a scalable and flexible blockchain. To address these issues, we propose a scalable and trustworthy blockchain (STB) architecture suitable for the IoT, which uses blockchain sharding and oracles to establish trust among unreliable IoT devices in a fully distributed and trustworthy manner. In particular, we design a peer-to-peer oracle network that ensures data reliability, scalability, flexibility, and trustworthiness. Furthermore, we introduce a new lightweight consensus algorithm that scales the blockchain dramatically while ensuring interoperability among the participants of the blockchain. The results show that our proposed STB architecture achieves flexibility, efficiency, and scalability, making it a promising solution for the IoT context.
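
As a toy illustration of sharding (not the STB protocol itself), the snippet below deterministically assigns IoT devices to shards by hashing their identifiers; the shard count and the assignment rule are assumptions made purely for illustration.

```python
# Toy sharding illustration: map IoT device identifiers to shards by hashing.
# NUM_SHARDS and the assignment rule are hypothetical, not the STB design.
import hashlib

NUM_SHARDS = 4

def shard_of(device_id: str) -> int:
    """Deterministically map a device to one of NUM_SHARDS shards."""
    digest = hashlib.sha256(device_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

devices = ["sensor-01", "sensor-02", "camera-07", "meter-12"]
for d in devices:
    print(d, "-> shard", shard_of(d))
```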

The rapid changes in the finance industry due to the increasing amount of data have revolutionized the techniques for data processing and data analysis and brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which heavily rely on model assumptions, new developments in reinforcement learning (RL) are able to make full use of the large amount of financial data with fewer model assumptions and to improve decisions in complex financial environments. This survey paper aims to review recent developments and uses of RL approaches in finance. We give an introduction to Markov decision processes, which form the setting for many of the commonly used RL approaches. Various algorithms are then introduced, with a focus on value- and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms to a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
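
As a minimal example of a value-based, model-free method of the kind surveyed, the following sketch runs tabular Q-learning on a made-up three-state MDP; the environment and hyperparameters are hypothetical and serve only to show the update rule Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).

```python
# Tabular Q-learning on a tiny, invented MDP (illustration only).
import random
from collections import defaultdict

# Hypothetical MDP: transitions[state][action] = (next_state, reward)
transitions = {
    0: {0: (0, 0.0), 1: (1, 0.0)},
    1: {0: (0, 0.0), 1: (2, 1.0)},
    2: {0: (2, 0.0), 1: (2, 0.0)},   # absorbing state
}

alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)               # (state, action) -> estimated value

def choose_action(state):
    if random.random() < epsilon:
        return random.choice([0, 1])                      # explore
    return max([0, 1], key=lambda a: Q[(state, a)])       # exploit

for episode in range(2000):
    s = 0
    for _ in range(20):
        a = choose_action(s)
        s_next, r = transitions[s][a]
        best_next = max(Q[(s_next, a2)] for a2 in (0, 1))
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print({k: round(v, 2) for k, v in Q.items()})
```

Deep RL methods replace the table Q with a neural network approximator but keep the same bootstrapped update at their core.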

The explanation dimension of Artificial Intelligence (AI) based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. There is a lot of research addressing the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need more awareness of how AI works and of its outcomes in order to build a relationship with the system. Decision-makers usually need to justify their decisions to others in different domains. If a decision is somehow based on or influenced by an AI system's outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI system in the loop, and explore one case from the literature on how XAI can affect the way people justify their decisions, considering the importance of building the human-AI relationship in those scenarios.

Recent advances in sensors and mobile devices have enabled an unprecedented increase in the availability and collection of urban trajectory data, increasing the demand for more efficient ways to manage and analyze the data being produced. In this survey, we comprehensively review recent research trends in trajectory data management, covering trajectory pre-processing, storage, common trajectory analytic tools, such as querying of spatial-only and spatial-textual trajectory data, and trajectory clustering. We also explore four closely related analytical tasks commonly used with trajectory data in interactive or real-time processing. Deep trajectory learning is also reviewed for the first time. Finally, we outline the essential qualities that a trajectory management system should possess in order to maximize flexibility.
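
As a small, self-contained example of a spatial-only trajectory query of the kind such systems support, the sketch below returns the trajectories that pass through a rectangular window; the data layout and function names are illustrative assumptions, not a specific system's API.

```python
# Naive spatial range query over trajectories: report every trajectory with
# at least one point inside a rectangular query window (illustration only).
from typing import Dict, List, Tuple

Point = Tuple[float, float]          # (longitude, latitude)
Trajectory = List[Point]
Window = Tuple[float, float, float, float]   # (min_lon, min_lat, max_lon, max_lat)

def in_window(p: Point, window: Window) -> bool:
    min_lon, min_lat, max_lon, max_lat = window
    return min_lon <= p[0] <= max_lon and min_lat <= p[1] <= max_lat

def range_query(trajectories: Dict[str, Trajectory], window: Window) -> List[str]:
    """IDs of trajectories that have at least one point inside the window."""
    return [tid for tid, traj in trajectories.items()
            if any(in_window(p, window) for p in traj)]

data = {
    "taxi-1": [(116.30, 39.90), (116.35, 39.92)],
    "taxi-2": [(116.50, 39.80), (116.55, 39.81)],
}
print(range_query(data, (116.28, 39.88, 116.40, 39.95)))   # -> ['taxi-1']
```

Production systems replace the linear scan with spatial indexes (e.g., grids or R-trees), which is precisely the storage and querying machinery the survey reviews.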

Topic models are among the most widely used methods in natural language processing, allowing researchers to estimate the underlying themes in a collection of documents. Most topic models use unsupervised methods and hence require the additional step of attaching meaningful labels to estimated topics. This process of manual labeling is not scalable and often problematic because it depends on the domain expertise of the researcher and may be affected by cardinality in human decision making. As a consequence, insights drawn from a topic model are difficult to replicate. We present a semi-automatic transfer topic labeling method that seeks to remedy some of these problems. We take advantage of the fact that domain-specific codebooks exist in many areas of research that can be exploited for automated topic labeling. We demonstrate our approach with a dynamic topic model analysis of the complete corpus of UK House of Commons speeches from 1935 to 2014, using the coding instructions of the Comparative Agendas Project to label topics. We show that our method works well for a majority of the topics we estimate, but we also find institution-specific topics, in particular on subnational governance, that require manual input. The method proposed in the paper can be easily extended to other areas with existing domain-specific knowledge bases, such as party manifestos, open-ended survey questions, social media data, and legal documents, in ways that can add knowledge to research programs.
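
A minimal sketch of the transfer-labeling idea, assuming a simple keyword-overlap matching rule: each estimated topic is assigned the codebook category whose keywords overlap most with the topic's top words, falling back to manual labeling when there is no overlap. The codebook entries and topic below are invented for illustration and are not the Comparative Agendas Project coding instructions.

```python
# Assign each estimated topic the codebook category with the largest keyword
# overlap; return None when no category matches (manual labeling required).
def label_topic(top_words, codebook):
    scores = {cat: len(set(top_words) & set(kws)) for cat, kws in codebook.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

# Hypothetical codebook and topic, purely for illustration.
codebook = {
    "Health": ["hospital", "nhs", "patients", "doctors"],
    "Defence": ["army", "navy", "troops", "nato"],
}
topic = ["patients", "hospital", "waiting", "nhs", "care"]
print(label_topic(topic, codebook))   # -> 'Health'
```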

The goal of the named entity recognition (NER) task is to classify proper nouns in a text into classes such as person, location, and organization. This is an important preprocessing step in many NLP tasks such as question answering and summarization. Although many studies have been conducted in this area for English, and state-of-the-art English NER systems reach performance above 90 percent in terms of F1 measure, there are very few studies on this task for Persian. One of the main causes of this may be the lack of a standard Persian NER dataset with which to train and test NER systems. In this research we create a standard, sufficiently large, tagged Persian NER dataset which will be distributed free of charge for research purposes. In order to construct such a standard dataset, we studied the standard NER datasets constructed for English and found that almost all of them are built from news texts; we therefore collected documents from ten news websites. Then, to provide annotators with guidelines for tagging these documents, we studied the guidelines used for constructing the CoNLL and MUC standard English datasets and set our own guidelines taking Persian linguistic rules into account.
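
For context, a CoNLL-style, BIO-tagged sample and a tiny reader are sketched below; English tokens are used for readability, the released dataset is Persian, and its exact column format may differ from this illustration.

```python
# Illustrative CoNLL-style, BIO-tagged sample and a minimal reader that
# groups token/tag pairs into sentences (blank lines separate sentences).
SAMPLE = """\
John    B-PER
Smith   I-PER
visited O
Tehran  B-LOC
.       O

"""

def read_conll(text):
    sentences, current = [], []
    for line in text.splitlines():
        if not line.strip():                 # blank line ends a sentence
            if current:
                sentences.append(current)
                current = []
            continue
        token, tag = line.split()
        current.append((token, tag))
    if current:
        sentences.append(current)
    return sentences

print(read_conll(SAMPLE))
```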

Conversational systems have come a long way after decades of research and development, from ELIZA and PARRY in the 1960s and 1970s, to task-completion systems such as those in the ATIS project, to intelligent personal assistants such as Siri, and to today's social chatbots like XiaoIce. Social chatbots' appeal lies not only in their ability to respond to users' diverse requests, but also in their ability to establish an emotional connection with users. The latter is achieved by satisfying users' essential needs for communication, affection, and social belonging. The design of social chatbots must focus on user engagement and take both intellectual quotient (IQ) and emotional quotient (EQ) into account. Users should want to engage with the social chatbot; as such, we define the success metric for social chatbots as conversation-turns per session (CPS). Using XiaoIce as an illustrative example, we discuss key technologies in building social chatbots, from core chat to visual sense to skills. We also show how XiaoIce can dynamically recognize emotion and engage the user throughout long conversations with appropriate interpersonal responses. As we become the first generation of humans ever to live with AI, social chatbots that are well designed to be both useful and empathic will soon be ubiquitous.
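
Since CPS is defined above as conversation-turns per session, a trivial helper to compute it from session logs might look like this; the log data structure is a hypothetical assumption.

```python
# CPS = average number of conversation turns per session.
def conversation_turns_per_session(sessions):
    """sessions: list of sessions, each a list of turns (hypothetical format)."""
    if not sessions:
        return 0.0
    return sum(len(s) for s in sessions) / len(sessions)

logs = [["hi", "hello", "how are you?", "fine"], ["bye", "goodbye"]]
print(conversation_turns_per_session(logs))   # -> 3.0
```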
