
Quantum systems have started to emerge as a disruptive technology and an enabling platform that exploits the principles of quantum mechanics to achieve quantum supremacy in computing. Academic research, industrial projects (e.g., Amazon Braket), and consortiums such as the Quantum Flagship are striving to develop practically capable and commercially viable quantum computing (QC) systems and technologies. Quantum Computing as a Service (QCaaS) is viewed as a solution attuned to the philosophy of service orientation that can offer QC resources and platforms, as utility computing, to individuals and organisations that do not own quantum computers. To understand the quantum service development life cycle and pinpoint emerging trends, we used an evidence-based software engineering approach to conduct a systematic mapping study (SMS) of research that enables or enhances QCaaS. The SMS process retrieved a total of 55 studies, and based on their qualitative assessment we selected 9 of them to investigate (i) the functional aspects, design models, patterns, programming languages, and deployment platforms, and (ii) the trends of emerging research on QCaaS. The results indicate three modelling notations and a catalogue of five design patterns to architect QCaaS, whereas Python (native code or frameworks) and Amazon Braket are the predominant solutions for implementing and deploying QCaaS. From the quantum software engineering (QSE) perspective, this SMS provides empirically grounded findings that could help derive processes, patterns, and reference architectures to engineer software services for QC.
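
The study identifies Python and Amazon Braket as the predominant implementation and deployment choices. As a minimal, hedged sketch of what that stack looks like, the snippet below builds a two-qubit Bell circuit with the Braket Python SDK and runs it on the local simulator; the circuit and shot count are illustrative, not taken from the surveyed studies.

```python
# Minimal sketch: a Bell-state circuit executed with the Amazon Braket SDK.
# The circuit and shot count are illustrative only.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

circuit = Circuit().h(0).cnot(control=0, target=1)  # prepare the Bell state (|00> + |11>)/sqrt(2)
device = LocalSimulator()                           # a managed QPU or simulator device would be used for a hosted service
result = device.run(circuit, shots=1000).result()
print(result.measurement_counts)                    # roughly half '00' and half '11'
```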

Related Content

Quantum computing is a new computing paradigm that manipulates quantum information units according to the laws of quantum mechanics. In contrast to the conventional general-purpose computer, whose theoretical model is the universal Turing machine, the theoretical model of a universal quantum computer is the universal Turing machine reinterpreted through the laws of quantum mechanics. In terms of computability, a quantum computer can only solve the problems that a classical computer can solve; in terms of efficiency, however, owing to quantum superposition, some known quantum algorithms solve certain problems faster than general-purpose classical computers.
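
To make the superposition point concrete: a single qubit is a normalised superposition of the two computational basis states, and a register of $n$ qubits spans a $2^n$-dimensional state space, which is where many of the known quantum speedups originate:

$$
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,\qquad |\alpha|^{2} + |\beta|^{2} = 1,
\qquad\qquad
|\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,|x\rangle,\qquad \sum_{x} |c_{x}|^{2} = 1 .
$$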

Advances in networks, accelerators, and cloud services encourage programmers to reconsider where to compute -- such as when fast networks make it cost-effective to compute on remote accelerators despite added latency. Workflow and cloud-hosted serverless computing frameworks can manage multi-step computations spanning federated collections of cloud, high-performance computing (HPC), and edge systems, but passing data among computational steps via cloud storage can incur high costs. Here, we overcome this obstacle with a new programming paradigm that decouples control flow from data flow by extending the pass-by-reference model to distributed applications. We describe ProxyStore, a system that implements this paradigm by providing object proxies that act as wide-area object references with just-in-time resolution. This proxy model enables data producers to communicate data unilaterally, transparently, and efficiently to both local and remote consumers. We demonstrate the benefits of this model with synthetic benchmarks and real-world scientific applications, running across various computing platforms.
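
As a rough illustration of this pass-by-reference idea (not ProxyStore's actual API), the sketch below shows an object proxy that defers fetching its target until the consumer first touches it; `LazyProxy` and the payload factory are hypothetical names introduced only for this example.

```python
from typing import Any, Callable


class LazyProxy:
    """Minimal stand-in for a wide-area object reference.

    The wrapped factory is invoked only on first attribute access,
    mimicking just-in-time resolution of a remote object.
    """

    def __init__(self, resolve: Callable[[], Any]):
        self._resolve = resolve
        self._target = None

    def _resolved(self) -> Any:
        if self._target is None:
            self._target = self._resolve()  # fetch the payload only when it is actually needed
        return self._target

    def __getattr__(self, name: str) -> Any:
        return getattr(self._resolved(), name)


# A consumer receives only the lightweight proxy; the (possibly remote)
# payload is fetched lazily when the consumer first uses it.
proxy = LazyProxy(lambda: "large payload fetched from an object store")
print(proxy.upper())
```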

This document is an elementary introduction to string diagrams. It takes a computer science perspective: rather than using category theory as a starting point, we build on intuitions from formal language theory, treating string diagrams as a syntax equipped with a semantics. After the basic theory, pointers are provided to contemporary applications of string diagrams in various fields of science.

The paper presents a technique for constructing noisy data structures, called a walking tree. We apply it to a Red-Black tree (an implementation of a self-balancing binary search tree) and a segment tree. For these data structures, we obtain the same asymptotic complexity for the main operations as in the noise-free case. We present several applications of the data structures to quantum algorithms. Finally, we suggest a new quantum solution for the string sorting problem and show a lower bound; the upper and lower bounds match up to a logarithmic factor. At the same time, the solution is more efficient than its classical counterparts.
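
For readers unfamiliar with the classical building block, the sketch below is a standard, noise-free segment tree for range sums; the paper's noisy walking-tree construction is a different, more involved structure and is not reproduced here.

```python
class SegmentTree:
    """Iterative segment tree for range-sum queries (classical, noise-free)."""

    def __init__(self, values):
        self.n = len(values)
        self.tree = [0] * (2 * self.n)
        self.tree[self.n:] = values                        # leaves
        for i in range(self.n - 1, 0, -1):                 # internal nodes
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def update(self, i, value):
        """Set element i to value and refresh its ancestors."""
        i += self.n
        self.tree[i] = value
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def query(self, left, right):
        """Sum over the half-open range [left, right)."""
        res = 0
        left += self.n
        right += self.n
        while left < right:
            if left & 1:
                res += self.tree[left]
                left += 1
            if right & 1:
                right -= 1
                res += self.tree[right]
            left //= 2
            right //= 2
        return res


st = SegmentTree([5, 2, 7, 1])
st.update(1, 4)
print(st.query(0, 3))  # 5 + 4 + 7 = 16
```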

The rationale of this work is based on the current user trust discourse around Artificial Intelligence (AI). We aim to produce novel HCI approaches that use trust as a facilitator for the uptake (or appropriation) of current technologies. We propose a framework (HCTFrame) to guide non-experts to unlock the full potential of user trust in AI design. Results derived from a data triangulation of findings from three literature reviews demystify some misconceptions of user trust in the computer science and AI discourse, and three case studies are conducted to assess the effectiveness of a psychometric scale in mapping potential users' trust breakdowns and concerns. This work primarily contributes to countering the tendency to design technically centered, vulnerable interactions, which can eventually lead to additional real and perceived breaches of trust. The proposed framework can be used to guide system designers on how to map and define user trust and the socioethical and organisational needs and characteristics of AI system design. It can also guide AI system designers on how to develop a prototype and operationalise a solution that meets user trust requirements. The article ends by providing some user research tools that can be employed to measure users' trust intentions and behaviours towards a proposed solution.

[Background] The MVP concept has influenced the way in which development teams apply Software Engineering (SE) practices. However, the overall understanding of this influence of MVPs on SE practices is still poor. [Objective] Our goal is to characterize the publication landscape on practices that have been used in the context of software MVPs and to gather practitioner insights on the identified practices. [Method] We conducted a systematic mapping study and discussed its results in two focus group sessions involving twelve industry practitioners who extensively use MVPs in their projects, to capture their perceptions of the findings of the mapping study. [Results] We identified 33 papers published between 2013 and 2020 and observed some trends related to MVP ideation and evaluation practices. For instance, regarding ideation, we found six different approaches and mainly informal end-user involvement practices. Regarding evaluation, there is an emphasis on end-user validations based on practices such as usability tests, A/B testing, and usage data analysis. However, there is still limited research related to MVP technical feasibility assessment and effort estimation. Practitioners in the focus group sessions reinforced the confidence in our results regarding ideation and evaluation practices, being aware of most of the identified practices. They also reported how they deal with technical feasibility assessment and effort estimation in practice. [Conclusion] Our analysis suggests that there are opportunities for solution proposals and evaluation studies to address literature gaps concerning technical feasibility assessment and effort estimation. Overall, more effort needs to be invested into empirically evaluating the existing MVP-related practices.

SQL query performance is critical in database applications, and query rewriting is a technique that transforms an original query into an equivalent query with better performance. In a wide range of database-supported systems, there is a unique problem in which both the application and database layers are black boxes, and developers need to use their knowledge about the data and domain to rewrite queries sent from the application to the database for better performance. Unfortunately, existing solutions do not give users enough freedom to express their rewriting needs. To address this problem, we propose QueryBooster, a novel middleware-based service architecture for human-centered query rewriting, in which users can use its expressive and easy-to-use rule language (called VarSQL) to formulate rewriting rules based on their needs. It also allows users to express rewriting intentions by providing examples of an original query and its rewritten query; QueryBooster automatically generalizes them to rewriting rules and suggests high-quality ones. We conduct a user study to show the benefits of VarSQL for formulating rewriting rules. Our experiments on real and synthetic workloads show the effectiveness of the rule-suggesting framework and the significant advantages of using QueryBooster for human-centered query rewriting to improve end-to-end query performance.
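
To give a flavour of rule-driven query rewriting (this is an illustration only, not QueryBooster's VarSQL language or its rule engine), the sketch below applies a single hypothetical pattern-based rule that turns an `IN (SELECT ...)` predicate into an equivalent `EXISTS` form.

```python
import re

# Hypothetical rewrite rule: replace "col IN (SELECT c FROM t)" with an
# equivalent EXISTS predicate. Illustration only; real rewriters work on
# parsed query plans rather than raw text.
RULES = [
    (
        re.compile(r"([\w.]+)\s+IN\s+\(\s*SELECT\s+(\w+)\s+FROM\s+(\w+)\s*\)", re.IGNORECASE),
        r"EXISTS (SELECT 1 FROM \3 WHERE \3.\2 = \1)",
    ),
]


def rewrite(query: str) -> str:
    """Apply each rewrite rule once, left to right."""
    for pattern, replacement in RULES:
        query = pattern.sub(replacement, query)
    return query


original = "SELECT name FROM users WHERE users.id IN (SELECT user_id FROM orders)"
print(rewrite(original))
# SELECT name FROM users WHERE EXISTS (SELECT 1 FROM orders WHERE orders.user_id = users.id)
```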

Introduction: The Canadian Guidelines recommend physical activity for overall health benefits, including cognitive, emotional, functional, and physical health. However, traditional research methods are inefficient and outdated. This paper aims to guide researchers in enhancing their research methods using software development kits and wearable smart devices. Methods: A generic model application was transformed into a research-oriented mobile application, following the approach of UCLA researchers who collaborated with Apple. First, the research question and goals were identified. Then, three open-source software development kits (SDKs) were used to modify the generic model into the desired application. ResearchKit was used for informed consent, surveys, and active tasks. CareKit served as the protocol manager to create participant protocols and track progress. Finally, HealthKit was used to access and share health-related data. A content expert evaluated the application, and the participant experience was optimized for ease of use. The collected health-related data were analyzed to identify any significant findings. Results: Wearable health devices offer a convenient and non-invasive way to monitor and track health-related information. Conclusion: By leveraging the data provided by wearable devices, researchers can gain insights into the effectiveness of interventions and inform the development of evidence-based physical activity guidelines. The use of software development kits and wearable devices can enhance research methods and provide valuable insights into overall health benefits.

This paper generalizes results in noncoherent space-time block code (STBC) design based on quantum error correction (QEC) to new antenna configurations. Previous work proposed QEC-inspired STBCs for antenna geometries where the number of transmit and receive antennas were equal and a power of two. In this work we extend these results by providing QEC-inspired STBCs applicable to all square antenna geometries and some rectangular geometries where the number of receive antennas is greater than the number of transmit antennas. We derive the maximum-likelihood decoding rule for this family of codes for the special case of Rayleigh fading with additive white Gaussian noise. We present Monte Carlo simulations of the performance of the codes in this environment for a three-antenna square geometry and a three-by-six rectangular geometry. We demonstrate competitive performance for these codes with respect to a popular noncoherent differential code.
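
For context, the standard noncoherent maximum-likelihood (equivalently, GLRT) decision rule for equal-energy unitary space-time codewords over Rayleigh block fading with additive white Gaussian noise takes the following correlation form; the rule derived in the paper for its QEC-inspired codes may differ in its details, so this should be read only as a general reference point:

$$
\hat{\imath} \;=\; \arg\max_{i}\ \operatorname{tr}\!\left(\mathbf{Y}^{\mathsf{H}}\,\boldsymbol{\Phi}_i \boldsymbol{\Phi}_i^{\mathsf{H}}\,\mathbf{Y}\right)
\;=\; \arg\max_{i}\ \left\lVert \boldsymbol{\Phi}_i^{\mathsf{H}}\mathbf{Y} \right\rVert_F^{2},
$$

where $\mathbf{Y}$ is the $T \times N_r$ received block and $\boldsymbol{\Phi}_i$ is the $T \times N_t$ unitary codeword matrix.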

Driven by the visions of the Internet of Things and 5G communications, edge computing systems integrate computing, storage, and network resources at the edge of the network to provide computing infrastructure, enabling developers to quickly develop and deploy edge applications. Edge computing systems have now received widespread attention in both industry and academia. To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey provides a comprehensive overview of existing edge computing systems and introduces representative projects. A comparison of open-source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems. Open issues in analyzing and designing an edge computing system are also studied in this survey.

There is a resurgent interest in developing intelligent open-domain dialog systems due to the availability of large amounts of conversational data and the recent progress on neural approaches to conversational AI. Unlike traditional task-oriented bots, an open-domain dialog system aims to establish long-term connections with users by satisfying the human need for communication, affection, and social belonging. This paper reviews recent work on neural approaches devoted to addressing three challenges in developing such systems: semantics, consistency, and interactiveness. Semantics requires a dialog system to not only understand the content of the dialog but also identify the user's social needs during the conversation. Consistency requires the system to demonstrate a consistent personality to win users' trust and gain their long-term confidence. Interactiveness refers to the system's ability to generate interpersonal responses to achieve particular social goals such as entertainment, conforming, and task completion. The works we select to present here are based on our own views and are by no means complete. Nevertheless, we hope that the discussion will inspire new research in developing more intelligent dialog systems.
