
We investigate an unsuspected connection between logical connectives with non-harmonious deduction rules, such as Prior's tonk, and quantum computing. We argue that these connectives model the information erasure, non-reversibility, and non-determinism that occur, among other places, in quantum measurement. We introduce a propositional logic with a logical connective sup whose deduction rules are non-harmonious, together with two interstitial rules, and show that the proof language of this logic forms the core of a quantum programming language.
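
To make the non-harmony concrete, recall Prior's tonk, whose introduction rule is borrowed from disjunction and whose elimination rule is borrowed from conjunction (the rules below are the standard ones for tonk; the precise rules of sup are given in the paper itself):

\[
  \frac{A}{A \;\mathrm{tonk}\; B}\;(\mathrm{tonk\text{-}intro})
  \qquad
  \frac{A \;\mathrm{tonk}\; B}{B}\;(\mathrm{tonk\text{-}elim})
\]

Chaining the two rules derives an arbitrary $B$ from an arbitrary $A$: the elimination rule is not justified by the introduction rule, and it is exactly this mismatch that the paper connects to information erasure and non-determinism.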

Related content

In the context of 6th generation (6G) networks, vehicular edge computing (VEC) is emerging as a promising solution to let battery-powered ground vehicles with limited computing and storage resources offload processing tasks to more powerful devices. Given the dynamic vehicular environment, VEC systems need to be as flexible, intelligent, and adaptive as possible. To this end, in this paper we study the opportunity to realize VEC via non-terrestrial networks (NTNs), where ground vehicles offload resource-hungry tasks to Unmanned Aerial Vehicles (UAVs), High Altitude Platforms (HAPs), or a combination of the two. We define an optimization problem in which tasks are modeled as a Poisson arrival process, and apply queuing theory to find the optimal offloading factor in the system. Numerical results show that aerial-assisted VEC is feasible even in dense networks, provided that high-capacity HAP/UAV platforms are available.
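
As a rough illustration of the queuing-theoretic approach (a minimal sketch under assumed M/M/1 dynamics with made-up rates, not the paper's actual system model), one can grid-search the offloading factor that minimizes the mean task delay:

```python
import numpy as np

# Toy model: tasks arrive as a Poisson process of rate lam; a fraction
# alpha is offloaded to an aerial platform (M/M/1 server of rate mu_air
# plus a fixed radio round-trip delay d), the rest is served locally
# (M/M/1, rate mu_loc). All parameters are assumed for illustration.
lam, mu_loc, mu_air, d = 8.0, 5.0, 12.0, 0.05

def mean_delay(alpha):
    lam_l, lam_a = (1 - alpha) * lam, alpha * lam
    if lam_l >= mu_loc or lam_a >= mu_air:   # unstable queue
        return np.inf
    t_loc = 1.0 / (mu_loc - lam_l)           # M/M/1 mean sojourn time
    t_air = 1.0 / (mu_air - lam_a) + d
    return (1 - alpha) * t_loc + alpha * t_air

alphas = np.linspace(0.0, 1.0, 1001)
delays = [mean_delay(a) for a in alphas]
best = alphas[int(np.argmin(delays))]
print(f"optimal offloading factor ~ {best:.3f}, mean delay {min(delays):.3f}s")
```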

In this paper, we study the problem of learning an unknown quantum circuit of a certain structure. If the unknown target is an $n$-qubit Clifford circuit, we devise an efficient algorithm to reconstruct its circuit representation by using $O(n^2)$ queries to it. For decades, it has been unknown how to handle circuits beyond the Clifford group since the stabilizer formalism cannot be applied in this case. Herein, we study quantum circuits of $T$-depth one on the computational basis. We show that the output state of a $T$-depth one circuit can be represented by a stabilizer pseudomixture with a specific algebraic structure. Using Pauli and Bell measurements on copies of the output states, we can generate a hypothesis circuit that is equivalent to the unknown target circuit on computational basis states as input. If the number of $T$ gates of the target is of the order $O(\log n)$, our algorithm requires $O(n^2)$ queries to it and produces its equivalent circuit representation on the computational basis in time $O(n^3)$. Using an additional $O(4^{3n})$ classical computation, we can derive an exact description of the target for arbitrary input states. Our results greatly extend the previously known fact that stabilizer states can be efficiently identified based on the stabilizer formalism.
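
The structural fact behind the $O(n^2)$ query bound is that a Clifford unitary is determined, up to global phase, by its conjugation action on the $2n$ Pauli generators $X_i$ and $Z_i$. A dense-matrix toy version of such conjugation queries for $n=2$ (our own illustration; the paper's algorithm works in the stabilizer formalism rather than with explicit matrices):

```python
import numpy as np
from itertools import product

# Single-qubit Pauli matrices.
I2 = np.eye(2, dtype=complex); X = np.array([[0, 1], [1, 0]], complex)
Z = np.diag([1, -1]).astype(complex); Y = 1j * X @ Z
PAULI = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def kron_all(label):                       # e.g. 'XI' -> X (x) I
    m = np.array([[1]], dtype=complex)
    for ch in label:
        m = np.kron(m, PAULI[ch])
    return m

H = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)
S = np.diag([1, 1j]); CNOT = np.eye(4, dtype=complex)[[0, 1, 3, 2]]
U = CNOT @ np.kron(H, S)                   # the "unknown" Clifford target

def query(label):                          # one oracle use: P -> U P U^dagger
    return U @ kron_all(label) @ U.conj().T

def pauli_decompose(M, n=2):               # expand M in the Pauli basis
    coeffs = {}
    for lbl in product('IXYZ', repeat=n):
        c = np.trace(kron_all(lbl).conj().T @ M) / 2**n
        if abs(c) > 1e-9:
            coeffs[''.join(lbl)] = complex(np.round(c, 6))
    return coeffs

for gen in ['XI', 'ZI', 'IX', 'IZ']:       # images of the 2n generators
    print(gen, '->', pauli_decompose(query(gen)))
```

Each query returns a single signed Pauli string, and the $2n$ images together pin down the circuit's tableau.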

Networked-Control Systems (NCSs), a type of cyber-physical system, consist of tightly integrated computing, communication, and control technologies. While being very flexible environments, they are vulnerable to computing and networking attacks. Recent hacking incidents against NCSs have had a major impact and call for more research on cyber-physical security. Fears about the use of quantum computing to break current cryptosystems make matters worse. While the quantum threat motivated the creation of new disciplines to handle the issue, such as post-quantum cryptography, other fields have overlooked the existence of quantum-enabled adversaries. This is the case of cyber-physical defense research, a distinct but complementary discipline to cyber-physical protection. Cyber-physical defense refers to the capability to detect and react in response to cyber-physical attacks. Concretely, it involves the integration of mechanisms to identify adverse events and prepare response plans, during and after incidents occur. In this paper, we assume that an eventually available quantum computer will give adversaries an advantage over defenders, unless defenders also adopt this technology. We envision the necessity for a paradigm shift in which an increase in adversarial resources due to quantum supremacy does not translate into a higher likelihood of disruptions. Consistent with current system-design practices in other areas, such as the use of artificial intelligence for the reinforcement of attack-detection tools, we outline a vision for next-generation cyber-physical defense layers that leverage ideas from quantum computing and machine learning. Through an example, we show that defenders of NCSs can learn and improve their strategies to anticipate and recover from attacks.
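
As a hedged toy of the kind of strategy learning envisioned here (our own stateless Q-learning example, not the example from the paper), a defender can learn from repeated episodes which attack surface to monitor against an attacker with a fixed, unknown preference:

```python
import random

# k attack surfaces; the defender picks one to monitor each episode and
# is rewarded when it matches the attacker's (hidden) choice. The
# attack distribution below is assumed purely for illustration.
k, episodes, alpha, eps = 4, 5000, 0.1, 0.1
attack_prob = [0.1, 0.6, 0.2, 0.1]
Q = [0.0] * k

random.seed(0)
for _ in range(episodes):
    # epsilon-greedy action selection over the learned values
    a = random.randrange(k) if random.random() < eps \
        else max(range(k), key=Q.__getitem__)
    attacked = random.choices(range(k), weights=attack_prob)[0]
    r = 1.0 if a == attacked else 0.0        # reward: attack detected
    Q[a] += alpha * (r - Q[a])               # stateless Q-learning update

print("learned detection values:", [round(q, 2) for q in Q])
print("defender monitors surface", max(range(k), key=Q.__getitem__))
```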

Machine Learning (ML) has become a fast-growing, trending approach to solution development in practice. Deep Learning (DL), a subset of ML, learns using deep neural networks that simulate the human brain. It trains machines to learn techniques and processes on their own using computer algorithms, which is also considered a role of Artificial Intelligence (AI). In this paper, we study current technical issues related to software development and delivery in organizations that work on ML projects, and discuss the importance of the Machine Learning Operations (MLOps) concept, which can deliver appropriate solutions for such concerns. We investigate commercially available MLOps tool support in software development. Our comparison of MLOps tools analyzes the performance of each system and its use cases. Moreover, we examine the features and usability of MLOps tools to identify the most appropriate tool support for given scenarios. Finally, we observe that fully functional MLOps platforms, on which processes can be automated with reduced human intervention, remain scarce.

Quantum computing systems rely on the principles of quantum mechanics to perform a multitude of computationally challenging tasks more efficiently than their classical counterparts. Architects of software-intensive systems can leverage architecture-centric processes, practices, description languages, etc., to model, develop, and evolve quantum computing software (quantum software for short) at higher abstraction levels. We conducted a systematic literature review (SLR) to investigate (i) architectural processes, (ii) modeling notations, (iii) architecture design patterns, (iv) tool support, and (v) challenging factors for quantum software architecture. Results of the SLR indicate that quantum software represents a new genre of software-intensive systems; however, existing processes and notations can be tailored to derive the architecting activities and develop modeling languages for quantum software. Quantum bits (Qubits) mapped to Quantum gates (Qugates) can be represented as architectural components and connectors that implement quantum software. Tool-chains can incorporate reusable knowledge and human roles (e.g., quantum domain engineers, quantum code developers) to automate and customize the architectural process. Results of this SLR can facilitate researchers and practitioners to develop new hypotheses to be tested, derive reference architectures, and leverage architecture-centric principles and practices to engineer emerging and next generations of quantum software.
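
A hedged illustration of the component-and-connector mapping the SLR describes (the class and method names below are our own, not from any surveyed tool): gates become components and qubit wires become the connectors that chain them.

```python
from dataclasses import dataclass, field

@dataclass
class GateComponent:
    name: str            # the gate acting as an architectural component
    qubits: tuple        # connector endpoints (qubit wire indices)

@dataclass
class CircuitArchitecture:
    n_qubits: int
    components: list = field(default_factory=list)

    def connect(self, gate, *qubits):
        # attach a gate component to the named qubit connectors
        self.components.append(GateComponent(gate, qubits))
        return self

arch = CircuitArchitecture(2).connect("H", 0).connect("CNOT", 0, 1)
for c in arch.components:
    print(f"component {c.name} on connector(s) {c.qubits}")
```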

The ZX-calculus is a graphical language for reasoning about quantum computation using ZX-diagrams, a certain flexible generalisation of quantum circuits that can be used to represent linear maps from $m$ to $n$ qubits for any $m,n \geq 0$. Some applications for the ZX-calculus, such as quantum circuit optimisation and synthesis, rely on being able to efficiently translate a ZX-diagram back into a quantum circuit of comparable size. While several sufficient conditions are known for describing families of ZX-diagrams that can be efficiently transformed back into circuits, it has previously been conjectured that the general problem of circuit extraction is hard. That is, that it should not be possible to efficiently convert an arbitrary ZX-diagram describing a unitary linear map into an equivalent quantum circuit. In this paper we prove this conjecture by showing that the circuit extraction problem is #P-hard, and so is itself at least as hard as strong simulation of quantum circuits. In addition to our main hardness result, which relies specifically on the circuit representation, we give a representation-agnostic hardness result. Namely, we show that any oracle that takes as input a ZX-diagram description of a unitary and produces samples of the output of the associated quantum computation enables efficient probabilistic solutions to NP-complete problems.
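
For the tractable direction, the PyZX library implements simplification and circuit extraction for diagrams that retain a generalized flow. A minimal sketch, assuming the pyzx package and its documented entry points (generate.CNOT_HAD_PHASE_circuit, full_reduce, extract_circuit, verify_equality) behave as in its documentation:

```python
import pyzx as zx

# Build a random circuit, simplify it as a ZX-diagram, then extract a
# circuit back. Extraction succeeds here because full_reduce keeps the
# diagram in a form with gflow; the paper shows that extraction from an
# *arbitrary* unitary ZX-diagram is #P-hard in general.
c = zx.generate.CNOT_HAD_PHASE_circuit(qubits=4, depth=40)
g = c.to_graph()
zx.full_reduce(g)                     # rewrite the ZX-diagram
c_opt = zx.extract_circuit(g.copy())  # translate back to a circuit
print("equivalent:", c_opt.verify_equality(c))
print("gates before/after:", len(c.gates), len(c_opt.gates))
```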

Large, pre-trained transformer-based language models such as BERT have drastically changed the Natural Language Processing (NLP) field. We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches. We also present approaches that use pre-trained language models to generate data for training augmentation or other purposes. We conclude with discussions on limitations and suggested directions for future research.
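
A minimal sketch of the pre-train-then-fine-tune recipe using the Hugging Face transformers API (the toy data and hyperparameters are assumptions for illustration): a pre-trained BERT encoder receives a fresh classification head, which is then updated on labelled examples.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a pre-trained encoder with a randomly initialised 2-class head.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["a delightful film", "a tedious mess"]   # toy labelled data
labels = torch.tensor([1, 0])
batch = tok(texts, padding=True, return_tensors="pt")

opt = torch.optim.AdamW(model.parameters(), lr=2e-5)
for _ in range(3):                                # a few gradient steps
    out = model(**batch, labels=labels)
    out.loss.backward()
    opt.step(); opt.zero_grad()
print("fine-tuning loss:", out.loss.item())
```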

This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning". Unlike traditional supervised learning, which trains a model to take in an input x and predict an output y as P(y|x), prompt-based learning is based on language models that model the probability of text directly. To use these models to perform prediction tasks, the original input x is modified using a template into a textual string prompt x' that has some unfilled slots, and then the language model is used to probabilistically fill the unfilled information to obtain a final string x̂, from which the final output y can be derived. This framework is powerful and attractive for a number of reasons: it allows the language model to be pre-trained on massive amounts of raw text, and by defining a new prompting function the model is able to perform few-shot or even zero-shot learning, adapting to new scenarios with few or no labeled data. In this paper we introduce the basics of this promising paradigm, describe a unified set of mathematical notations that can cover a wide variety of existing work, and organize existing work along several dimensions, e.g., the choice of pre-trained models, prompts, and tuning strategies. To make the field more accessible to interested beginners, we not only make a systematic review of existing works and a highly structured typology of prompt-based concepts, but also release other resources, e.g., a website (//pretrain.nlpedia.ai/) including a constantly updated survey and paper list.
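
A hedged sketch of the x → x' → x̂ → y pipeline using a masked language model (the template and verbalizer below are our own toy choices, not the survey's): the input is wrapped in a cloze template, the model fills the slot, and a verbalizer maps filler words to labels.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

x = "I missed the bus today."
prompt = f"{x} I felt so [MASK]."       # template x' with one unfilled slot
verbalizer = {"good": "positive", "great": "positive",
              "bad": "negative", "terrible": "negative"}

for cand in fill(prompt, top_k=10):     # probabilistic slot filling -> x̂
    word = cand["token_str"].strip()
    if word in verbalizer:              # map the filler word to the label y
        print(f"y = {verbalizer[word]} (via '{word}', p={cand['score']:.3f})")
        break
```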

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, due to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and at formalizing this distinction in particular.
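
One common ensemble-based formalization of the split (a hedged sketch of one proposal among those the survey covers, with toy numbers): total predictive entropy decomposes into an aleatoric part, the average entropy of the ensemble members, and an epistemic part, the mutual information between prediction and model.

```python
import numpy as np

def entropy(p):
    # Shannon entropy in nats along the last axis.
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

# Class probabilities from 3 ensemble members on one input (assumed data).
members = np.array([[0.9, 0.1],
                    [0.6, 0.4],
                    [0.2, 0.8]])

total = entropy(members.mean(axis=0))       # H(E[p]): total uncertainty
aleatoric = entropy(members).mean()         # E[H(p)]: irreducible noise
epistemic = total - aleatoric               # mutual information >= 0
print(f"total={total:.3f}  aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}")
```

Here the members disagree strongly, so a sizeable share of the total uncertainty is epistemic, i.e. reducible with more data.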

In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. Here, online learning refers to the framework of regret minimization under worst-case assumptions. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings. All the algorithms are clearly presented as instantiations of Online Mirror Descent or Follow-The-Regularized-Leader and their variants. Particular attention is given to the issue of tuning the parameters of the algorithms and learning in unbounded domains, through adaptive and parameter-free online learning algorithms. Non-convex losses are dealt with through convex surrogate losses and through randomization. The bandit setting is also briefly discussed, touching on the problem of adversarial and stochastic multi-armed bandits. These notes do not require prior knowledge of convex analysis and all the required mathematical tools are rigorously explained. Moreover, all the proofs have been carefully chosen to be as simple and as short as possible.
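
A hedged toy instantiation of Online Mirror Descent with the Euclidean regularizer, i.e. online (sub)gradient descent, on convex losses f_t(x) = |⟨a_t, x⟩ − b_t| with the standard 1/√t step size (the data-generating process is assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 10_000
x = np.zeros(d)
u = rng.normal(size=d)                      # hidden comparator
losses, comp_losses = 0.0, 0.0

for t in range(1, T + 1):
    a = rng.normal(size=d)
    b = a @ u + 0.1 * rng.normal()          # noisy linear observation
    pred = a @ x
    losses += abs(pred - b)                 # learner's loss
    comp_losses += abs(a @ u - b)           # comparator's loss
    g = np.sign(pred - b) * a               # subgradient of the absolute loss
    x -= g / np.sqrt(t)                     # OMD/OGD update, eta_t = 1/sqrt(t)

print(f"average regret after T={T}: {(losses - comp_losses) / T:.4f}")
```

Since online gradient descent guarantees O(√T) regret on convex Lipschitz losses, the printed average regret shrinks toward zero as T grows.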
