
We design two rules for binary quantum computed voting: Quantum Logical Veto (QLV) and Quantum Logical Nomination (QLN). The conjunction and disjunction of quantum computational logic are used to define QLV and QLN, respectively. Compared to classical voting, quantum computed voting is fairer, more democratic, and has stronger expressive power. Since its advantage lies neither in the speed of computation nor in the security of communication, we believe it opens a new battlefield in the second quantum revolution. Compared to other rules of quantum computed voting, QLV and QLN scale better: both can be implemented with current technology, and the difficulty of implementation does not grow with the number of voters.
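
To illustrate the logic behind the two rules, here is a minimal sketch of my own (not the paper's protocol): each voter holds one qubit whose |1> component means "approve", and for independently measured product states the conjunction underlying QLV succeeds only if every voter approves, while the disjunction underlying QLN succeeds if at least one does.

```python
import numpy as np

# Illustrative sketch, not the paper's protocol: each voter holds a qubit
# alpha|0> + beta|1>, with |1> meaning "approve". For product states that
# are measured independently, the logical conjunction (veto: all must
# approve) succeeds with probability prod(p_i), and the disjunction
# (nomination: one approval suffices) with probability 1 - prod(1 - p_i).

def outcome_probabilities(p_yes):
    """p_yes: per-voter probability of measuring |1> ('approve')."""
    p_yes = np.asarray(p_yes, dtype=float)
    p_qlv = np.prod(p_yes)               # conjunction: unanimous approval
    p_qln = 1.0 - np.prod(1.0 - p_yes)   # disjunction: at least one approval
    return p_qlv, p_qln

# Three voters: one certain yes, one certain no, one equal superposition.
print(outcome_probabilities([1.0, 0.0, 0.5]))  # -> (0.0, 1.0)
```

Note that each voter's qubit is prepared and measured locally, which is consistent with the scalability claim: adding a voter adds one qubit, not circuit depth shared by all voters.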

Related Content

This technical report contains the proofs of the lemmata and theorems of [15], as well as some additional material. The main contributions of [15] are an analysis of the applicability of several quality criteria for encodings in a quantum-based setting and a discussion of new, quantum-specific criteria. To this end, an encoding from one quantum-based process calculus into another is presented and the quality criteria are applied to it. A separation result proves the absence of an encoding in the opposite direction.

Quantum communications is a promising technology that will play a fundamental role in the design of future networks. In fact, significant efforts are being dedicated by both the quantum physics and the classical communications communities to developing new architectures, solutions, and practical implementations of quantum communication networks (QCNs). Although these efforts have led to various advances in today's technologies, there still exists a non-trivial gap between the research efforts of the two communities on designing and optimizing QCN performance. For instance, most prior works by the classical communications community ignore important quantum-physics-based constraints when designing QCNs. For example, many works on entanglement distribution do not account for the decoherence of qubits inside quantum memories; their designs thus become impractical because they assume an infinite lifetime for quantum states. In this paper, we introduce a novel framework, dubbed physics-informed QCNs, for designing and analyzing the performance of QCNs by relying on the quantum physics principles that underlie the different QCN components. The need for the proposed approach is then assessed, and its fundamental role in designing practical QCNs is analyzed across various open research areas. Moreover, we identify novel physics-informed performance metrics and controls that enable QCNs to leverage state-of-the-art advancements in quantum technologies to enhance their performance. Finally, we analyze multiple pressing challenges and open research directions in QCNs that must be treated with a physics-informed approach in order to yield practically viable results. Ultimately, this work attempts to bridge the gap between the classical communications and quantum physics communities in the area of QCNs to foster the development of future communication networks (6G and beyond, and the quantum Internet).
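
To make the memory-decoherence constraint concrete, the following sketch assumes a simple depolarizing-memory model (my assumption, not a model taken from the paper): the fidelity of a stored entangled pair decays toward the maximally mixed value of 1/4, so any entanglement-distribution schedule that assumes an infinite qubit lifetime overestimates end-to-end fidelity.

```python
import numpy as np

# Assumed depolarizing-memory model (illustrative, not from the paper):
# a Bell pair stored in a quantum memory decays toward the maximally
# mixed two-qubit state, whose fidelity with any Bell state is 1/4.

def stored_fidelity(f0, t, t_coh):
    """Bell-state fidelity after storage time t (same units as t_coh)."""
    return 0.25 + (f0 - 0.25) * np.exp(-t / t_coh)

# Initial fidelity 0.95, hypothetical coherence time 5 ms.
for t_ms in (0.0, 1.0, 5.0, 20.0):
    print(f"t = {t_ms:4.1f} ms  F = {stored_fidelity(0.95, t_ms, 5.0):.3f}")
```

The point of a physics-informed design is that quantities like the coherence time above enter the network optimization as constraints rather than being idealized away.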

We introduce a new distortion measure for point processes called functional-covering distortion. It is inspired by intensity theory and is related to both the covering of point processes and logarithmic loss distortion. We obtain the distortion-rate function with feedforward under this distortion measure for a large class of point processes. For Poisson processes, the rate-distortion function is obtained under a general condition called constrained functional-covering distortion, of which both covering and functional-covering are special cases. Also for Poisson processes, we characterize the rate-distortion region for a two-encoder CEO problem and show that feedforward does not enlarge this region.

Generating a test suite for a quantum program such that it has the maximum number of failing tests is an optimization problem. For such optimization, search-based testing has shown promising results in the context of classical programs. To this end, we present QuSBT (Search-based Testing of Quantum Programs), a test-generation tool for quantum programs based on a genetic algorithm. QuSBT automates the testing of quantum programs, aiming to find a test suite with the maximum number of failing test cases. QuSBT uses IBM's Qiskit as the simulation framework for quantum programs. We present the tool architecture in addition to the implemented methodology (i.e., the encoding of the search individual, the definition of the fitness function expressing the search problem, and the test assessment w.r.t. two types of failures). Finally, we report results of experiments in which we tested a set of faulty quantum programs with QuSBT to assess its effectiveness. Repository (code and experimental results): //github.com/Simula-COMPLEX/qusbt-tool Video: //youtu.be/3apRCtluAn4
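
The following toy sketch shows the general shape of such a search, under my own simplifications (it is not QuSBT's actual encoding or fitness function): an individual is a test suite of classical input values, and fitness counts the inputs on which a stand-in program under test deviates from its specification; in the real tool, `run_program` would be a Qiskit execution and the oracle a statistical check on measurement results.

```python
import random

# Simplified genetic-algorithm sketch in the spirit of search-based test
# generation (not the QuSBT implementation). `run_program` and
# `expected_output` are hypothetical stand-ins for the program under test
# and its specification.

def run_program(x):                  # hypothetical faulty program
    return x ^ 1 if x % 4 == 0 else x

def expected_output(x):              # hypothetical oracle
    return x

def fitness(suite):
    """Number of failing tests in the suite (to be maximized)."""
    return sum(run_program(x) != expected_output(x) for x in suite)

def mutate(suite, domain):
    child = list(suite)
    child[random.randrange(len(child))] = random.choice(domain)
    return child

def search(domain, suite_size=10, pop_size=20, generations=50):
    pop = [[random.choice(domain) for _ in range(suite_size)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2]                                # selection
        pop += [mutate(random.choice(pop), domain) for _ in pop]  # variation
    return max(pop, key=fitness)

best = search(domain=list(range(32)))
print(best, "failing tests:", fitness(best))
```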

The security of quantum key distribution (QKD) is severely threatened by discrepancies between realistic devices and theoretical assumptions. Recently, a significant framework called the reference technique was proposed to provide security against arbitrary source flaws, including pulse correlations. Here, we propose an efficient four-phase twin-field QKD using laser pulses that adopts the reference technique to achieve security against all possible source imperfections. We present a characterization of source flaws and connect it to experimental data, together with a finite-key analysis. In addition, we demonstrate the feasibility of our protocol through a proof-of-principle experimental implementation, achieving a secure key rate of 1.63 kbps over a 20 dB channel loss. Compared with previous QKD protocols with imperfect devices, our work considerably improves both the secure key rate and the transmission distance, showing application potential for the practical deployment of secure QKD with device imperfections.
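
For intuition on the reported operating point, the snippet below uses only standard relations, not the paper's key-rate formula: a channel loss of L dB corresponds to a transmittance eta = 10^(-L/10), and twin-field protocols are notable for key rates that scale with sqrt(eta) rather than eta, which is what makes long distances reachable.

```python
import math

# Standard loss-to-transmittance conversion (not the paper's key-rate
# analysis): L dB of channel loss gives transmittance eta = 10**(-L/10).

def transmittance(loss_db):
    return 10 ** (-loss_db / 10)

eta = transmittance(20.0)            # the paper's 20 dB channel
print(f"eta = {eta:.4f}")            # 0.0100
# Twin-field key rates scale ~sqrt(eta); point-to-point protocols ~eta.
print(f"eta = {eta:.4f}  vs  sqrt(eta) = {math.sqrt(eta):.4f}")
```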

We demonstrate that analog transmissions and matched filtering alone can realize the function of an edge server in federated learning (FL). Consequently, a network with massively distributed user equipments (UEs) can achieve large-scale FL without an edge server. We also develop a training algorithm that allows UEs to continuously perform local computing without being interrupted by global parameter uploading, exploiting the full potential of the UEs' processing power. We derive convergence rates for the proposed schemes to quantify their training efficiency. The analyses reveal that when the interference obeys a Gaussian distribution, the proposed algorithm recovers the convergence rate of server-based FL; if the interference distribution is heavy-tailed, then the heavier the tail, the slower the algorithm converges. Nonetheless, the system run time can be largely reduced by enabling computation in parallel with communication, and the gain is particularly pronounced when communication latency is high. These findings are corroborated via extensive simulations.
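
The core mechanism can be sketched in a few lines, under assumptions of my own (unit channel gains, additive noise only, the matched-filtering front end elided): when all UEs transmit their local gradients as analog waveforms simultaneously, the channel itself performs the summation, so the receiver obtains a noisy global average with no explicit server-side aggregation.

```python
import numpy as np

# Simplified over-the-air aggregation sketch (assumed unit channel gains
# and additive Gaussian noise; not the paper's full system model). The
# superposition property of the wireless channel sums the K transmitted
# gradient vectors "for free".

rng = np.random.default_rng(0)
K, d = 50, 8                                 # users, model dimension
local_grads = rng.normal(size=(K, d))        # per-UE local gradients
noise = rng.normal(scale=0.1, size=d)        # Gaussian interference/noise

received = local_grads.sum(axis=0) + noise   # superposition over the air
global_estimate = received / K               # scale to an average

# Error of the over-the-air estimate vs. the exact server-side average.
print(np.linalg.norm(global_estimate - local_grads.mean(axis=0)))
```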

After nine years in quantum computing, and given the long timeline for developing good-quality quantum processing units, it is time to rethink how we advance quantum computing research. Rather than waiting for quantum hardware technologies to mature, we need to start assessing, in tandem, the impact that quantum computing will have on various scientific fields. For this purpose, however, we need a complementary but quite different approach from the one proposed by the NISQ vision, which is heavily focused on, and burdened by, engineering challenges. That is why we propose and advocate the PISQ approach: Perfect Intermediate-Scale Quantum computing, based on the already-known concept of perfect qubits. This allows researchers to focus much more on the development of new applications, by defining algorithms in terms of perfect qubits and evaluating them on quantum computing simulators executed on supercomputers. It is not a long-term solution, but it allows universities to pursue research on quantum logic and algorithms today, and companies to start developing their internal know-how on quantum solutions.
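
As a tiny illustration of the idea (my sketch, not the paper's framework), evaluating an algorithm on perfect qubits amounts to exact, noiseless statevector simulation on classical hardware:

```python
import numpy as np

# Perfect-qubit evaluation via exact statevector simulation (illustrative
# sketch): prepare a Bell pair with ideal H and CNOT gates, no noise model.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2 = np.eye(2)

state = np.zeros(4); state[0] = 1.0          # |00>
state = np.kron(H, I2) @ state               # H on the first qubit
state = CNOT @ state                         # entangle: perfect Bell pair

print(np.round(state, 3))                    # [0.707 0. 0. 0.707]
```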

The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and the trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on the trace distance and the fidelity. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as "quantum combs"). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate many of these algorithms using a variational approach with parameterized quantum circuits, and find that the simulations converge well in both the noiseless and noisy scenarios for all examples considered. Furthermore, the noisy simulations exhibit resilience to parameter noise.
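
The quantities being estimated have classical reference values that are cheap to compute for small systems, which is how variational estimates are typically sanity-checked. The sketch below (mine, not the paper's algorithms) computes the trace distance from the eigenvalues of the difference of density matrices, and the fidelity via the closed form valid for single-qubit states, F(rho, sigma) = Tr(rho sigma) + 2 sqrt(det(rho) det(sigma)).

```python
import numpy as np

# Classical reference values for the distinguishability measures that the
# variational algorithms estimate (checking sketch, not the paper's
# quantum algorithms).

def trace_distance(rho, sigma):
    """T(rho, sigma) = (1/2) * sum of |eigenvalues of rho - sigma|."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def qubit_fidelity(rho, sigma):
    """Closed form for 2x2 density matrices (single qubits only)."""
    return (np.trace(rho @ sigma)
            + 2 * np.sqrt(np.linalg.det(rho) * np.linalg.det(sigma))).real

rho = np.array([[1, 0], [0, 0]], dtype=complex)    # pure state |0><0|
sigma = np.eye(2, dtype=complex) / 2               # maximally mixed state
print(trace_distance(rho, sigma), qubit_fidelity(rho, sigma))  # 0.5 0.5
```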

Upcoming HEP experiments, e.g., at the HL-LHC, are expected to increase the volume of generated data by at least an order of magnitude. To retain the ability to analyze this influx of data, full exploitation of modern storage hardware and systems, such as low-latency, high-bandwidth NVMe devices and distributed object stores, becomes critical. To this end, the ROOT RNTuple I/O subsystem has been designed to address the performance bottlenecks and shortcomings of ROOT's current state-of-the-art TTree I/O subsystem. RNTuple provides a backwards-incompatible redesign of the TTree binary format and access API that evolves ROOT event data I/O for the challenges of the upcoming decades. It focuses on a compact data format, on performance engineering for modern storage hardware (for instance, by issuing parallel and asynchronous I/O calls by default), and on robust interfaces that are easy to use correctly. In this contribution, we evaluate RNTuple's performance for typical HEP analysis tasks. We compare the throughput delivered by RNTuple to that of popular I/O libraries outside HEP, such as HDF5 and Apache Parquet. We demonstrate the advantages of RNTuple for HEP analysis workflows and provide an outlook on the road to its use in production.
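
A toy version of one side of such a comparison might look as follows (my sketch, not the paper's benchmark harness): write columnar event data to Parquet via pyarrow and time a full-column read, the same column-oriented access pattern on which RNTuple is evaluated.

```python
import time
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Toy columnar I/O micro-benchmark for the Parquet side of the comparison
# (illustrative only; column names and data are made up).

n = 1_000_000
table = pa.table({
    "pt":  np.random.exponential(30.0, n).astype(np.float32),
    "eta": np.random.uniform(-2.5, 2.5, n).astype(np.float32),
})

pq.write_table(table, "events.parquet", compression="zstd")

start = time.perf_counter()
readback = pq.read_table("events.parquet", columns=["pt"])  # one column
elapsed = time.perf_counter() - start
print(f"read {readback.num_rows} rows in {elapsed * 1e3:.1f} ms")
```

Real measurements would of course control for caching, storage medium, and compression settings, which is exactly the kind of methodology the contribution describes.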

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. Despite this achievement, however, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical thresholds for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility for new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.
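
As a baseline for the algorithms such a survey covers, here is a minimal random-search HPO loop (illustrative only; the objective function is a made-up stand-in for a validation-loss run, and real surveys compare this against far more sophisticated methods such as Bayesian optimization and Hyperband):

```python
import random

# Minimal random-search HPO sketch: sample hyper-parameters from defined
# ranges, evaluate a (stand-in) objective, and keep the best trial.

def objective(lr, width):            # hypothetical validation loss
    return (lr - 0.01) ** 2 + (width - 64) ** 2 / 1e4

search_space = {
    "lr":    lambda: 10 ** random.uniform(-4, -1),        # log-uniform
    "width": lambda: random.choice([16, 32, 64, 128, 256]),
}

best, best_loss = None, float("inf")
for _ in range(100):                 # fixed trial budget
    trial = {name: draw() for name, draw in search_space.items()}
    loss = objective(**trial)
    if loss < best_loss:
        best, best_loss = trial, loss

print(best, best_loss)
```

Sampling the learning rate log-uniformly rather than uniformly reflects the survey's point about defining sensible value ranges per hyper-parameter.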
