
Quantum teleportation allows one to transmit an arbitrary qubit from point A to point B using a pair of (pre-shared) entangled qubits and classical bits of information. The conventional protocol for teleportation uses two bits of classical information and assumes that the sender has access to only one copy of the arbitrary qubit to be sent. Here, we ask whether we can do better than two bits of classical information if the sender has access to multiple copies of the qubit to be teleported. We place no restrictions on the qubit states. To this end, we propose a modified quantum teleportation protocol that allows Alice to reset the state of the entangled pair to its initial state using only local operations. As a result, the proposed teleportation protocol requires the transmission of only one classical bit with a probability greater than one-half. This has implications for efficient quantum communications and for the security of quantum cryptographic protocols based on quantum entanglement.
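
For reference, and not as part of the paper's contribution, recall the baseline that the abstract modifies. In the standard protocol, Alice holds an unknown qubit $|\psi\rangle_A = \alpha|0\rangle + \beta|1\rangle$ and one half of a shared Bell pair $|\Phi^+\rangle_{A'B} = (|00\rangle + |11\rangle)/\sqrt{2}$; rewriting the joint state in the Bell basis on Alice's two qubits gives
$$|\psi\rangle_A |\Phi^+\rangle_{A'B} = \tfrac{1}{2}\big[\,|\Phi^+\rangle_{AA'}(\alpha|0\rangle+\beta|1\rangle)_B + |\Phi^-\rangle_{AA'}(\alpha|0\rangle-\beta|1\rangle)_B + |\Psi^+\rangle_{AA'}(\beta|0\rangle+\alpha|1\rangle)_B + |\Psi^-\rangle_{AA'}(\alpha|1\rangle-\beta|0\rangle)_B\,\big].$$
All four Bell-measurement outcomes are equally likely, so Alice must normally send two classical bits to tell Bob which of the corrections $I$, $Z$, $X$, or $ZX$ to apply; the protocol above aims to get below this, with high probability, when multiple copies of the unknown qubit are available.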

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
April 20, 2022

In the coming years, quantum networks will allow quantum applications to thrive thanks to the new opportunities offered by end-to-end entanglement of qubits on remote hosts via quantum repeaters. On a geographical scale, this will lead to the dawn of the Quantum Internet. While a full-blown deployment is yet to come, the research community is already working on a variety of individual enabling technologies and solutions. In this paper, with the guidance of extensive simulations, we take a broader view and investigate the problems of Quality of Service (QoS) and provisioning in the context of quantum networks, which are very different from their counterparts in classical data networks due to some of their fundamental properties. Our work leads the way towards a new class of studies that will allow the research community to better understand the challenges of quantum networks and their potential commercial exploitation.

Quantum communications is a promising technology that will play a fundamental role in the design of future networks. In fact, significant efforts are being dedicated by both the quantum physics and the classical communications communities to developing new architectures, solutions, and practical implementations of quantum communication networks (QCNs). Although these efforts have led to various advances in today's technologies, there still exists a non-trivial gap between the research efforts of the two communities on designing and optimizing QCN performance. For instance, most prior works by the classical communications community ignore important quantum physics-based constraints when designing QCNs. For example, many works on entanglement distribution do not account for the decoherence of qubits inside quantum memories and, thus, their designs become impractical since they assume an infinite lifetime for quantum states. In this paper, we introduce a novel framework, dubbed physics-informed QCNs, for designing and analyzing the performance of QCNs by relying on the quantum physics principles that underlie the different QCN components. The need for the proposed approach is then assessed and its fundamental role in designing practical QCNs is analyzed across various open research areas. Moreover, we identify novel physics-informed performance metrics and controls that enable QCNs to leverage the state-of-the-art advancements in quantum technologies to enhance their performance. Finally, we analyze multiple pressing challenges and open research directions in QCNs that must be treated using a physics-informed approach to yield practically viable results. Ultimately, this work attempts to bridge the gap between the classical communications and the quantum physics communities in the area of QCNs to foster the development of future communication networks (6G and beyond, and the quantum Internet).
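
As one concrete illustration of the memory-decoherence constraint mentioned above (our own sketch, not taken from the paper): in a simple depolarizing-memory model, the fidelity of a stored Bell pair decays exponentially from its initial value toward the fully mixed value of 1/4, so a design that assumes an infinite storage lifetime overestimates the usable fidelity. The parameters below are hypothetical.

    import math

    def stored_fidelity(f0, t, t_coh):
        # Illustrative depolarizing-memory model: the Bell-pair fidelity decays
        # exponentially from its initial value f0 toward the fully mixed value 1/4.
        return 0.25 + (f0 - 0.25) * math.exp(-t / t_coh)

    # Hypothetical numbers: initial fidelity 0.95, memory coherence time 10 ms.
    for wait_ms in (0.0, 1.0, 5.0, 10.0, 50.0):
        print("wait = %5.1f ms -> fidelity ~ %.3f" % (wait_ms, stored_fidelity(0.95, wait_ms, 10.0)))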

In this work a quantum analogue of Bayesian statistical inference is considered. Based on the notion of an instrument, we propose a sequential measurement scheme from which the observations needed for statistical inference are obtained. We further put forward a quantum analogue of the Bayes rule, which states how the prior normal state of a quantum system updates under those observations. We next generalize the fundamental notions and results of Bayesian statistics according to the quantum Bayes rule. It is also noted that our theory retains the classical one as its special case. Finally, we investigate the limit of the posterior normal state as the number of observations tends to infinity.
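
For orientation, and as standard background rather than the paper's specific construction: classically, Bayes' rule updates a prior $p(\theta)$ to $p(\theta \mid x) = p(x \mid \theta)\,p(\theta)/p(x)$. In the instrument-based setting, an instrument assigns to each observation $x$ a completely positive map $\mathcal{E}_x$, the observation occurs with probability $\mathrm{tr}[\mathcal{E}_x(\rho)]$, and the prior normal state $\rho$ updates to
$$\rho_x = \frac{\mathcal{E}_x(\rho)}{\mathrm{tr}[\mathcal{E}_x(\rho)]},$$
which reduces to the classical rule when all the operators involved commute. A quantum Bayes rule of the kind described above can be read as iterating this update over a sequence of measurement outcomes.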

In the upcoming 6G era, existing terrestrial networks are evolving toward space-air-ground integrated networks (SAGIN), providing ultra-high data rates, seamless network coverage, and ubiquitous intelligence for applications and services. However, conventional communications in SAGIN still face data confidentiality issues. Fortunately, Quantum Key Distribution (QKD) over SAGIN can provide information-theoretic security for communications in SAGIN using quantum cryptography. Therefore, in this paper, we propose quantum-secured SAGIN, which uses quantum mechanics to protect the data channels between space, air, and ground nodes and thereby achieves provably secure communications. Moreover, we propose a universal QKD service provisioning framework to minimize the cost of QKD services under the uncertainty and dynamics of communications in quantum-secured SAGIN. In this framework, fiber-based QKD services are deployed in passive optical networks, with the advantages of low loss and high stability, while the widely covered and flexible satellite- and UAV-based QKD services are provisioned as a supplement during the real-time data transmission phase. Finally, to examine the effectiveness of the proposed concept and framework, a case study of quantum-secured SAGIN in the Metaverse is conducted, in which the uncertain and dynamic factors of secure communications in Metaverse applications are effectively resolved by the proposed framework.
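
A toy numerical illustration of the two-stage structure described above (our own sketch with hypothetical costs and scenarios, not the paper's formulation): fiber-based QKD capacity is reserved in advance at a low unit cost, satellite/UAV-based key rate is purchased on demand during real-time transmission to cover any shortfall, and the expected total cost is evaluated over a few demand scenarios.

    # Hypothetical two-stage provisioning sketch: reserve fiber QKD key rate up front,
    # buy satellite/UAV QKD key rate on demand for any shortfall.
    FIBER_COST = 1.0      # cost per unit of reserved fiber key rate (hypothetical)
    ONDEMAND_COST = 4.0   # cost per unit of satellite/UAV key rate (hypothetical)

    # Demand scenarios (key-rate units) and their probabilities.
    scenarios = [(8.0, 0.5), (12.0, 0.3), (20.0, 0.2)]

    def expected_cost(reserved):
        cost = FIBER_COST * reserved
        for demand, prob in scenarios:
            shortfall = max(0.0, demand - reserved)
            cost += prob * ONDEMAND_COST * shortfall
        return cost

    best = min((expected_cost(r), r) for r in range(0, 25))
    print("best reservation ~ %d units, expected cost ~ %.2f" % (best[1], best[0]))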

We study online convex optimization with switching costs, a practically important but also extremely challenging problem due to the lack of complete offline information. By tapping into the power of machine learning (ML) based optimizers, ML-augmented online algorithms (also referred to as expert calibration in this paper) have been emerging as the state of the art, with provable worst-case performance guarantees. Nonetheless, by using the standard practice of training an ML model as a standalone optimizer and plugging it into an ML-augmented algorithm, the average cost performance can be even worse than purely using ML predictions. In order to address the "how to learn" challenge, we propose EC-L2O (expert-calibrated learning to optimize), which trains an ML-based optimizer by explicitly taking into account the downstream expert calibrator. To accomplish this, we propose a new differentiable expert calibrator that generalizes regularized online balanced descent and offers a provably better competitive ratio than pure ML predictions when the prediction error is large. For training, our loss function is a weighted sum of two different losses -- one minimizing the average ML prediction error for better robustness, and the other one minimizing the post-calibration average cost. We also provide theoretical analysis for EC-L2O, highlighting that expert calibration can even be beneficial for the average cost performance and that the high-percentile tail ratio of the cost achieved by EC-L2O to that of the offline optimal oracle (i.e., tail cost ratio) can be bounded. Finally, we test EC-L2O by running simulations for sustainable datacenter demand response. Our results demonstrate that EC-L2O can empirically achieve a lower average cost as well as a lower competitive ratio than the existing baseline algorithms.
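
To make the weighted training loss concrete, here is a minimal sketch in our own notation (not the authors' implementation): one term penalizes the average ML prediction error, and the other penalizes the average cost incurred after the predictions pass through the differentiable expert calibrator. The toy cost function and its parameters are hypothetical.

    import numpy as np

    def training_loss(predictions, targets, calibrated_actions, cost_fn, lam=0.5):
        # Term 1: average ML prediction error (encourages robustness).
        prediction_error = np.mean((np.asarray(predictions) - np.asarray(targets)) ** 2)
        # Term 2: average cost actually incurred after the differentiable expert
        # calibrator has mapped the predictions into actions.
        post_calibration_cost = cost_fn(calibrated_actions)
        return lam * prediction_error + (1.0 - lam) * post_calibration_cost

    # Toy usage with a quadratic hitting cost plus a switching cost on the action sequence.
    def toy_cost(actions, targets=np.array([1.0, 2.0, 1.5]), beta=0.1):
        actions = np.asarray(actions)
        hitting = np.sum((actions - targets) ** 2)
        switching = beta * np.sum(np.abs(np.diff(actions, prepend=0.0)))
        return hitting + switching

    print(training_loss([1.1, 1.9, 1.4], [1.0, 2.0, 1.5], [1.05, 1.95, 1.45], toy_cost))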

While the use of digital agents to support crucial decision making is increasing, trust in the suggestions made by these agents is hard to achieve. However, such trust is essential to profit from their application, resulting in a need for explanations of both the decision-making process and the model. For many systems, such as common black-box models, achieving at least some explainability requires complex post-processing, while other systems profit from being, to a reasonable extent, inherently interpretable. We propose a rule-based learning system specifically conceptualised for, and thus especially suited to, these scenarios. Its models are inherently transparent and easily interpretable by design. One key innovation of our system is that the rules' conditions and the selection of which rules compose a problem's solution are evolved separately. We utilise independent rule fitnesses, which allows users to specifically tailor their model structure to fit the given requirements for explainability.
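
A minimal sketch of the separation described above, using our own hypothetical names rather than the system's actual data structures: each rule carries its own condition and an independently evaluated fitness, while a separate step decides which of the already-evaluated rules jointly compose the solution.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        lower: float          # condition: lower bound of the matched input interval
        upper: float          # condition: upper bound of the matched input interval
        prediction: float     # local (sub)model attached to the rule
        fitness: float = 0.0  # evaluated independently of any particular solution

    def matches(rule, x):
        return rule.lower <= x <= rule.upper

    def evaluate_rule(rule, xs, ys):
        # Independent rule fitness: error only on the examples the rule matches.
        matched = [(x, y) for x, y in zip(xs, ys) if matches(rule, x)]
        if matched:
            rule.fitness = -sum((y - rule.prediction) ** 2 for _, y in matched) / len(matched)

    def compose_solution(rules, k):
        # Separate step: select which already-evaluated rules form the solution.
        return sorted(rules, key=lambda r: r.fitness, reverse=True)[:k]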

We study the problem of testing whether a function $f: \mathbb{R}^n \to \mathbb{R}$ is a polynomial of degree at most $d$ in the \emph{distribution-free} testing model. Here, the distance between functions is measured with respect to an unknown distribution $\mathcal{D}$ over $\mathbb{R}^n$ from which we can draw samples. In contrast to previous work, we do not assume that $\mathcal{D}$ has finite support. We design a tester that given query access to $f$, and sample access to $\mathcal{D}$, makes $(d/\varepsilon)^{O(1)}$ many queries to $f$, accepts with probability $1$ if $f$ is a polynomial of degree $d$, and rejects with probability at least $2/3$ if every degree-$d$ polynomial $P$ disagrees with $f$ on a set of mass at least $\varepsilon$ with respect to $\mathcal{D}$. Our result also holds under mild assumptions when we receive only a polynomial number of bits of precision for each query to $f$, or when $f$ can only be queried on rational points representable using a logarithmic number of bits. Along the way, we prove a new stability theorem for multivariate polynomials that may be of independent interest.

Inspired by Hosoyamada et al.'s work [14], we propose a new quantum meet-in-the-middle (QMITM) attack on the $r$-round ($r \ge 7$) Feistel construction to reduce the time complexity. Similar to Hosoyamada et al.'s work, our attack on 7-round Feistel is also based on Guo et al.'s classical meet-in-the-middle (MITM) attack [13]. The classical MITM attack spends most of its time on three tasks: constructing the lookup table, querying data, and finding a match. Therefore, parallel Grover search processors are used to reduce the time of constructing the lookup table, and we adjust the truncated differentials of the 5-round distinguisher proposed by Guo et al. to balance the complexities of constructing the lookup table and querying data. Finally, we introduce a quantum claw-finding algorithm to find a match, further reducing the time; the subkeys can then be recovered from this match. Furthermore, for the $r$-round ($r > 7$) Feistel construction, we treat the above attack on the first 7 rounds as an inner loop and use Grover's algorithm to search the subkeys of the last $r-7$ rounds as an outer loop. In summary, the total time complexity of our attack on $r$ rounds ($r \ge 7$) is only $O(2^{2n/3+(r-7)n/4})$, which is lower than that of existing classical and quantum attacks. Moreover, our attack belongs to the Q1 model and is more practical than other quantum attacks.
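
Reading the stated bound backwards, and under the usual assumption (ours, not stated above) that each round of the balanced Feistel construction uses an $n/2$-bit subkey: the inner 7-round attack costs on the order of $2^{2n/3}$, and Grover search over the $(r-7)\cdot n/2$ remaining subkey bits contributes a square-root factor, so the total is
$$O\!\big(2^{2n/3}\big) \cdot O\!\Big(\sqrt{2^{(r-7)n/2}}\Big) = O\!\big(2^{2n/3 + (r-7)n/4}\big),$$
which recovers the 7-round cost when $r = 7$.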

The performance of a quantum information processing protocol is ultimately judged by distinguishability measures that quantify how distinguishable the actual result of the protocol is from the ideal case. The most prominent distinguishability measures are those based on the fidelity and trace distance, due to their physical interpretations. In this paper, we propose and review several algorithms for estimating distinguishability measures based on trace distance and fidelity. The algorithms can be used for distinguishing quantum states, channels, and strategies (the last also known in the literature as "quantum combs"). The fidelity-based algorithms offer novel physical interpretations of these distinguishability measures in terms of the maximum probability with which a single prover (or competing provers) can convince a verifier to accept the outcome of an associated computation. We simulate many of these algorithms by using a variational approach with parameterized quantum circuits. We find that the simulations converge well in both the noiseless and noisy scenarios, for all examples considered. Furthermore, the noisy simulations exhibit parameter noise resilience.
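
For reference, the two underlying measures have standard definitions (quoted here as background, using the squared convention for fidelity): for density operators $\rho$ and $\sigma$, the normalized trace distance is $T(\rho,\sigma) = \tfrac{1}{2}\|\rho - \sigma\|_1$, and the fidelity is $F(\rho,\sigma) = \|\sqrt{\rho}\sqrt{\sigma}\|_1^2 = \big(\mathrm{tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\big)^2$. The algorithms discussed above estimate quantities of this kind for states, channels, and strategies.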

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook the confounding effects, and hence the estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error can be greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of estimating the causal quantities between the classical estimators and the proposed estimators. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural network-based models, under different simulated datasets that exhibit different levels of causality, different degrees of nonlinearity, and different distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial if the causal effects are accounted for correctly.
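
As an illustration of why adjusting for confounding matters in this setting, here is a textbook inverse-propensity-weighting sketch (our own, not necessarily the estimator proposed in the paper): the naive difference in mean repayment between the two credit-decision groups is compared with a weighted version that reweights each observation by its estimated propensity of receiving the decision it actually received.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def naive_and_ipw_effect(X, treatment, outcome):
        # Compare the naive mean difference with an inverse-propensity-weighted (IPW)
        # estimate of the average treatment effect. Illustrative only.
        X, t, y = np.asarray(X), np.asarray(treatment), np.asarray(outcome)
        naive = y[t == 1].mean() - y[t == 0].mean()
        # Estimate propensity scores P(T = 1 | X) from the observed covariates.
        p = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
        p = np.clip(p, 1e-3, 1 - 1e-3)  # avoid extreme weights
        ipw = np.mean(t * y / p - (1 - t) * y / (1 - p))
        return naive, ipw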
