
Secure multi-party computation (SMPC) protocols allow several mutually distrustful parties to jointly compute a function on their inputs. In this paper, we introduce a protocol that lifts classical SMPC to quantum SMPC in a composable and statistically secure way, even when only a single party is honest. Unlike previous quantum SMPC protocols, our proposal requires only very limited quantum resources from all but one party; it suffices that the weak parties, i.e., the clients, are able to prepare single-qubit states in the X-Y plane. The novel quantum SMPC protocol is constructed in a naturally modular way and relies on a new technique for quantum verification that is of independent interest. This verification technique requires the remote preparation of states in only a single plane of the Bloch sphere. In the course of proving the security of the new verification protocol, we also uncover a fundamental invariance that is inherent to measurement-based quantum computing.
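To make the clients' only quantum capability concrete, here is a minimal numpy sketch of single-qubit states in the X-Y plane of the Bloch sphere. The specific angle set $k\pi/4$ is the one standard in related blind/verified delegated-computation protocols, an assumption on our part rather than a detail stated in the abstract:

```python
import numpy as np

def xy_plane_state(theta):
    """Single-qubit state (|0> + e^{i*theta}|1>)/sqrt(2), whose Bloch
    vector (cos theta, sin theta, 0) lies in the X-Y plane."""
    return np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)

# The eight angles k*pi/4 give the states commonly used in blind and
# verified delegated quantum computation (illustrative choice).
states = [xy_plane_state(k * np.pi / 4) for k in range(8)]

# Every X-Y-plane state is unbiased in the computational basis.
for s in states:
    assert np.isclose(abs(s[0]) ** 2, 0.5) and np.isclose(abs(s[1]) ** 2, 0.5)
```

Preparing such states requires no entangling operations on the client side, which is why this requirement is considered "very limited".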


We consider quantum and classical versions of a multi-party function computation problem with $n$ players, where players $2, \dots, n$ need to communicate appropriate information to player 1 so that a ``generalized'' inner product function with an appropriate promise can be calculated. The communication complexity of a protocol is the total number of bits that need to be communicated. When $n$ is prime and for our chosen function, we exhibit a quantum protocol (with complexity $(n-1) \log n$ bits) and a classical protocol (with complexity $(n-1)^2 (\log n^2)$ bits). In the quantum protocol, the players have access to entangled qudits, but the communication is still classical. Furthermore, we present an integer linear programming formulation for determining a lower bound on the classical communication complexity. This demonstrates that our quantum protocol is strictly better than classical protocols.
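As a toy illustration of the setting, the sketch below uses a coordinate-wise product summed mod $n$ as a stand-in for the generalized inner product, and counts the bits of the naive classical protocol in which players $2,\dots,n$ simply ship their inputs to player 1. Both the function and the cost formula are our own illustrative assumptions; the paper's function and promise are more specific:

```python
from math import prod, ceil, log2

def generalized_inner_product(inputs, n):
    """Toy stand-in: coordinate-wise product over all players' length-m
    vectors, summed mod n (n prime). Not the paper's exact function."""
    m = len(inputs[0])
    return sum(prod(x[j] for x in inputs) for j in range(m)) % n

def naive_cost_bits(n, m):
    """Baseline classical cost: players 2..n each send their whole
    length-m vector over F_n to player 1."""
    return (n - 1) * m * ceil(log2(n))
```

Against this naive $(n-1)\, m \lceil \log_2 n\rceil$-bit baseline, the point of the paper is that entanglement lets the players get away with only $(n-1)\log n$ classical bits in total.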

Fitness functions map large combinatorial spaces of biological sequences to properties of interest. Inferring these multimodal functions from experimental data is a central task in modern protein engineering. Global epistasis models are an effective and physically grounded class of models for estimating fitness functions from observed data. These models assume that a sparse latent function is transformed by a monotonic nonlinearity to emit measurable fitness. Here we demonstrate that minimizing contrastive loss functions, such as the Bradley-Terry loss, is a simple and flexible technique for extracting the sparse latent function implied by global epistasis. We argue by way of a fitness-epistasis uncertainty principle that the nonlinearities in global epistasis models can produce observed fitness functions that do not admit sparse representations, and thus may be inefficient to learn from observations when using a Mean Squared Error (MSE) loss (a common practice). We show that contrastive losses are able to accurately estimate a ranking function from limited data even in regimes where MSE is ineffective. We validate the practical utility of this insight by showing that contrastive loss functions result in consistently improved performance on benchmark tasks.
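A minimal sketch of the Bradley-Terry loss named above, written as a plain pairwise sum (any real implementation would batch and subsample pairs). The key property it illustrates is that the loss depends on the observed fitness only through its ranking, so it is unaffected by the monotonic nonlinearity a global epistasis model applies to the latent function:

```python
import numpy as np

def bradley_terry_loss(f_pred, y_obs):
    """Mean Bradley-Terry (contrastive) loss over all ordered pairs with
    y_obs[i] > y_obs[j]: -log sigmoid(f_pred[i] - f_pred[j]).
    Invariant to any monotonic transform of y_obs, since only the
    ranking of observations enters the loss."""
    total, count = 0.0, 0
    for i in range(len(y_obs)):
        for j in range(len(y_obs)):
            if y_obs[i] > y_obs[j]:
                # -log sigmoid(d) = log(1 + exp(-d)), computed stably
                total += np.logaddexp(0.0, -(f_pred[i] - f_pred[j]))
                count += 1
    return total / max(count, 1)
```

Because applying a monotone nonlinearity to `y_obs` leaves every comparison `y_obs[i] > y_obs[j]` unchanged, minimizing this loss targets the sparse latent function directly rather than the warped observed fitness.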

Oracle networks feeding off-chain information to a blockchain are required to solve a distributed agreement problem, since these networks receive information from multiple sources and at different times. We make a key observation that in most cases, the values obtained by oracle network nodes from multiple information sources are in close proximity. We define a notion of agreement distance and leverage the availability of a state machine replication (SMR) service to solve this distributed agreement problem with an honest simple majority of nodes, instead of the conventional requirement of an honest super majority of nodes. Values from multiple nodes being in close proximity, and therefore forming a coherent cluster, is one of the keys to the protocol's efficiency. Our asynchronous protocol also embeds a fallback mechanism if the coherent cluster formation fails. Through simulations using real-world exchange data from seven prominent exchanges, we show that even for very small agreement distance values, the protocol would be able to form coherent clusters and therefore can safely tolerate up to a $1/2$ fraction of Byzantine nodes. We also show that, for a small statistical error, it is possible to choose the size of the oracle network to be significantly smaller than the entire system tolerating up to a $1/3$ fraction of Byzantine failures. This allows the oracle network to operate much more efficiently and horizontally scale much better.
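The coherent-cluster idea can be sketched as a sliding window over the sorted reported values: accept a cluster only if a simple majority of the $n$ nodes report values within the agreement distance of each other. This is our illustrative sketch of the cluster-formation check, not the paper's full asynchronous protocol:

```python
def coherent_cluster(values, delta, n):
    """Return the largest set of reported values pairwise within
    agreement distance `delta`, but only if it covers an honest simple
    majority (> n/2) of the n oracle nodes; otherwise None, signalling
    that the fallback mechanism must take over."""
    vs = sorted(values)
    best, i = [], 0
    for j in range(len(vs)):
        while vs[j] - vs[i] > delta:
            i += 1                    # shrink window until it fits delta
        if j - i + 1 > len(best):
            best = vs[i:j + 1]        # widest coherent window so far
    return best if len(best) > n // 2 else None
```

With price-like data, three of four nodes reporting within a small `delta` form a coherent cluster even if the fourth is Byzantine and reports an outlier.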

Multipath QUIC is a transport protocol that allows the use of multiple network interfaces for a single connection. It thereby offers, on the one hand, the possibility to aggregate higher throughput, while, on the other hand, multiple paths can also be used to transmit data redundantly. Selective redundancy combines these two applications and thereby offers the potential to transmit time-critical data. This paper considers scenarios where data with real-time requirements are transmitted redundantly while, at the same time, non-critical data should make use of the aggregated throughput. A new model called congestion window reservation is proposed, which enables an immediate transmission of time-critical data. The performance of this method and its combination with selective redundancy is evaluated using Emulab with real data. The results show that this technique leads to lower end-to-end latency and higher reliability for periodically generated priority data.
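The congestion window reservation idea can be sketched as follows: hold back a fixed share of the congestion window for priority data so a time-critical packet never has to wait for bulk traffic to drain. Class and field names here are our own illustration, not identifiers from the paper or any QUIC implementation:

```python
class ReservedCwnd:
    """Sketch of congestion-window reservation: bulk data may only use
    cwnd minus the reserved share, so priority data always finds room
    and can be sent immediately."""

    def __init__(self, cwnd_bytes, reserved_bytes):
        self.cwnd = cwnd_bytes
        self.reserved = reserved_bytes
        self.in_flight_bulk = 0
        self.in_flight_priority = 0

    def can_send(self, nbytes, priority=False):
        in_flight = self.in_flight_bulk + self.in_flight_priority
        if priority:
            # priority data may use the whole window, including the reserve
            return in_flight + nbytes <= self.cwnd
        # bulk traffic must not eat into the reserved share
        return self.in_flight_bulk + nbytes <= self.cwnd - self.reserved
```

When the bulk flow has filled its allowance, a newly generated priority packet is still admitted instantly, which is the source of the latency benefit reported above.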

We study the connections between sorting and the binary search tree (BST) model, with an aim towards showing that the fields are connected more deeply than is currently appreciated. While any BST can be used to sort by inserting the keys one-by-one, this is a very limited relationship and importantly says nothing about parallel sorting. We show what we believe to be the first formal relationship between the BST model and sorting. Namely, we show that a large class of sorting algorithms, which includes mergesort, quicksort, insertion sort, and almost every instance-optimal sorting algorithm, are equivalent in cost to offline BST algorithms. Our main theoretical tool is the geometric interpretation of the BST model introduced by Demaine et al., which finds an equivalence between searches on a BST and point sets in the plane satisfying a certain property. To give an example of the utility of our approach, we introduce the log-interleave bound, a measure of the information-theoretic complexity of a permutation $\pi$, which is within a $\lg \lg n$ multiplicative factor of a known lower bound in the BST model; we also devise a parallel sorting algorithm with polylogarithmic span that sorts a permutation $\pi$ using comparisons proportional to its log-interleave bound. Our aforementioned result on sorting and offline BST algorithms can be used to show existence of an offline BST algorithm whose cost is within a constant factor of the log-interleave bound of any permutation $\pi$.
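The geometric interpretation of Demaine et al. invoked above identifies BST executions with "arborally satisfied" point sets: every pair of points not sharing a row or column must have a third point of the set inside (or on) the rectangle they span. A brute-force checker of that known condition (our illustration, not code from the paper) looks like:

```python
from itertools import combinations

def arborally_satisfied(points):
    """Check the arboral satisfaction condition: for every pair of points
    p, q with distinct x- and y-coordinates, the closed axis-aligned
    rectangle spanned by p and q must contain another point of the set."""
    pts = set(points)
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        if x1 == x2 or y1 == y2:
            continue  # degenerate rectangle, nothing to satisfy
        lox, hix = sorted((x1, x2))
        loy, hiy = sorted((y1, y2))
        if not any(lox <= x <= hix and loy <= y <= hiy
                   for (x, y) in pts - {(x1, y1), (x2, y2)}):
            return False
    return True
```

For example, the two-point set `{(1,1), (2,2)}` is not satisfied, while adding the corner `(1,2)` satisfies it; minimal satisfied supersets correspond to optimal offline BST executions, which is the bridge the paper exploits between sorting costs and BST costs.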

The $\Sigma$-QMAC problem is introduced, involving $S$ servers, $K$ classical ($\mathbb{F}_d$) data streams, and $T$ independent quantum systems. Data stream ${\sf W}_k, k\in[K]$ is replicated at a subset of servers $\mathcal{W}(k)\subset[S]$, and quantum system $\mathcal{Q}_t, t\in[T]$ is distributed among a subset of servers $\mathcal{E}(t)\subset[S]$ such that Server $s\in\mathcal{E}(t)$ receives subsystem $\mathcal{Q}_{t,s}$ of $\mathcal{Q}_t=(\mathcal{Q}_{t,s})_{s\in\mathcal{E}(t)}$. Servers manipulate their quantum subsystems according to their data and send the subsystems to a receiver. The total download cost is $\sum_{t\in[T]}\sum_{s\in\mathcal{E}(t)}\log_d|\mathcal{Q}_{t,s}|$ qudits, where $|\mathcal{Q}|$ is the dimension of $\mathcal{Q}$. The states and measurements of $(\mathcal{Q}_t)_{t\in[T]}$ are required to be separable across $t\in[T]$ throughout, but for each $t\in[T]$, the subsystems of $\mathcal{Q}_{t}$ can be prepared initially in an arbitrary (data-independent) entangled state, manipulated arbitrarily by the respective servers, and measured jointly by the receiver. From the measurements, the receiver must recover the sum of all data streams. Rate is defined as the number of dits ($\mathbb{F}_d$ symbols) of the desired sum computed per qudit of download. The capacity of the $\Sigma$-QMAC, i.e., the supremum of achievable rates, is characterized for arbitrary data and entanglement distributions $\mathcal{W}, \mathcal{E}$. Coding based on the $N$-sum box abstraction is optimal in every case.

Quantum technology is increasingly relying on specialised statistical inference methods for analysing quantum measurement data. This motivates the development of "quantum statistics", a field that is shaping up at the overlap of quantum physics and "classical" statistics. One of the less investigated topics to date is that of statistical inference for infinite-dimensional quantum systems, which can be seen as a quantum counterpart of non-parametric statistics. In this paper we analyse the asymptotic theory of quantum statistical models consisting of ensembles of quantum systems which are identically prepared in a pure state. In the limit of large ensembles we establish the local asymptotic equivalence (LAE) of this i.i.d. model to a quantum Gaussian white noise model. We use the LAE result to establish minimax rates for the estimation of pure states belonging to Hermite-Sobolev classes of wave functions. Moreover, for quadratic functional estimation of the same states we note an elbow effect in the rates, whereas for testing a pure state a sharp parametric rate is attained over the nonparametric Hermite-Sobolev class.

We show that the communication cost of quantum broadcast channel simulation under free entanglement assistance between the sender and the receivers is asymptotically characterized by an efficiently computable single-letter formula in terms of the channel's multipartite mutual information. Our core contribution is a new one-shot achievability result for multipartite quantum state splitting via multipartite convex splitting. As part of this, we face a general instance of the quantum joint typicality problem with arbitrarily overlapping marginals. The crucial technical ingredient to sidestep this difficulty is a conceptually novel multipartite mean-zero decomposition lemma, together with employing recently introduced complex interpolation techniques for sandwiched R\'enyi divergences. Moreover, we establish an exponential convergence of the simulation error when the communication costs are within the interior of the capacity region. As the costs approach the boundary of the capacity region moderately quickly, we show that the error still vanishes asymptotically.

Quantum networks constitute a major part of quantum technologies. They will boost distributed quantum computing drastically by providing a scalable modular architecture of quantum chips, or by establishing an infrastructure for measurement-based quantum computing. Moreover, they will provide the backbone of the future quantum internet, allowing for high margins of security. Interestingly, the advantages that quantum networks would provide for communications rely on entanglement distribution, which suffers from high latency in protocols based on Bell-pair distribution and bipartite entanglement swapping. Moreover, existing algorithms for multipartite entanglement routing suffer from intractability issues, making them unsolvable exactly in polynomial time. In this paper, we investigate a new approach for graph state distribution in quantum networks relying inherently on local quantum coding (LQC) isometries and on multipartite state transfer. Additionally, single-shot bounds for stabilizer state distribution are provided. Analogously to network coding, these bounds are shown to be achievable if appropriate isometries/stabilizer codes in relay nodes are chosen, which yields lower-latency entanglement distribution. Finally, the advantages of the protocol with respect to different figures of merit of the network are demonstrated.

Discrete event systems (DES) have been deeply developed and applied in practice, but state complexity in DES remains an important problem calling for innovative methods. With the development of quantum computing and quantum control, a natural problem is to simulate DES by means of quantum computing models and to establish {\it quantum DES} (QDES). The motivation is twofold: on the one hand, QDES have potential applications when DES are simulated and processed by quantum computers, where quantum systems are employed to simulate the evolution of states driven by discrete events; on the other hand, QDES may have essential advantages over DES concerning state complexity for modelling some practical problems. So, the goal of this paper is to establish a basic framework for QDES by using {\it quantum finite automata} (QFA) as the modelling formalism, and the supervisory control theorems of QDES are established and proved. We then present a polynomial-time algorithm to decide whether or not the controllability condition holds. In particular, we construct a number of new examples of QFA to illustrate the supervisory control of QDES and to verify the essential advantages of QDES over classical DES in state complexity.
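To give a flavour of the QFA formalism underlying this framework (this is a generic measure-once QFA sketch, not the paper's supervisory-control construction), each event symbol acts on the state vector by a unitary, and acceptance is decided by a final measurement:

```python
import numpy as np

def qfa_accept_prob(unitaries, word, psi0, accepting):
    """Acceptance probability of a measure-once QFA: apply each input
    symbol's unitary to the state vector, then measure in the
    computational basis and accept iff the outcome is an accepting state."""
    psi = np.array(psi0, dtype=complex)
    for symbol in word:
        psi = unitaries[symbol] @ psi
    return float(sum(abs(psi[q]) ** 2 for q in accepting))

# Toy 2-state QFA whose single event 'a' rotates the state by pi/4:
# after reading k symbols the amplitude on state 1 is sin(k*pi/4), so
# just two quantum states track the input length mod 8 -- a flavour of
# the state-complexity advantage over classical automata.
theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
```

For instance, starting in state 0, the word "aa" is accepted with probability 1 (total rotation $\pi/2$), while "aaaa" is accepted with probability 0 (total rotation $\pi$).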
