Quantum entanglement distribution between remote nodes is key to many promising quantum applications. Existing mechanisms have mainly focused on improving throughput and fidelity via entanglement routing or single-node scheduling. This paper considers entanglement scheduling and distribution among many source-destination pairs with different requests over an entire quantum network topology. Two practical scenarios are considered. When requests do not have deadlines, we seek to minimize the average completion time of the communication requests. If deadlines are specified, we seek to maximize the number of requests whose deadlines are met. Inspired by optimal scheduling disciplines in conventional single-queue scenarios, we design a general optimization framework for entanglement scheduling and distribution called ESDI, and develop a probabilistic protocol to implement the optimized solutions in a general buffered quantum network. We develop a discrete-time quantum network simulator for evaluation. Results show the superior performance of ESDI compared to existing solutions.
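The single-queue disciplines this framework draws on can be illustrated concretely. Below is a minimal Python sketch, not the paper's ESDI protocol itself: shortest-remaining-processing-time (SRPT) is the classic discipline minimizing average completion time, and earliest-deadline-first (EDF) the classic one when deadlines must be met. The request fields ("remaining", "deadline") are illustrative assumptions.

```python
# Hedged sketch of the two classic single-queue disciplines the paper cites
# as inspiration; field names ("remaining", "deadline") are assumptions.

def srpt_order(requests):
    """Shortest remaining processing time first: minimizes mean completion time."""
    return sorted(requests, key=lambda r: r["remaining"])

def edf_order(requests):
    """Earliest deadline first: the standard discipline for deadline-constrained requests."""
    return sorted(requests, key=lambda r: r["deadline"])

requests = [
    {"id": "A", "remaining": 5, "deadline": 12},
    {"id": "B", "remaining": 2, "deadline": 4},
    {"id": "C", "remaining": 9, "deadline": 30},
]
print([r["id"] for r in srpt_order(requests)])  # ['B', 'A', 'C']
print([r["id"] for r in edf_order(requests)])   # ['B', 'A', 'C']
```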
A generalized strategy for the design of intelligent robust control systems based on quantum/soft computing technologies is described. The reliability of hybrid intelligent controllers is increased by enabling imperfect knowledge bases to self-organize. Particular attention is paid to raising the robustness of intelligent control systems in unpredictable control situations, demonstrated through illustrative examples. A software/hardware platform and support tools for a supercomputer accelerator that models quantum algorithms on a classical computer are also described.
Fault-tolerant consensus is about reaching agreement on some of the input values in a limited time by non-faulty autonomous processes, despite failures of processes or of the communication medium. This problem is particularly challenging and costly against an adaptive adversary with full information. Bar-Joseph and Ben-Or (PODC'98) were the first to prove an absolute lower bound $\Omega(\sqrt{n/\log n})$ on the expected time complexity of consensus succeeding with probability $1$ in any classic (i.e., randomized or deterministic) message-passing network with $n$ processes, against such a strong adaptive adversary crashing processes. The seminal work of Ben-Or and Hassidim (STOC'05) broke the $\Omega(\sqrt{n/\log n})$ barrier for consensus in classic (deterministic and randomized) networks by employing quantum computing: they gave an (expected) constant-time quantum algorithm tolerating a linear number of crashes, $t<n/3$. In this paper, we improve upon that seminal work by reducing the number of quantum and communication bits to an arbitrarily small polynomial, and further to a polylogarithmic number -- though the latter at the cost of a slightly larger polylogarithmic time (still exponentially smaller than the $\Omega(\sqrt{n/\log n})$ lower bound for classic computation).
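For reference, the two time bounds contrasted above can be set side by side (notation ours, restating only the bounds quoted in the abstract):

$$\mathbb{E}[T_{\text{classic}}] = \Omega\!\left(\sqrt{n/\log n}\right), \qquad \mathbb{E}[T_{\text{quantum}}] = O(1) \quad (t < n/3).$$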
The IoT's vulnerability to network attacks has motivated the design of intrusion detection schemes (IDS) using Machine Learning (ML), with a low computational cost for online detection but intensive offline learning. Such IDS can attain high attack detection accuracy and are easily installed on servers that communicate with IoT devices. However, they are seldom evaluated in realistic operational conditions, where IDS processing may be held up by the system overload that the attacks themselves create. We therefore first present an experimental study of UDP Flood Attacks on a Local Area Network test-bed whose first line of defence is an accurate IDS using an Auto-Associative Dense Random Neural Network. The experiments reveal that during severe attacks, the packet and protocol management software overloads the multi-core server and paralyses IDS detection. We then propose and experimentally evaluate an IDS design in which decisions are made from a very small number of incoming packets, so that attacking traffic is dropped within milliseconds of the start of an attack and the paralysing effect of congestion is avoided.
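A minimal sketch of the early-decision idea follows, assuming a hypothetical per-packet anomaly score standing in for the trained auto-associative network's output; the window size, threshold, and `anomaly_score` function are illustrative, not the paper's actual parameters.

```python
# Hedged sketch: decide within the first few packets of a flow so that
# attack traffic is dropped before congestion paralyses the detector.
# WINDOW, THRESHOLD and anomaly_score() are illustrative assumptions.

WINDOW = 4          # packets inspected before a decision is forced
THRESHOLD = 0.8     # mean anomaly score above which the flow is dropped

def anomaly_score(packet) -> float:
    """Stand-in for the auto-associative network's reconstruction error."""
    return packet.get("score", 0.0)

def early_decision(packets) -> str:
    scores = [anomaly_score(p) for p in packets[:WINDOW]]
    return "drop" if sum(scores) / len(scores) > THRESHOLD else "accept"

flow = [{"score": 0.95}, {"score": 0.90}, {"score": 0.85}, {"score": 0.99}]
print(early_decision(flow))  # drop
```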
The approximate stabilizer rank of a quantum state is the minimum number of terms in any approximate decomposition of that state into stabilizer states. Bravyi and Gosset showed that the approximate stabilizer rank of a so-called "magic" state like $|T\rangle^{\otimes n}$ is, up to polynomial factors, an upper bound on the number of classical operations required to simulate an arbitrary quantum circuit with Clifford gates and $n$ $T$ gates. As a result, an exponential lower bound on this quantity seems inevitable. Despite this intuition, several attempts using various techniques have failed to yield better than a linear lower bound on the "exact" rank of $|T\rangle^{\otimes n}$, i.e., the minimal size of a decomposition that produces the state exactly. Yet the "approximate" rank is more realistically related to the cost of simulating quantum circuits, because the exact rank is not robust to errors: there are quantum states with exponentially large exact rank but constant approximate rank, even for arbitrarily small approximation parameters. No lower bound better than $\tilde \Omega(\sqrt n)$ was previously known for the approximate rank. In this paper, we improve this lower bound to $\tilde \Omega (n)$ for a wide range of approximation parameters. Our approach is based on a strong lower bound on the approximate rank of a quantum state sampled from the Haar measure, together with a step-by-step analysis of the approximate rank of a magic-state teleportation protocol used to sample from the Haar measure.
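Concretely, for approximation parameter $\delta$, the $\delta$-approximate stabilizer rank of $|\psi\rangle$ can be written as (our notation):

$$\chi_\delta(\psi) = \min\Big\{ k : \big\| \, |\psi\rangle - \sum_{i=1}^{k} c_i |\phi_i\rangle \, \big\| \le \delta, \ c_i \in \mathbb{C}, \ |\phi_i\rangle \text{ stabilizer states} \Big\},$$

with the exact rank being the case $\delta = 0$.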
We establish a coding theorem for rate-limited quantum-classical optimal transport systems with limited classical common randomness. This theorem characterizes the rate region of measurement protocols on a product source state for faithful construction of a given destination state while maintaining the source-destination distortion below a prescribed threshold with respect to a general distortion observable. It also provides a solution to the problem of rate-limited optimal transport, which aims to find the optimal cost of transforming a source quantum state to a destination state via an entanglement-breaking channel with a limited communication rate. The coding theorem is further extended to cover Bosonic continuous-variable quantum systems. The analytical evaluation is performed for the case of a qubit measurement system with unlimited common randomness.
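In notation standard for such rate-distortion problems (the symbols here are our own, not necessarily the paper's), the fidelity requirement takes the form of an expected-distortion constraint

$$\operatorname{Tr}\!\left[ \Delta \, \rho_{A\hat{B}} \right] \le D,$$

where $\Delta$ is the distortion observable, $\rho_{A\hat{B}}$ is the joint state of the source and the reconstructed output, and $D$ is the prescribed threshold.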
Integrated visible light positioning and communication (VLPC), capable of combining the advantages of visible light communications (VLC) and visible light positioning (VLP), is a promising key technology for the future Internet of Things. In VLPC networks, positioning and communications are inherently coupled, which has not been sufficiently explored in the literature. We propose a robust power allocation scheme for integrated VLPC networks by exploiting the intrinsic relationship between positioning and communications. Specifically, we derive explicit relationships between random positioning errors, following either a Gaussian distribution or an arbitrary distribution, and channel state information errors. Then, we minimize the Cramér-Rao lower bound (CRLB) of positioning errors subject to the rate outage constraint and the power constraints, which is a chance-constrained optimization problem and generally computationally intractable. To circumvent this nonconvexity, we conservatively transform the chance constraints into deterministic forms by using the Bernstein-type inequality and the conditional value-at-risk for the Gaussian and arbitrarily distributed positioning errors, respectively, and then approximate them as convex semidefinite programs. Finally, simulation results verify the robustness and effectiveness of our proposed integrated VLPC design schemes.
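Of the two conservative reformulations mentioned above, the conditional value-at-risk (CVaR) step can be recalled exactly. For a loss $X$ and level $\alpha \in (0,1)$, CVaR admits the Rockafellar-Uryasev variational form

$$\operatorname{CVaR}_{\alpha}(X) = \min_{\tau \in \mathbb{R}} \left\{ \tau + \frac{1}{\alpha}\, \mathbb{E}\big[ (X - \tau)^{+} \big] \right\},$$

and since $\operatorname{CVaR}_{\alpha}(X) \le 0$ implies $\Pr\{X > 0\} \le \alpha$, enforcing the CVaR constraint conservatively guarantees the original chance constraint while remaining convex.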
Mutual distance bounding (DB) protocols enable two distrusting parties to establish an upper bound on the distance between them. DB has so far mainly been considered in classical settings and for classical applications, especially wireless ones, e.g., to prevent relay attacks in wireless authentication and access control systems, and for secure localization. While recent research has started exploring DB in quantum settings, all current quantum DB (QDB) protocols employ quantum bits (qubits) in the rapid-bit exchange phase and only perform one-way DB. Specifically, the latest QDB proposals improve on the initial ones by adding resistance to photon number splitting attacks and by improving round complexity through avoiding communication from the prover to the verifier in the final authentication phase. This paper presents two new QDB protocols that differ from previously proposed ones in several respects: (1) to the best of our knowledge, our protocols are the first to utilize entangled qubits in the rapid-bit exchange phase; previous protocols relied on sending individual qubits rather than qubits from entangled pairs; (2) our second protocol can perform mutual QDB between two parties in a single execution, whereas previous QDB protocols had to be executed twice with the prover and verifier roles reversed in each execution; (3) the use of entangled qubits in our protocols thwarts attacks to which previous QDB protocols were prone; and (4) our protocols eliminate the need for communication from the prover to the verifier in the final authentication phase, which some previous QDB protocols required. Our work paves the way for several interesting research directions, which we briefly discuss in the appendix.
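The core of any DB rapid-bit exchange, classical or quantum, is the same timing argument; here is a minimal sketch (the processing-delay model and the example round-trip time are illustrative assumptions, not parameters of the proposed protocols).

```python
# Hedged sketch of the distance bound computed from one rapid-bit round trip.
# t_proc (the prover's processing delay) is an assumed, protocol-specific constant.

C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_upper_bound(t_rtt: float, t_proc: float = 0.0) -> float:
    """Upper-bound the prover's distance from a measured round-trip time (seconds)."""
    return C * (t_rtt - t_proc) / 2.0

# A challenge answered within 200 ns bounds the prover to roughly 30 m.
print(f"{distance_upper_bound(200e-9):.1f} m")  # 30.0 m
```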
The Noisy-SGD algorithm is widely used for privately training machine learning models. Traditional privacy analyses of this algorithm assume that the internal state is publicly revealed, resulting in privacy loss bounds that increase indefinitely with the number of iterations. However, recent findings have shown that if the internal state remains hidden, then the privacy loss might remain bounded. Nevertheless, this remarkable result heavily relies on the assumption of (strong) convexity of the loss function. It remains an important open problem to further relax this condition while proving similar convergent upper bounds on the privacy loss. In this work, we address this problem for DP-SGD, a popular variant of Noisy-SGD that incorporates gradient clipping to limit the impact of individual samples on the training process. Our findings demonstrate that the privacy loss of projected DP-SGD converges exponentially fast, without requiring convexity or smoothness assumptions on the loss function. In addition, we analyze the privacy loss of regularized (unprojected) DP-SGD. To obtain these results, we directly analyze the hockey-stick divergence between coupled stochastic processes by relying on non-linear data processing inequalities.
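For readers unfamiliar with the variant analyzed here, below is a minimal sketch of one projected DP-SGD step with per-sample clipping and Gaussian noise. The hyperparameters are illustrative, and the projection is shown onto a simple norm ball as an assumed stand-in for the convex constraint set of the projected variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_sample_grads, clip=1.0, sigma=1.0, lr=0.1, radius=10.0):
    """One projected DP-SGD step: clip each sample's gradient, average,
    add Gaussian noise, take a gradient step, then project onto a norm
    ball (an assumed constraint set, for illustration only)."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noisy = np.mean(clipped, axis=0) + rng.normal(
        0.0, sigma * clip / len(per_sample_grads), size=w.shape)
    w = w - lr * noisy
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 0.5, 0.0])]
print(dp_sgd_step(w, grads))
```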
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, distribution in turn raises the resource cost of communication and synchronisation, which itself hampers scaling. In this paper we present four algorithms to address these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system according to how close to optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents receive a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry them out; we also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
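A minimal sketch of the idea that an agent modulates exploration by its confidence in its current strategy follows; the confidence estimate and the epsilon schedule are illustrative assumptions, not the paper's four algorithms.

```python
import random

# Hedged sketch: epsilon-greedy subtask allocation where the exploration
# rate shrinks as the agent's (assumed) confidence in its strategy grows.

def choose_agent(q_values: dict, confidence: float) -> str:
    """Pick a peer to allocate a subtask to. `confidence` in [0, 1] is the
    agent's belief that its current strategy is near-optimal."""
    epsilon = 1.0 - confidence              # explore less when more confident
    if random.random() < epsilon:
        return random.choice(list(q_values))      # explore a random peer
    return max(q_values, key=q_values.get)        # exploit the best-known peer

q = {"agent_a": 0.7, "agent_b": 0.4, "agent_c": 0.9}
print(choose_agent(q, confidence=0.95))  # usually 'agent_c'
```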
In recent years, DBpedia, Freebase, OpenCyc, Wikidata, and YAGO have been published as noteworthy large, cross-domain, and freely available knowledge graphs. Although extensively used, these knowledge graphs are hard to compare against each other in a given setting, so it is a challenge for researchers and developers to pick the best knowledge graph for their individual needs. In our recent survey, we devised and applied data quality criteria to the above-mentioned knowledge graphs, and we proposed a framework for finding the most suitable knowledge graph for a given setting. With this paper we intend to ease access to our in-depth survey by presenting simplified rules that map individual data quality requirements to specific knowledge graphs. This paper does not, however, intend to replace our previously introduced decision-support framework: for an informed decision on which KG best fits a given use case, we still refer readers to our in-depth survey.