The emergence of noisy intermediate-scale quantum (NISQ) computers has important consequences for cryptographic algorithms. It is theoretically well established that key algorithms used in cybersecurity are vulnerable to quantum computers, because their security guarantees, designed around algorithmic complexity on classical computers, do not hold for quantum circuits. Many different quantum algorithms have been developed, with potentially broad applications on future computing systems. However, this potential depends on the continued maturation of quantum hardware, which remains an area of active research and development. Theoretical limits provide an upper bound on algorithm performance; in practice, threats to encryption can only be accurately assessed in the context of the rapidly evolving hardware and software landscape. Software co-design refers to the concurrent design of software and hardware as a way to understand the limitations of current capabilities and to develop effective strategies for advancing the state of the art. Since the capabilities of classical computation currently exceed quantum capabilities, quantum emulation techniques can play an important role in the co-design process. In this paper, we describe how the {\em cuQuantum} environment can support quantum algorithm co-design activities using widely available commodity hardware. We describe how emulation techniques can be used to assess the impact of noise on algorithms of interest and to identify limitations associated with current hardware. We present our analysis in the context of priority areas for cybersecurity, and cryptography in particular, since these algorithms are extraordinarily consequential for securing information in the digital world.
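As a concrete illustration of the kind of noise study emulation enables, the following sketch (plain NumPy rather than the cuQuantum API) applies a single-qubit depolarizing channel to a Bell state and tracks the resulting fidelity loss:

```python
import numpy as np

# Build a Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Pauli matrices for the Kraus operators of the depolarizing channel.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize_q0(rho, p):
    """Depolarizing channel on qubit 0 of a two-qubit state:
    rho -> (1-p) rho + (p/3) sum_P (P x I) rho (P x I)."""
    out = (1 - p) * rho
    for P in (X, Y, Z):
        K = np.kron(P, I2)          # Pauli error acting on qubit 0 only
        out = out + (p / 3) * K @ rho @ K.conj().T
    return out

for p in (0.0, 0.05, 0.2):
    noisy = depolarize_q0(rho, p)
    fidelity = np.real(phi @ noisy @ phi)   # <Phi+| rho |Phi+> for a pure target
    print(f"p={p:.2f}  Bell fidelity={fidelity:.4f}")
```

For this state the fidelity falls off linearly as $1-p$, since each single-qubit Pauli error maps the Bell state to an orthogonal Bell state.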
Quantum computing has the potential to surpass the capabilities of current classical computers when solving complex problems. Combinatorial optimization has emerged as one of the key target areas for quantum computers, as problems found in this field play a critical role in many different industrial application sectors (e.g., enhancing manufacturing operations or improving decision processes). Currently, different high-performance optimization software packages (e.g., ILOG CPLEX and Gurobi) support engineers and scientists in solving optimization problems on classical computers. To utilize quantum resources, however, users require domain-specific knowledge of quantum algorithms, SDKs, and libraries, which can be a limiting factor for any practitioner who wants to integrate this technology into their workflows. Our goal is to add software infrastructure to a classical optimization package so that application developers can readily interface with quantum platforms when setting up their workflows. This paper presents a tool for the seamless utilization of quantum resources through a classical interface. Our approach consists of a Python library extension that provides a backend to facilitate access to multiple quantum providers. Our pipeline enables optimization software developers to experiment with quantum resources selectively and to assess the performance improvements of hybrid quantum-classical optimization solutions.
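A minimal sketch of the kind of backend hook such an extension could expose: the classical modeling layer hands a QUBO matrix to a pluggable solver, here a brute-force stand-in for a real quantum provider. All names below are illustrative, not the actual API of the library.

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    """Minimize x^T Q x over binary vectors x (exact, only viable for small n).
    A quantum provider backend would accept the same Q and return the same
    kind of (assignment, objective) pair."""
    n = Q.shape[0]
    best_x, best_val = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Max-Cut on a triangle encoded as a QUBO (upper-triangular form):
# minimize -sum_{(i,j)} (x_i + x_j - 2 x_i x_j); any cut of size 2 is
# optimal, with objective value -2.
Q = np.array([[-2.0, 2.0, 2.0],
              [0.0, -2.0, 2.0],
              [0.0, 0.0, -2.0]])
x, val = solve_qubo_bruteforce(Q)
print("cut assignment:", x, "objective:", val)
```

The point of such a uniform interface is that swapping the brute-force solver for a quantum backend changes only the dispatch, not the modeling code.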
Quantum networks (QNs) are a promising platform for secure communications, enhanced sensing, and efficient distributed quantum computing. However, due to the fragile nature of quantum states, these networks face significant scalability challenges. In this paper, the scaling limits of quantum repeater networks (QRNs) are analyzed. The goal of this work is to maximize the overall length, or scalability, of QRNs such that long-distance quantum communication is achieved while application-specific quality-of-service (QoS) requirements are satisfied. In particular, a novel joint optimization framework is proposed that maximizes QRN scalability while satisfying QoS constraints on the end-to-end fidelity and rate. The proposed approach optimizes the number of QRN repeater nodes, their separation distance, and the number of distillation rounds to be performed at both the link and end-to-end levels. Extensive simulations are conducted to analyze the tradeoffs between QRN scalability, rate, and fidelity under gate and measurement errors. The obtained results characterize the QRN scaling limits for a given QoS requirement. The proposed approach offers a promising solution and design guidelines for future QRN deployments.
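The fidelity side of this tradeoff can be illustrated with a deliberately simplified model (ideal entanglement swapping of Werner-state links, no distillation), which is far cruder than the proposed optimization framework but shows how a QoS fidelity target caps the repeater chain length:

```python
# Toy model: each elementary link is a Werner state with parameter
# w = (4F - 1)/3, and ideal swapping multiplies Werner parameters, so
# chaining n links gives F_e2e = (3 w^n + 1)/4. This ignores gate and
# measurement errors and distillation, which the paper's framework includes.

def end_to_end_fidelity(link_fidelity: float, n_links: int) -> float:
    w = (4 * link_fidelity - 1) / 3
    return (3 * w**n_links + 1) / 4

def max_links(link_fidelity: float, fidelity_target: float) -> int:
    """Largest chain length still meeting a QoS fidelity target."""
    n = 1
    while end_to_end_fidelity(link_fidelity, n + 1) >= fidelity_target:
        n += 1
    return n

for F_link in (0.99, 0.95):
    print(f"F_link={F_link}: max links meeting F_e2e >= 0.8 is",
          max_links(F_link, fidelity_target=0.8))
```

Even a small drop in link fidelity shrinks the feasible chain length sharply, which is why distillation rounds enter the joint optimization.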
Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to deal with paired data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions, and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics, where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$, available at https://github.com/savranciati/pdglasso.
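A sketch (in Python, for illustration) of the elementwise proximal step that ADMM solvers for the two-group fused graphical lasso commonly rely on; this follows the standard fuse-then-soft-threshold closed form and is not necessarily the exact update implemented in $\texttt{pdglasso}$:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_fused_pair(x, y, lam1, lam2):
    """Proximal operator of lam1*(|a|+|b|) + lam2*|a-b| evaluated at (x, y),
    applied entrywise to a pair of precision-matrix entries.
    Fusion step first, then sparsity via soft-thresholding."""
    if abs(x - y) <= 2 * lam2:
        a = b = (x + y) / 2.0           # entries fuse to their mean
    else:
        shift = lam2 * np.sign(x - y)   # entries move lam2 toward each other
        a, b = x - shift, y + shift
    return soft(a, lam1), soft(b, lam1)

print(prox_fused_pair(3.0, 1.0, lam1=1.0, lam2=0.5))   # partial fusion
print(prox_fused_pair(1.2, 1.0, lam1=0.0, lam2=0.5))   # full fusion to mean
```

Within ADMM, this prox is applied to each pair of corresponding entries of the two precision matrices at every iteration, which is what couples the two graphs.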
Quantum devices should operate in adherence to quantum physics principles. Quantum random access memory (QRAM), a fundamental component of many essential quantum algorithms for tasks such as linear algebra, data search, and machine learning, is often proposed to offer $\mathcal{O}(\log N)$ circuit depth for $\mathcal{O}(N)$ data size, given $N$ qubits. However, this claim appears to breach the principle of relativity when dealing with a large number of locally interacting qubits in quantum materials. In our study, we critically explore the intrinsic causality-based bounds on rapid quantum memories, employing relativistic quantum field theory and Lieb-Robinson bounds in quantum many-body systems. In this paper, we consider a hardware-efficient QRAM design in hybrid quantum acoustic systems. Assuming clock cycle times of approximately $10^{-3}$ seconds and a lattice spacing of about 1 micrometer, we show that QRAM can accommodate up to $\mathcal{O}(10^7)$ logical qubits in one dimension, $\mathcal{O}(10^{15})$ to $\mathcal{O}(10^{20})$ in various 2D architectures, and $\mathcal{O}(10^{24})$ in three dimensions. We contend that this causality bound broadly applies to other quantum hardware systems. Our findings highlight the impact of fundamental quantum physics constraints on the long-term performance of quantum computing applications in data science and suggest potential quantum memory designs for performance enhancement.
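The one-dimensional figure can be reproduced with back-of-the-envelope arithmetic. The signal velocity below (an acoustic speed of roughly $10^4$ m/s) is our illustrative assumption; the abstract fixes only the clock cycle and lattice spacing:

```python
# Within one query time T, a signal travelling at speed v can cross at
# most v * T, so a 1D array with lattice spacing a can hold at most
# N ~ v * T / a causally addressable cells.

v = 1.0e4        # signal speed, m/s (assumed acoustic velocity)
T = 1.0e-3       # clock cycle / query time, s (from the abstract)
a = 1.0e-6       # lattice spacing, m (from the abstract)

N_1d = v * T / a
print(f"1D causality bound: N ~ {N_1d:.0e} qubits")
```

This recovers the quoted $\mathcal{O}(10^7)$ scale in one dimension; higher-dimensional layouts gain because the reachable region grows with the dimension.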
This research article analyses and demonstrates the hidden implications for fairness of seemingly neutral data coupled with powerful technology, such as machine learning (ML), using Open Banking as an example. Open Banking has ignited a revolution in financial services, opening new opportunities for customer acquisition, management, retention, and risk assessment. However, the granularity of transaction data holds potential for harm, as unnoticed proxies for sensitive and prohibited characteristics may lead to indirect discrimination. Against this backdrop, we investigate the dimensions of financial vulnerability (FV), a global concern resulting from COVID-19 and rising inflation. Specifically, we seek to understand the behavioral elements leading up to FV and its impact on at-risk, disadvantaged groups through the lens of fair interpretation. Using a unique dataset from a UK FinTech lender, we demonstrate the power of fine-grained transaction data while simultaneously cautioning against its unsafe usage. Three ML classifiers are compared in predicting the likelihood of FV, and groups exhibiting different magnitudes and forms of FV are identified via clustering to highlight the effects of feature combination. Our results indicate that engineered features of financial behavior can be predictive of omitted personal information, particularly sensitive or protected characteristics, shedding light on the hidden dangers of Open Banking data. We discuss the implications and conclude that fairness via unawareness is ineffective in this new technological environment.
We propose a quantum soft-covering problem for a given general quantum channel and one of its output states, which consists in finding the minimum rank of an input state needed to approximate the given channel output. We then prove a one-shot quantum covering lemma in terms of smooth min-entropies by leveraging decoupling techniques from quantum Shannon theory. This covering result is shown to be equivalent to a coding theorem for rate distortion under a posterior (reverse) channel distortion criterion [Atif, Sohail, Pradhan, arXiv:2302.00625]. Both one-shot results directly yield corollaries about the i.i.d. asymptotics, in terms of the coherent information of the channel. The power of our quantum covering lemma is demonstrated by two additional applications: first, we formulate a quantum channel resolvability problem and provide one-shot as well as asymptotic upper and lower bounds; second, we provide new upper bounds on the unrestricted and simultaneous identification capacities of quantum channels, in particular separating for the first time the simultaneous identification capacity from the unrestricted one, proving a long-standing conjecture of the last author.
Simulating quantum systems is one of the most promising avenues to harness the computational power of quantum computers. However, hardware errors in noisy near-term devices remain a major obstacle for applications. Ideas based on the randomization of Suzuki-Trotter product formulas have been shown to be a powerful approach to reducing the errors of quantum simulation and lowering the gate count. In this paper, we study the performance of non-unitary simulation channels and consider the error structure of channels constructed from a weighted average of unitary circuits. We show that averaging over just a few simulation circuits can significantly reduce the Trotterization error for both single-step short-time and multi-step long-time simulations. We focus our analysis on two approaches for constructing circuit ensembles for averaging: (i) permuting the order of the terms in the Hamiltonian and (ii) applying a set of global symmetry transformations. We compare our analytical error bounds to empirical performance and show that empirical error reduction surpasses our analytical estimates in most cases. Finally, we test our method on an IonQ trapped-ion quantum computer accessed via the Amazon Braket cloud platform, and benchmark the performance of the averaging approach.
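The benefit of averaging over term orderings can be checked directly on a toy Hamiltonian $H = X + Z$: averaging the two first-order Trotter orderings cancels the $\mathcal{O}(t^2)$ commutator error, leaving only an $\mathcal{O}(t^3)$ residual. This single-qubit sketch (ours, not the paper's benchmark) uses closed-form Pauli exponentials:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def exp_pauli(P, t):
    """e^{-i t P} = cos(t) I - i sin(t) P, valid whenever P^2 = I."""
    return np.cos(t) * I2 - 1j * np.sin(t) * P

def exact(t):
    """(X+Z)^2 = 2I, so e^{-it(X+Z)} = cos(sqrt2 t) I - i sin(sqrt2 t)(X+Z)/sqrt2."""
    s = np.sqrt(2)
    return np.cos(s * t) * I2 - 1j * np.sin(s * t) * (X + Z) / s

t = 0.1
U_AB = exp_pauli(X, t) @ exp_pauli(Z, t)   # ordering (X, Z)
U_BA = exp_pauli(Z, t) @ exp_pauli(X, t)   # ordering (Z, X)
U_avg = 0.5 * (U_AB + U_BA)                # averaged (non-unitary) channel

def err(U):
    return np.linalg.norm(U - exact(t), 2)  # spectral-norm error

print("error AB :", err(U_AB))
print("error BA :", err(U_BA))
print("error avg:", err(U_avg))
```

Expanding the products shows the anticommuting $XZ$ and $ZX$ terms cancel in the average, which is exactly the second-order error of each individual ordering.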
CATASTROAGRI is an application developed to load, analyze, and interactively visualize relevant data on catastrophic agricultural insurance. It also includes an ARIMA(0,1,1)(0,1,1) analysis to identify and estimate patterns in the agricultural data of the Puno Region. The series shows a decreasing trend, with a significant relationship between successive values; it is not stationary, because its mean and variance do not remain constant over time and it exhibits seasonal periods. Cases decrease and increase over the years, especially the amounts to be indemnified, driven by the behavior of the climate in the highlands. The results of the analysis show that agricultural insurance plays an important role in protecting farmers against losses caused by adverse climatic events. The importance of concentrating resources and indemnities on the most affected crops and on the provinces with the highest agricultural production is emphasized. The users' evaluation showed a high level of satisfaction, as well as ease of use.
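The differencing implied by the ARIMA(0,1,1)(0,1,1) specification can be illustrated on synthetic data (illustrative, not the Puno series): applying the regular and seasonal differences $(1-B)(1-B^{12})$ annihilates a linear trend with a fixed monthly pattern, which is why this model handles non-stationary, seasonal series.

```python
import numpy as np

months = np.arange(120)
# A fixed 12-month pattern repeated over 10 years, on top of a linear trend.
seasonal = np.tile(np.array([3, 1, -2, 0, 4, -1, 2, 0, -3, 1, 2, -7.0]), 10)
y = 50.0 + 0.8 * months + seasonal          # trend + monthly seasonality

d1 = np.diff(y, n=1)                        # regular difference (1 - B) y
d12 = d1[12:] - d1[:-12]                    # seasonal difference (1 - B^12)
print("max |residual| after both differences:", np.max(np.abs(d12)))
```

The residual is zero (up to floating-point error): the regular difference removes the trend and the seasonal difference removes the repeating pattern, leaving the MA terms to model what remains in real data.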
The Graphical House Allocation (GHA) problem asks: how can $n$ houses (each with a fixed non-negative value) be assigned to the vertices of an undirected graph $G$, so as to minimize the sum of absolute differences along the edges of $G$? This problem generalizes the classical Minimum Linear Arrangement problem, as well as the well-known House Allocation Problem from Economics. Recent work has studied the computational aspects of GHA and observed that the problem is NP-hard and inapproximable even on particularly simple classes of graphs, such as vertex disjoint unions of paths. However, the dependence of any approximations on the structural properties of the underlying graph had not been studied. In this work, we give a nearly complete characterization of the approximability of GHA. We present algorithms to approximate the optimal envy on general graphs, trees, planar graphs, bounded-degree graphs, and bounded-degree planar graphs. For each of these graph classes, we then prove matching lower bounds, showing that in each case, no significant improvement can be attained unless P = NP. We also present general approximation ratios as a function of structural parameters of the underlying graph, such as treewidth; these match the tight upper bounds in general, and are significantly better approximations for many natural subclasses of graphs. Finally, we investigate the special case of bounded-degree trees in some detail. We first refute a conjecture by Hosseini et al. [2023] about the structural properties of exact optimal allocations on binary trees by means of a counterexample on a depth-$3$ complete binary tree. This refutation, together with our hardness results on trees, might suggest that approximating the optimal envy even on complete binary trees is infeasible. Nevertheless, we present a linear-time algorithm that attains a $3$-approximation on complete binary trees.
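For small instances, the GHA objective and its optimum can be computed by brute force, which makes the problem definition concrete; on a path, placing the values in sorted order is optimal, so the minimum envy telescopes to the value range.

```python
import itertools

def gha_cost(values_on_vertices, edges):
    """Total envy: sum of absolute value differences along the edges."""
    return sum(abs(values_on_vertices[u] - values_on_vertices[v])
               for u, v in edges)

def gha_opt(values, edges):
    """Exact optimum by trying every assignment (exponential; small n only)."""
    return min(gha_cost(perm, edges)
               for perm in itertools.permutations(values))

# A path on 4 vertices: the optimum equals max(values) - min(values),
# since any assignment connects the max- and min-valued vertices by a path.
path_edges = [(0, 1), (1, 2), (2, 3)]
print(gha_opt([7, 1, 4, 2], path_edges))   # 6
```

The hardness results in the abstract say that no polynomial-time algorithm can replace this brute force with good guarantees on general graphs unless P = NP.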
Digital computers implement computations using circuits, as do many naturally occurring systems (e.g., gene regulatory networks). The topology of any such circuit restricts which variables may be physically coupled during the operation of the circuit. We investigate how such restrictions on the physical coupling affect the thermodynamic costs of running the circuit. To do this, we first calculate the minimal additional entropy production that arises when a given gate is run within a circuit. We then build on this calculation to analyze how the thermodynamic cost of implementing a computation with a full circuit, comprising multiple connected gates, depends on the topology of that circuit. This analysis provides a rich new set of optimization problems that must be addressed by any circuit designer who wishes to minimize thermodynamic costs.
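A minimal numerical illustration of a per-gate lower bound (Landauer accounting only; this omits the coupling-dependent mismatch costs that the circuit-level analysis adds): for a logically irreversible gate, the minimal heat is $k_B T \ln 2$ times the drop in Shannon entropy from inputs to outputs. The uniform-input NAND gate below is our simplifying example.

```python
import math

def H(p):
    """Shannon entropy (in bits) of a distribution given as probabilities."""
    return -sum(q * math.log2(q) for q in p if q > 0)

p_in = [0.25, 0.25, 0.25, 0.25]     # uniform over (0,0), (0,1), (1,0), (1,1)
p_out = [0.25, 0.75]                # NAND output: 0 w.p. 1/4, 1 w.p. 3/4
drop_bits = H(p_in) - H(p_out)      # entropy erased by the gate

kT_ln2 = 2.870978e-21               # k_B * 300 K * ln 2, in joules
print(f"entropy drop: {drop_bits:.3f} bits,"
      f" min heat at 300 K: {drop_bits * kT_ln2:.2e} J")
```

The circuit-level question is how much entropy production beyond this bound is forced on each gate by the fact that it can only couple to its neighbors in the circuit topology.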