When analysing Quantum Key Distribution (QKD) protocols, several metrics can be determined, but one of the most important is the secret key rate: the number of bits per transmission that end up forming part of a secret key shared between the two parties. Closed-form expressions for the secret key rate exist; for the BB84 protocol, for instance, equation 52 of [1, p.1032] gives the secret key rate as a function of the Quantum Bit Error Rate (QBER). However, the analysis leading to such equations typically relies on an asymptotic approach, which assumes that infinitely many transmissions are exchanged between the two communicating parties (henceforth Alice and Bob); in a practical implementation this is clearly impossible. Moreover, some QKD protocols are asymmetric, a class for which such an analysis is significantly more difficult. This has motivated considerable work on an alternative approach, the finite-key regime. Bunandar et al. [2] have produced code that uses semi-definite programming to compute lower bounds on the secret key rate, even for asymmetric protocols. Our work devises a novel QKD protocol, drawing inspiration from both the three-state version of BB84 [3] and the Twin-Field protocol [4], and uses this code to analyse the new protocol.
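As a point of reference for the asymptotic regime discussed above, the sketch below computes the textbook idealized asymptotic BB84 secret key rate, $r = 1 - 2h(Q)$, where $h$ is the binary entropy and $Q$ the QBER. This is only the standard idealized formula (single-photon sources, error correction at the Shannon limit); it is not assumed to coincide with equation 52 of [1], and the finite-key bounds produced by the code of [2] are not reproduced here.

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon binary entropy h(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bb84_asymptotic_rate(qber: float) -> float:
    """Idealized asymptotic BB84 secret key rate per sifted bit: r = 1 - 2*h(Q).

    Assumes single-photon sources and error correction at the Shannon limit;
    the rate reaches zero near the familiar ~11% QBER threshold.
    """
    return max(0.0, 1.0 - 2.0 * binary_entropy(qber))

if __name__ == "__main__":
    for q in (0.01, 0.05, 0.11):
        print(f"QBER = {q:.2f}  ->  rate = {bb84_asymptotic_rate(q):.4f}")
```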
Inferring brain connectivity and structure \textit{in vivo} requires accurate estimation of the orientation distribution function (ODF), which encodes key local tissue properties. However, estimating the ODF from diffusion MRI (dMRI) signals is a challenging inverse problem due to significant noise, high-dimensional parameter spaces, and sparse angular measurements. In this paper, we address these challenges by proposing a novel deep-learning-based methodology for continuous estimation and uncertainty quantification of the spatially varying ODF field. We use a neural field (NF) to parameterize a random series representation of the latent ODFs, implicitly modeling the often ignored but valuable spatial correlation structures in the data and thereby improving efficiency in sparse and noisy regimes. We derive an analytic approximation to the posterior predictive distribution that can be used to quantify the uncertainty in the ODF estimate at any spatial location, avoiding the expensive resampling-based approaches typically employed for this purpose. We present empirical evaluations on both synthetic and real \textit{in vivo} diffusion data, demonstrating the advantages of our method over existing approaches.
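To make the neural-field parameterization concrete, the sketch below maps a continuous 3D coordinate to spherical-harmonic ODF coefficients with a small MLP. The architecture, spherical-harmonic order, and sizes are illustrative assumptions rather than the model used in the paper, and the random series representation and posterior approximation are omitted.

```python
import torch
import torch.nn as nn

class ODFNeuralField(nn.Module):
    """Toy neural field: maps a 3D spatial coordinate to spherical-harmonic
    coefficients parameterizing the ODF at that location.

    Width, depth, and SH order here are illustrative choices only.
    """

    def __init__(self, sh_order: int = 8, hidden: int = 256, depth: int = 4):
        super().__init__()
        n_coeffs = (sh_order + 1) * (sh_order + 2) // 2  # size of even-order real SH basis
        layers, d_in = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d_in, hidden), nn.ReLU()]
            d_in = hidden
        layers += [nn.Linear(hidden, n_coeffs)]
        self.net = nn.Sequential(*layers)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (batch, 3) continuous coordinates -> (batch, n_coeffs) SH coefficients
        return self.net(xyz)

model = ODFNeuralField()
coeffs = model(torch.rand(16, 3))  # ODF coefficients at 16 query locations
print(coeffs.shape)                # torch.Size([16, 45])
```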
The topology of the Internet and its geographic properties have received significant attention in recent years, not only because they have a deep impact on the performance experienced by users, but also for legal, political, and economic reasons. In this paper, the global Internet is studied in terms of path locality, where a path is defined as local if it does not cross the borders of the region in which the source and destination hosts are located. The phenomenon is studied through two metrics: one based on the size of the address space of the autonomous systems hosting the endpoints, the other on the size of the served population. Results show that the regions of the world differ significantly in terms of path locality. The main elements contributing to the path locality, and non-locality, of regions and countries are identified and discussed. Finally, we present the most significant dependency relationships between countries caused by non-local paths.
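To make the definition of a local path concrete, the toy check below marks a path as local only if every hop stays within the region shared by the source and destination. The paper's two metrics additionally weight paths by address-space size and served population, which this sketch does not attempt; the region codes and paths are purely illustrative.

```python
def is_local_path(hop_regions, src_region, dst_region):
    """A path is 'local' if it never leaves the region shared by source and destination.

    hop_regions: list of region codes observed along the path (toy traceroute-style data).
    """
    if src_region != dst_region:
        return False
    return all(r == src_region for r in hop_regions)

# Toy example: an intra-EU path that detours through the US is non-local.
paths = [
    (["EU", "EU", "EU"], "EU", "EU"),
    (["EU", "US", "EU"], "EU", "EU"),
]
locality = sum(is_local_path(*p) for p in paths) / len(paths)
print(f"local fraction: {locality:.2f}")  # 0.50
```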
We study the bias of Stochastic Gradient Descent (SGD) towards learning low-rank weight matrices when training deep ReLU neural networks. Our results show that training neural networks with mini-batch SGD and weight decay induces a bias towards rank minimization of the weight matrices. Specifically, we show, both theoretically and empirically, that this bias is more pronounced with smaller batch sizes, higher learning rates, or stronger weight decay. Additionally, we predict, and observe empirically, that weight decay is necessary for this bias to occur. We further show that, in the presence of intermediate neural collapse, the learned weights are particularly low-rank. Unlike previous literature, our analysis does not rely on assumptions about the data, convergence, or optimality of the weight matrices, and it applies to a wide range of neural network architectures of any width or depth. Finally, we empirically investigate the connection between this bias and generalization, finding that it has only a marginal effect on generalization.
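The qualitative trend described above can be probed with a small experiment like the following sketch, which trains a toy ReLU network with mini-batch SGD and weight decay and reports the effective rank of a weight matrix. The data, hyperparameters, and effective-rank threshold are illustrative choices, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

def effective_rank(W: torch.Tensor, tol: float = 1e-3) -> int:
    """Number of singular values above tol * largest singular value."""
    s = torch.linalg.svdvals(W)
    return int((s > tol * s[0]).sum())

torch.manual_seed(0)
X, Y = torch.randn(512, 64), torch.randn(512, 10)          # toy regression data
net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=5e-3)  # illustrative values

for step in range(2000):
    idx = torch.randint(0, 512, (32,))                      # small mini-batches
    loss = ((net(X[idx]) - Y[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stronger weight decay / smaller batches should drive this number down.
print("effective rank of first layer:", effective_rank(net[0].weight.detach()))
```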
We initiate the study of repeated game dynamics in the population model, in which we are given a population of $n$ nodes, each with a local strategy, that interact uniformly at random by playing multi-round, two-player games. After each game, the two participants receive rewards according to a given payoff matrix and may update their local strategies depending on the outcome. In this setting, we ask how the distribution of player strategies evolves with respect to the number of node interactions (time complexity) and the number of possible player states (space complexity), determining the stationary properties of such game dynamics. Our main technical results analyze the behavior of a family of Repeated Prisoner's Dilemma dynamics in this model, for which we provide an exact characterization of the stationary distribution and give bounds on the convergence time and on the optimality gap of the expected rewards. Our results follow from a new connection between Repeated Prisoner's Dilemma dynamics in a population and a class of high-dimensional, weighted Ehrenfest random walks, which we analyze for the first time. The results highlight non-trivial trade-offs between the state complexity of each node's strategy, the convergence of the process, and the expected average reward of nodes in the population. Our approach opens the door to the characterization of other natural evolutionary game dynamics in the population model.
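A minimal simulation of the interaction model reads as follows. The one-bit strategies and imitation update used here are illustrative stand-ins, not the Repeated Prisoner's Dilemma dynamics analyzed in the paper; the sketch only shows uniformly random pairwise interactions with payoff-matrix rewards and local strategy updates.

```python
import random

# Prisoner's Dilemma payoffs: PAYOFF[my_move][their_move], 0 = cooperate, 1 = defect.
PAYOFF = [[3, 0],
          [5, 1]]

def play_repeated_pd(s_a, s_b, rounds=5):
    """Each node plays the fixed move given by its one-bit state; returns total rewards.
    (A deliberately minimal strategy space; the paper studies richer multi-state strategies.)"""
    ra = sum(PAYOFF[s_a][s_b] for _ in range(rounds))
    rb = sum(PAYOFF[s_b][s_a] for _ in range(rounds))
    return ra, rb

random.seed(0)
n = 1000
state = [random.randint(0, 1) for _ in range(n)]   # each node's local strategy

for _ in range(50_000):                            # uniformly random pairwise interactions
    i, j = random.sample(range(n), 2)
    ri, rj = play_repeated_pd(state[i], state[j])
    if ri < rj:                                    # toy imitation update, not the paper's rule
        state[i] = state[j]
    elif rj < ri:
        state[j] = state[i]

print("fraction of defectors:", sum(state) / n)
```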
The introduction of the European Union's (EU) comprehensive data protection regulation, the General Data Protection Regulation (GDPR), grants EU citizens the right to an explanation for automated decisions that significantly affect their lives. This poses a substantial challenge, as many of today's state-of-the-art algorithms are effectively unexplainable black boxes. Simultaneously, the fields of quantum computation and quantum AI have emerged. Due to the fragile nature of quantum information, the explainability problem is amplified, since measuring a quantum system destroys the information it carries. As a result, there is a need for post-hoc explanations of quantum AI algorithms. In the classical context, the cooperative game theory concept of the Shapley value has been adapted for post-hoc explanations. However, this approach does not translate trivially to quantum computing and can be exponentially expensive to implement if not handled with care. We propose a novel algorithm that reduces the problem of accurately estimating the Shapley values of a quantum algorithm to the far simpler problem of estimating the true mean of a binomial distribution, which can be done in polynomial time.
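For context, the quantity being approximated can be sketched with a standard classical permutation-sampling Shapley estimator, as below. This is not the proposed quantum algorithm or its binomial-mean reduction; the value function and player set are purely illustrative.

```python
import random

def shapley_sampling(value_fn, players, n_samples=2000, seed=0):
    """Monte Carlo (permutation-sampling) estimate of Shapley values.

    value_fn: maps a frozenset of players to a real-valued payoff.
    This is the classical estimator; the paper's quantum algorithm instead reduces
    the task to estimating the mean of a binomial distribution.
    """
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = list(players)
        rng.shuffle(perm)
        coalition, prev = set(), value_fn(frozenset())
        for p in perm:
            coalition.add(p)
            cur = value_fn(frozenset(coalition))
            phi[p] += cur - prev                  # marginal contribution of p
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy game: a coalition is worth 1 exactly when player 0 participates.
value = lambda S: 1.0 if 0 in S else 0.0
print(shapley_sampling(value, players=[0, 1, 2]))  # ~{0: 1.0, 1: 0.0, 2: 0.0}
```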
We introduce a new quantum algorithm for computing the Betti numbers of a simplicial complex. In contrast to previous quantum algorithms, which work by estimating the eigenvalues of the combinatorial Laplacian, our algorithm is an instance of the generic Incremental Algorithm for computing Betti numbers, which adds simplices to the simplicial complex one at a time and tests whether or not each one creates a cycle. Whereas existing quantum algorithms for computing Betti numbers work best when the complex has close to the maximal number of simplices, our algorithm works best for sparse complexes. To test whether a simplex creates a cycle, we introduce a quantum span-program algorithm. We show that the query complexity of our span program is parameterized by quantities called the effective resistance and effective capacitance of the boundary of the simplex. Unfortunately, we also prove upper and lower bounds on the effective resistance and capacitance, showing that both quantities can be exponentially large with respect to the size of the complex, implying that our algorithm would have to run for exponential time to exactly compute Betti numbers. However, as a corollary to these bounds, we show that the spectral gap of the combinatorial Laplacian can be exponentially small. Since the runtimes of all previous quantum algorithms for computing Betti numbers are parameterized by the inverse of the spectral gap, our bounds show that all quantum algorithms for computing Betti numbers must run for exponentially long to compute them exactly. Finally, we prove some novel formulas for effective resistance and effective capacitance that give intuition for these quantities.
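The generic Incremental Algorithm mentioned above can be illustrated classically. In the sketch below the cycle test is a plain union-find connectivity check and only a 1-dimensional complex (a graph) is handled, so it computes $\beta_0$ and $\beta_1$ rather than using the quantum span-program test; the example complex is arbitrary.

```python
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # endpoints already connected: the edge closes a cycle
        self.parent[ra] = rb
        return True

def incremental_betti(vertices, edges):
    """Incremental algorithm on a 1-dimensional complex (graph):
    each added simplex either creates a cycle or destroys one of lower dimension."""
    uf, b0, b1 = UnionFind(), 0, 0
    for v in vertices:
        uf.find(v)
        b0 += 1                   # every vertex starts a new connected component
    for a, b in edges:
        if uf.union(a, b):
            b0 -= 1               # edge merges two components
        else:
            b1 += 1               # edge creates an independent cycle
    return b0, b1

# A triangle plus an isolated vertex: beta_0 = 2, beta_1 = 1.
print(incremental_betti([0, 1, 2, 3], [(0, 1), (1, 2), (2, 0)]))
```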
Off-policy evaluation (OPE) aims to estimate the benefit of following a counterfactual sequence of actions, given data collected from executed sequences. However, existing OPE estimators often exhibit high bias and high variance in problems involving large, combinatorial action spaces. We investigate how to mitigate this issue using factored action spaces, i.e., expressing each action as a combination of independent sub-actions from smaller action spaces. This approach facilitates a finer-grained analysis of how actions differ in their effects. In this work, we propose a new family of "decomposed" importance sampling (IS) estimators based on factored action spaces. Under certain assumptions on the underlying problem structure, we prove that the decomposed IS estimators have lower variance than their original non-decomposed versions while remaining unbiased. Through simulations, we empirically verify our theoretical results and probe the validity of the various assumptions. Provided a technique that can derive the action-space factorisation for a given problem, our work shows that OPE can be improved "for free" by utilising this inherent problem structure.
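The factored structure can be illustrated with a one-step (bandit-style) example: when both behaviour and evaluation policies factor over independent sub-actions, the joint importance ratio equals the product of per-sub-action ratios, so the weight can be assembled factor by factor. The sketch below shows only this identity; it is not the paper's decomposed estimator, and the policies and reward model are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Behaviour (mu) and evaluation (pi) policies, each factored over two independent
# binary sub-action spaces; the probabilities and rewards are illustrative only.
mu = [np.array([0.5, 0.5]), np.array([0.7, 0.3])]
pi = [np.array([0.2, 0.8]), np.array([0.4, 0.6])]

n = 10_000
actions = np.stack([rng.choice(len(p), size=n, p=p) for p in mu], axis=1)  # (n, 2)
rewards = actions.sum(axis=1) + rng.normal(0.0, 0.1, n)                     # toy additive reward

# Joint importance ratio built factor by factor (valid because both policies factor).
weights = np.prod([pi[k][actions[:, k]] / mu[k][actions[:, k]] for k in range(2)], axis=0)
print("IS estimate of the evaluation policy's value:", np.mean(weights * rewards))  # ~1.4
```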
Audio has become an increasingly crucial biometric modality due to its ability to provide an intuitive way for humans to interact with machines. It is currently used in a range of applications, from person authentication and banking to virtual assistants. Research has shown that these systems are susceptible to spoofing attacks. Protecting audio processing systems against fraudulent activities, such as identity theft, financial fraud, and the spread of misinformation, is therefore of paramount importance. This paper reviews the current state-of-the-art techniques for detecting audio spoofing and discusses the current challenges along with open research problems. The paper further highlights the importance of considering the ethical and privacy implications of audio spoofing detection systems. Lastly, the work underlines the need for more robust and generalizable methods, for the integration of automatic speaker verification and countermeasure systems, and for better evaluation protocols.
Quantum computing is emerging as an unprecedented threat to currently deployed cryptographic systems. Cryptographic methods that have been considered secure for decades will likely be broken, with enormous impact on the security of sensitive data and communications in enterprises worldwide. A plan to migrate to quantum-resistant cryptographic systems is therefore required. However, migrating an enterprise system to a quantum-safe state is a complex process, and enterprises will require systematic guidance to remain resilient in a post-quantum era, as many organisations do not have staff with the expertise to manage this process unaided. This paper presents a comprehensive framework designed to aid enterprises in their migration. The framework articulates key steps and technical considerations in the cryptographic migration process. It makes use of existing organisational inventories and provides a roadmap for prioritising the replacement of cryptosystems in a post-quantum context. The framework enables the efficient identification of cryptographic objects and can be integrated with other frameworks in enterprise settings to minimise operational disruption during migration. Practical case studies, which use graph-theoretic techniques to determine and evaluate cryptographic dependencies, are included to demonstrate the utility and efficacy of the proposed framework.
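One way to picture the graph-theoretic dependency analysis is a toy inventory like the following, where quantum-vulnerable primitives are ranked by how many systems transitively depend on them. The assets, edges, and scoring rule are illustrative assumptions, not the framework's actual prioritisation criteria or case studies.

```python
# Toy inventory: each cryptographic asset maps to the assets/systems that depend on it.
dependents = {
    "RSA-2048 signing key": ["TLS termination", "Code-signing pipeline"],
    "ECDH key exchange":    ["TLS termination", "VPN gateway"],
    "TLS termination":      ["Customer portal", "Internal API"],
    "VPN gateway":          ["Internal API"],
}

def transitive_dependents(asset, graph):
    """All systems reachable from an asset in the dependency graph (iterative DFS)."""
    seen, stack = set(), list(graph.get(asset, []))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen

# Rank quantum-vulnerable primitives by how many systems transitively depend on them.
for asset in ("RSA-2048 signing key", "ECDH key exchange"):
    reach = transitive_dependents(asset, dependents)
    print(f"{asset}: {len(reach)} dependent systems -> {sorted(reach)}")
```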
Shared Mobility Services (SMS), e.g., Demand-Responsive Transit (DRT) or ride-sharing, can improve mobility in low-density areas, which are often poorly served by conventional Public Transport (PT). Such improvement is mostly quantified via basic performance indicators, like wait or travel time. However, accessibility indicators, which measure the ease of reaching surrounding opportunities (e.g., jobs, schools, shops, ...), would provide a more comprehensive assessment. To date, no method exists to quantify the accessibility of SMS based on empirical measurements. Indeed, accessibility is generally computed on graph representations of PT networks, but SMS are dynamic and do not follow a predefined network. We propose a spatio-temporal statistical method that takes as input the observed trips of an SMS acting as a feeder for PT and summarizes them in a graph, on which classic accessibility indicators are then computed. We apply our method to a MATSim simulation study of DRT in Paris-Saclay.
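Once the observed trips are summarized in a graph, a classic cumulative-opportunities accessibility indicator can be computed as in the sketch below. The toy graph, travel times, opportunity counts, and 35-minute budget are illustrative assumptions, not the trip-derived graph or indicators of the Paris-Saclay case study.

```python
import heapq

def travel_times(graph, origin):
    """Dijkstra shortest travel times (minutes) from origin over the trip-derived graph."""
    dist, heap = {origin: 0.0}, [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Toy graph: a DRT feeder leg to a PT station, then PT legs onward (edge weights in minutes).
graph = {"home": [("station", 12.0)], "station": [("jobs_hub", 20.0), ("school", 15.0)]}
opportunities = {"jobs_hub": 5000, "school": 1}   # opportunities located at each node

# Cumulative-opportunities accessibility: everything reachable within a 35-minute budget.
reach = travel_times(graph, "home")
access = sum(n for node, n in opportunities.items() if reach.get(node, float("inf")) <= 35)
print("opportunities reachable within 35 min:", access)  # 5000 + 1
```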