The United Nations (UN) Security Council has fifteen members, of which five permanent members can use their veto power against any unfavorable decision taken by the council. In certain situations, a member exercising the right to veto may prefer to remain anonymous. This need motivates protocols for anonymous veto, which can be viewed as a special type of voting. Recently, a few protocols for quantum anonymous veto have been designed which clearly show quantum advantages in ensuring the anonymity of the veto. However, none of the efficient protocols for quantum anonymous veto has yet been experimentally realized. Here, we implement two of these protocols on an IBM quantum computer named IBMQ Casablanca, using different quantum resources, namely Bell, GHZ and cluster states. In this set of proof-of-principle experiments, it is observed that with present technology a protocol for quantum anonymous veto can be realized experimentally if the number of people who can veto remains small, as in the case of the UN Security Council. Further, it is observed that the Bell-state-based protocol implemented here performs better than the GHZ/cluster-state-based implementation of the other protocol, both in the ideal scenario and in the presence of different types of noise (amplitude damping, phase damping, depolarizing and bit-flip noise). In addition, it is observed that, ordered by increasing impact on fidelity, the noise models studied here rank as phase damping, amplitude damping, depolarizing and bit-flip.
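As a minimal illustration of the four noise models considered, the following numpy sketch applies each single-qubit channel, written in terms of its Kraus operators, to one qubit of a Bell state and reports the resulting fidelity. This is a toy calculation with an arbitrary noise strength, not the circuits executed on IBMQ Casablanca.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a density matrix
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho0 = np.outer(psi, psi)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def channels(p):
    """Single-qubit Kraus operators for the four noise models,
    with noise strength p."""
    return {
        "phase damping": [np.diag([1, np.sqrt(1 - p)]),
                          np.diag([0, np.sqrt(p)])],
        "amplitude damping": [np.array([[1, 0], [0, np.sqrt(1 - p)]]),
                              np.array([[0, np.sqrt(p)], [0, 0]])],
        "depolarizing": [np.sqrt(1 - 3 * p / 4) * I2]
                        + [np.sqrt(p / 4) * P for P in (X, Y, Z)],
        "bit-flip": [np.sqrt(1 - p) * I2, np.sqrt(p) * X],
    }

def fidelity_after_noise(kraus):
    """Apply the channel to the first qubit of the Bell state
    and return the fidelity <psi| rho |psi>."""
    rho = sum(np.kron(K, I2) @ rho0 @ np.kron(K, I2).conj().T
              for K in kraus)
    return float(np.real(psi.conj() @ rho @ psi))

for name, K in channels(0.2).items():
    print(f"{name}: F = {fidelity_after_noise(K):.3f}")
```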
A core challenge for superconducting quantum computers is to scale up the number of qubits in each processor without increasing noise or cross-talk. Distributing a quantum computer across nearby small qubit arrays, known as chiplets, could solve many problems associated with size. We propose a chiplet architecture over microwave links with potential to exceed monolithic performance on near-term hardware. We model and evaluate the chiplet architecture in a way that bridges the physical and network layers. We find concrete evidence that distributed quantum computing may accelerate the path toward useful and ultimately scalable quantum computers. In the long-term, short-range networks may underlie quantum computers just as local area networks underlie classical datacenters and supercomputers today.
Message forwarding protocols are protocols in which a chain of agents handles the transmission of a message. Each agent forwards the received message to the next agent in the chain. For example, TLS middleboxes act as intermediary agents in TLS, adding functionality such as filtering or compressing data. In such protocols, an attacker may attempt to bypass one or more intermediary agents. Such an agent-skipping attack can violate the security requirements of the protocol. Using the multiset rewriting model in the symbolic setting, we construct a comprehensive framework for such path protocols. In particular, we introduce a set of security goals related to path integrity: the notion that a message faithfully travels through the participants in the order intended by the initiating agent. We perform a security analysis of several such protocols, highlighting key attacks on modern protocols.
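The chaining idea behind path integrity can be illustrated with a toy sketch: each hop binds its authentication tag to the previous hop's tag, so skipping an agent breaks the chain. The use of HMAC with per-agent keys here is an illustrative simplification of our own, not one of the protocols analyzed in the paper.

```python
import hmac
import hashlib

def hop_tag(key: bytes, msg: bytes, prev_tag: bytes) -> bytes:
    # Each agent binds the message to the previous hop's tag,
    # so removing any intermediary changes the final tag.
    return hmac.new(key, msg + prev_tag, hashlib.sha256).digest()

def forward(msg: bytes, keys: list) -> bytes:
    """Tag produced after the message traverses agents holding
    `keys`, in the initiator's intended order."""
    tag = b""
    for k in keys:
        tag = hop_tag(k, msg, tag)
    return tag

def verify(msg: bytes, keys: list, tag: bytes) -> bool:
    """Check that the message passed through every agent in order."""
    return hmac.compare_digest(forward(msg, keys), tag)
```

An agent-skipping attack then shows up as a verification failure: recomputing the chain without one of the intended keys yields a different tag.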
ASBK (named after the authors' initials) is a recent blockchain protocol tackling data availability attacks against light nodes, employing two-dimensional Reed-Solomon codes to encode the list of transactions and a random sampling phase in which adversaries are forced to reveal information. In its original formulation, only codes with rate $1/4$ are considered, and the theoretical analysis provided requires computationally demanding formulas. This makes ASBK difficult to optimize in situations of practical interest. In this paper, we introduce a much simpler model for the protocol, which additionally supports the use of codes with arbitrary rate. This makes blockchains implementing ASBK much easier to design and optimize. Furthermore, the clearer view of the protocol allows some general features and considerations to be derived (e.g., node behaviour in networks with large participation). As a concrete application of our analysis, we consider relevant blockchain parameters and find network settings that minimize the amount of data downloaded by light nodes. Our results show that the protocol benefits from the use of codes defined over large finite fields, with code rates that may differ significantly from the originally proposed ones.
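The random sampling phase can be illustrated with a deliberately simplified model (not the formulas of the original ASBK analysis, nor the model introduced in this paper): a light node that samples $s$ of $n$ coded chunks uniformly without replacement hits at least one of $m$ withheld chunks with probability $1 - \binom{n-m}{s}/\binom{n}{s}$.

```python
from math import comb

def detection_prob(n: int, m: int, s: int) -> float:
    """Probability that sampling s of n coded chunks uniformly at
    random (without replacement) hits at least one of the m chunks
    withheld by the adversary. Toy model for illustration only."""
    if s > n - m:
        return 1.0  # more samples than available chunks: always caught
    return 1 - comb(n - m, s) / comb(n, s)
```

Even this toy formula shows the qualitative trade-off: detection probability rises quickly with the number of samples, which is what lets the protocol bound the data downloaded by light nodes.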
This paper describes the OpenIPDM software for modelling the deterioration of infrastructure using network-scale visual inspection data. In addition to deterioration state estimates, OpenIPDM provides functions for quantifying the effect of interventions, estimating the service life of an intervention, and generating synthetic data for verification purposes. Each of the aforementioned functions is accessible through an interactive graphical user interface. OpenIPDM is designed based on research work done on a network of bridges in the province of Quebec, so the concepts presented in the software have been validated in a real-world context. In addition, the software provides foundations for future developments in deterioration modelling as well as intervention planning.
Mechanics of materials is a classic engineering course that presents the fundamentals of strain and stress analysis to junior undergraduate students in several engineering majors. So far, material deformation and strain have been analyzed only through theoretical and numerical approaches, validated experimentally with expensive machines and tools. This paper presents a novel approach to strain and deformation analysis using quadcopters. We propose to treat quadcopters as a finite number of particles of a deformable body and apply the principles of continuum mechanics to illustrate the concepts of axial and shear deformation using quadcopter hardware in a $3$-D motion space. The outcome of this work can have a significant impact on undergraduate education by filling the gap between in-class learning and hardware experiments, introducing a new role for drones as "teachers" that provide a great opportunity to practice theoretical concepts of mechanics in a fruitful and understandable way.
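As an illustration of the continuum-mechanics viewpoint, a homogeneous deformation gradient $F$ mapping reference particle (quadcopter) positions $X$ to current positions $x \approx FX$ can be fitted by least squares, from which the Green-Lagrange strain follows. This is a generic textbook sketch with function names of our own choosing, not the paper's implementation.

```python
import numpy as np

def deformation_gradient(X: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Least-squares fit of a homogeneous deformation x ~ F @ X from
    reference positions X (d x N) and current positions x (d x N),
    e.g. quadcopter positions before and after the maneuver."""
    return x @ X.T @ np.linalg.inv(X @ X.T)

def green_lagrange_strain(F: np.ndarray) -> np.ndarray:
    """Green-Lagrange strain E = (F^T F - I)/2; its diagonal captures
    axial deformation and its off-diagonal entries shear deformation."""
    d = F.shape[0]
    return 0.5 * (F.T @ F - np.eye(d))
```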
Bitcoin is the most secure blockchain in the world, supported by the immense hash power of its Proof-of-Work miners, but it consumes a huge amount of energy. Proof-of-Stake (PoS) chains are energy-efficient and have fast finality and accountability, but face several fundamental security issues: susceptibility to non-slashable long-range safety attacks, non-slashable transaction censorship and stalling attacks, and the difficulty of bootstrapping new PoS chains from a low token valuation. We propose Babylon, a blockchain platform which combines the best of both worlds by reusing the immense Bitcoin hash power to enhance the security of PoS chains. Babylon provides a data-available timestamping service, securing PoS chains by allowing them to timestamp data-available block checkpoints, fraud proofs and censored transactions on Babylon. Babylon miners merge mine with Bitcoin, so the platform incurs zero additional energy cost. The security of a Babylon-enhanced PoS protocol is formalized by a cryptoeconomic security theorem establishing slashable safety and liveness guarantees.
The study of immune cellular composition is of great scientific interest in immunology, and multiple large-scale datasets have recently been generated to support this investigation. From the statistical point of view, immune cellular composition data are compositional data, which convey relative information: each element is positive and all elements together sum to a constant, which can generally be set to one. Standard statistical methods are not directly applicable to compositional data because they do not appropriately handle the correlations among its elements. As this type of data has become more widely available, the investigation of statistical strategies that account for the compositional features of the data has become increasingly important. In this paper, we review statistical methods for compositional data analysis and illustrate them in the context of immunology. Specifically, we focus on regression analyses using log-ratio and Dirichlet approaches, discuss their theoretical foundations, and illustrate their applications with immune cellular fraction data generated from colorectal cancer patients.
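As a minimal illustration of the log-ratio approach, the additive log-ratio (alr) transform expresses each part relative to a reference part, mapping a composition on the simplex to unconstrained Euclidean space where standard regression applies. A small numpy sketch (generic, not the paper's analysis code):

```python
import numpy as np

def alr(comp: np.ndarray, ref: int = -1) -> np.ndarray:
    """Additive log-ratio transform: log of each part relative to a
    reference part (default: the last), so a D-part composition maps
    to D-1 unconstrained coordinates."""
    comp = np.asarray(comp, dtype=float)
    return np.log(np.delete(comp, ref, axis=-1) / comp[..., [ref]])

def alr_inv(y: np.ndarray) -> np.ndarray:
    """Inverse alr: map coordinates back to a composition summing
    to one (reference part appended last)."""
    y = np.asarray(y, dtype=float)
    e = np.exp(np.concatenate([y, np.zeros(y.shape[:-1] + (1,))],
                              axis=-1))
    return e / e.sum(axis=-1, keepdims=True)
```

For example, a cell-fraction vector such as `[0.2, 0.3, 0.5]` is transformed, regressed on covariates with ordinary methods, and mapped back to the simplex with `alr_inv`.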
We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09, arXiv:0811.3171] for low-rank matrices [Wossnig, Zhao, and Prakash, Physical Review Letters'18, arXiv:1704.06174], when the input matrix $A$ is stored in a data structure applicable for QRAM-based state preparation. Namely, suppose we are given an $A \in \mathbb{C}^{m\times n}$ with minimum non-zero singular value $\sigma$ which supports certain efficient $\ell_2$-norm importance sampling queries, along with a $b \in \mathbb{C}^m$. Then, for some $x \in \mathbb{C}^n$ satisfying $\|x - A^+b\| \leq \varepsilon\|A^+b\|$, we can output a measurement of $|x\rangle$ in the computational basis and output an entry of $x$ with classical algorithms that run in $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^6}{\sigma^{12}\varepsilon^4}\big)$ and $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^2}{\sigma^8\varepsilon^4}\big)$ time, respectively. This improves on previous "quantum-inspired" algorithms in this line of research by at least a factor of $\frac{\|A\|^{16}}{\sigma^{16}\varepsilon^2}$ [Chia, Gily\'en, Li, Lin, Tang and Wang, STOC'20, arXiv:1910.06151]. As a consequence, we show that quantum computers can achieve at most a factor-of-12 speedup for linear regression in this QRAM data structure setting and related settings. Our work applies techniques from sketching algorithms and optimization to the quantum-inspired literature. Unlike earlier works, ours is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired setting, for comparison against future quantum computers.
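The $\ell_2$-norm importance sampling queries assumed of the data structure can be sketched generically: draw rows with probability proportional to their squared norms and rescale so the resulting sketch is an unbiased estimator of $A^\dagger A$. This is a standard row-sampling sketch for illustration, not the algorithm of the paper.

```python
import numpy as np

def sample_rows(A: np.ndarray, k: int, rng) -> np.ndarray:
    """Draw k rows of A with probability proportional to their squared
    l2 norms -- the kind of importance-sampling query a QRAM-style data
    structure supports -- rescaled so that E[S.T @ S] = A.T @ A."""
    p = np.linalg.norm(A, axis=1) ** 2
    p = p / p.sum()                      # row-norm sampling distribution
    idx = rng.choice(A.shape[0], size=k, p=p)
    return A[idx] / np.sqrt(k * p[idx])[:, None]
```

The rescaling by $1/\sqrt{k\,p_i}$ is what makes the sketch unbiased: each sampled outer product $a_i a_i^\dagger / p_i$ has expectation $A^\dagger A$ under the row-norm distribution.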
In recent years, emerging storage hardware technologies have focused on divergent goals: better performance or lower cost-per-bit. Correspondingly, data systems that employ these technologies are typically optimized either to be fast (but expensive) or cheap (but slow). We take a different approach: by architecting a storage engine to natively utilize two tiers of fast and low-cost storage technologies, we can achieve a Pareto-efficient balance between performance and cost-per-bit. This paper presents the design and implementation of PrismDB, a novel key-value store that simultaneously exploits two extreme ends of the spectrum of modern NVMe storage technologies (3D XPoint and QLC NAND). Our key contribution is a mechanism for efficiently migrating and compacting data between the two storage tiers. Inspired by the classic cost-benefit analysis of log cleaning, we develop a new algorithm for multi-tiered storage compaction that balances the benefit of reclaiming space for hot objects in fast storage with the cost of compaction I/O in slow storage. Compared to the standard use of RocksDB on flash in datacenters today, PrismDB's average throughput on tiered storage is 3.3$\times$ faster and its read tail latency is 2$\times$ better, using equivalently-priced hardware.
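The log-cleaning-style cost-benefit idea can be sketched as a simple victim-selection score; the formula and field names below are an illustrative simplification of our own, not PrismDB's exact heuristic.

```python
def compaction_score(reclaimable_bytes: int, age_seconds: float,
                     io_bytes: int) -> float:
    """Cost-benefit score in the spirit of log cleaning: space that
    compaction would reclaim in the fast tier, weighted by data age
    (a proxy for stability), per byte of compaction I/O issued to
    the slow tier."""
    return reclaimable_bytes * age_seconds / io_bytes

def pick_victim(candidates: list) -> dict:
    """Choose the compaction candidate with the highest
    benefit-to-cost ratio."""
    return max(candidates, key=lambda c: compaction_score(
        c["reclaimable_bytes"], c["age_seconds"], c["io_bytes"]))
```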
We prove a lower bound on the probability of Shor's order-finding algorithm successfully recovering the order $r$ in a single run. The bound implies that by performing two limited searches in the classical post-processing part of the algorithm, a high success probability can be guaranteed, for any $r$, without re-running the quantum part or increasing the exponent length compared to Shor. Asymptotically, in the limit as $r$ tends to infinity, the probability of successfully recovering $r$ in a single run tends to one. Already for moderate $r$, a high success probability exceeding e.g. $1 - 10^{-4}$ can be guaranteed. As corollaries, we prove analogous results for the probability of completely factoring any integer $N$ in a single run of the order-finding algorithm.
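The order $r$ at the heart of the algorithm, and the way a recovered order yields factors, can be illustrated with a brute-force sketch (the quantum part of Shor's algorithm is what finds $r$ efficiently; this toy computes it directly for tiny moduli):

```python
from math import gcd

def multiplicative_order(g: int, N: int) -> int:
    """Order r of g modulo N: the smallest r > 0 with g^r = 1 (mod N).
    Brute force, feasible only for tiny N; order finding is the step
    Shor's quantum algorithm performs efficiently."""
    assert gcd(g, N) == 1
    x, r = g % N, 1
    while x != 1:
        x = (x * g) % N
        r += 1
    return r

# Classical post-processing: if r is even and g^(r/2) != -1 (mod N),
# then gcd(g^(r/2) +/- 1, N) yields non-trivial factors of N.
N, g = 15, 7
r = multiplicative_order(g, N)          # r = 4 for this example
h = pow(g, r // 2, N)
print(gcd(h - 1, N), gcd(h + 1, N))
```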