We consider metrical task systems on general metric spaces with $n$ points, and show that any fully randomized algorithm can be turned into a randomized algorithm that uses only $2\log n$ random bits and achieves the same competitive ratio up to a factor of $2$. This provides the first order-optimal barely random algorithms for metrical task systems, i.e., algorithms whose number of random bits does not depend on the number of requests addressed to the system. We put forward an equivalent view that we call collective metrical task systems, where $k$ agents in a metrical task system team up and suffer the average of their individual costs. Our results imply that such a team can be $O(\log^2 n)$-competitive as soon as $k\geq n^2$ (by comparison, a single agent is $\Omega(n)$-competitive at best). We discuss implications for various aspects of online decision making, such as distributed systems, transaction costs, and advice complexity, suggesting broad applicability.
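The counting behind this equivalence is elementary (the following is a reconstruction of the reasoning, not text from the paper): an algorithm that draws $b$ independent uniform random bits is a uniform mixture of $2^b$ deterministic algorithms $\mathcal{A}_1,\dots,\mathcal{A}_{2^b}$, so on every request sequence $\sigma$,
$$\mathbb{E}\big[\mathrm{cost}(\sigma)\big] \;=\; \frac{1}{2^b}\sum_{i=1}^{2^b}\mathrm{cost}_{\mathcal{A}_i}(\sigma),$$
which is exactly the average cost of a team of $k=2^b$ agents, each running one of the $\mathcal{A}_i$. With $b = 2\log_2 n$ this gives $k = n^2$, matching the collective viewpoint above.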
In statistical mechanics, computing the partition function is generally difficult. An approximation method using a variational autoregressive network (VAN) has been proposed recently. This approach has the advantage of directly computing the generation probabilities while drawing a large number of samples. The present study introduces a novel approximation method that employs samples drawn from quantum annealing machines, which are empirically assumed to follow the Gibbs-Boltzmann distribution, in conjunction with VAN. When applied to the finite-size Sherrington-Kirkpatrick model, the proposed method demonstrates improved accuracy compared to the traditional VAN approach and to other approximate methods, such as the widely used naive mean-field approximation.
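For background, the standard VAN objective (this is textbook material, not the paper's specific combination with annealer samples, which is not reproduced here) minimizes the variational free energy of an autoregressive model $q_\theta$ over spin configurations $\boldsymbol{s}$,
$$F_\theta \;=\; \mathbb{E}_{\boldsymbol{s}\sim q_\theta}\!\big[E(\boldsymbol{s}) + \beta^{-1}\ln q_\theta(\boldsymbol{s})\big] \;\ge\; F, \qquad \nabla_\theta F_\theta \;=\; \mathbb{E}_{\boldsymbol{s}\sim q_\theta}\!\big[\big(E(\boldsymbol{s}) + \beta^{-1}\ln q_\theta(\boldsymbol{s})\big)\,\nabla_\theta \ln q_\theta(\boldsymbol{s})\big],$$
with, e.g., $E(\boldsymbol{s}) = -\sum_{i<j} J_{ij} s_i s_j$ for the Sherrington-Kirkpatrick model; since $F_\theta \ge F$, the estimate $e^{-\beta F_\theta}$ lower-bounds the partition function $Z$.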
It was recently conjectured that every component of a discrete rational dynamical system is a solution to an algebraic difference equation that is linear in its highest-shift term (a quasi-linear equation). Holonomic sequences are trivially seen as solutions to such dynamical systems. We prove that the conjecture holds for holonomic sequences and propose two algorithms for converting holonomic recurrence equations into such quasi-linear equations. The two algorithms differ in their efficiency and in the minimality of the orders of their outputs.
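A standard toy example (ours, not taken from the paper) illustrates the kind of conversion involved: the holonomic sequence $a(n) = n!$ satisfies $a(n+1) = (n+1)\,a(n)$; eliminating the explicit $n$ via $n+1 = a(n+1)/a(n)$ and $a(n+2) = (n+2)\,a(n+1)$ yields
$$a(n)\,a(n+2) \;=\; a(n+1)^2 + a(n)\,a(n+1),$$
an algebraic difference equation that is linear in its highest-shift term $a(n+2)$, i.e. quasi-linear.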
Multiple sequence alignment (MSA) is a fundamental algorithm in bioinformatics. In situations where the alignment must be kept private while other information, such as the input sequences and the alignment score, is revealed, a zero-knowledge proof can be used. In this paper, a validator checks the consistency between the input sequences and the alignment, and between the alignment and the alignment score. The validator is written in the Circom language, which is compiled into a circuit. Using a zero-knowledge proof system, zkSNARK, a cryptographic proof is generated for the circuit and its input. This proof demonstrates that all inputs are consistent without revealing the actual alignment.
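As a plain-language reference for what such a circuit must enforce, the sketch below re-implements the two consistency checks in ordinary Python for a pairwise alignment; the gap symbol, the scoring values, and the pairwise restriction are simplifying assumptions, and this is not the paper's Circom validator.

```python
# Reference (non-ZK) version of the two consistency checks; the scoring scheme,
# gap symbol, and pairwise setting are assumptions for illustration only.
GAP = "-"
MATCH, MISMATCH, GAP_PENALTY = 1, -1, -2

def consistent(seq_a, seq_b, aligned_a, aligned_b, claimed_score):
    # 1) Alignment vs. inputs: dropping gap characters must recover the raw sequences.
    if aligned_a.replace(GAP, "") != seq_a or aligned_b.replace(GAP, "") != seq_b:
        return False
    if len(aligned_a) != len(aligned_b):
        return False
    # 2) Alignment vs. score: recomputing the column scores must match the claim.
    score = 0
    for x, y in zip(aligned_a, aligned_b):
        if x == GAP or y == GAP:
            score += GAP_PENALTY
        elif x == y:
            score += MATCH
        else:
            score += MISMATCH
    return score == claimed_score

assert consistent("ACGT", "AGT", "ACGT", "A-GT", 1)
```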
We define QSE, a symbolic execution framework for quantum programs, by integrating symbolic variables into quantum states and the outcomes of quantum measurements. The soundness of QSE is established through a theorem that ensures the correctness of symbolic execution with respect to the operational semantics. We further introduce symbolic stabilizer states, which symbolize the phases of stabilizer generators, for the efficient analysis of quantum error correction (QEC) programs. Within the QSE framework, we can use symbolic expressions to characterize the possible discrete Pauli errors in QEC, providing a significant improvement over existing methods that rely on sampling with simulators. We implement QSE, with support for symbolic stabilizer states, in a prototype tool named QuantumSE.jl. Our experiments on representative QEC codes, including quantum repetition codes, Kitaev's toric codes, and quantum Tanner codes, demonstrate the efficiency of QuantumSE.jl for debugging QEC programs with over 1000 qubits. In addition, by substituting concrete values into the symbolic expressions of measurement results, QuantumSE.jl also provides a sampling feature for stabilizer circuits. Despite a longer initialization time than the state-of-the-art stabilizer simulator, Google's Stim, QuantumSE.jl offers a faster sampling rate in our experiments.
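To convey the flavor of keeping measurement outcomes symbolic in unknown Pauli errors, here is a toy sketch for a 3-qubit repetition code; it uses sympy and is in no way QuantumSE.jl's API or internal representation.

```python
# Toy illustration: syndromes of a 3-qubit repetition code as symbolic
# functions of unknown X-error indicator bits (not QuantumSE.jl's API).
from sympy import symbols, Xor

# e_i = 1 iff an X error hit data qubit i.
e1, e2, e3 = symbols("e1 e2 e3")

# The Z1Z2 and Z2Z3 stabilizer measurements flip exactly when an odd number
# of X errors acts on their support, so each syndrome is an XOR of error bits.
s1 = Xor(e1, e2)
s2 = Xor(e2, e3)

# "Sampling": substituting a concrete error pattern yields concrete syndromes.
concrete = {e1: False, e2: True, e3: False}    # single X error on qubit 2
print(bool(s1.subs(concrete)), bool(s2.subs(concrete)))   # True True
```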
Several cryptographic systems depend on the computational difficulty of reversing cryptographic hash functions. Robust hash functions transform inputs to outputs in such a way that the inputs cannot later be retrieved in a reasonable amount of time, even if the outputs and the function that created them are known. Consequently, hash functions can be cryptographically secure, and they are employed in encryption, authentication, and other security methods. It has been suggested that such cryptographically secure hash functions will play a critical role in the era of post-quantum cryptography (PQC), as they do in conventional systems. In this work, we introduce a procedure that leverages the principle of reversibility to generate circuits that invert hash functions. We provide a proof-of-concept implementation and describe methods for scaling the hash-function inversion approach. Specifically, we implement one manifestation of the algorithm as part of a more general automated quantum circuit synthesis, compilation, and optimization toolkit. We illustrate the production of reversible circuits for cryptographic hash functions that inherently provide the inverse of the function, and we describe data structures that increase the scalability of the inversion approach.
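The core reversibility principle is easy to state: a circuit built only from reversible gates is inverted by applying the inverse gates in reverse order. The toy sketch below illustrates this on a 4-bit reversible mixing circuit; it is not a real hash function and not the paper's toolkit.

```python
# Toy 4-bit reversible "mixing" circuit; every gate used is its own inverse,
# so the inverse circuit is simply the gate list reversed.
def x(bits, t):            # NOT gate
    bits[t] ^= 1

def cx(bits, c, t):        # CNOT gate
    bits[t] ^= bits[c]

def ccx(bits, c1, c2, t):  # Toffoli gate
    bits[t] ^= bits[c1] & bits[c2]

circuit = [(x, (0,)), (cx, (0, 2)), (ccx, (1, 2, 3)), (cx, (3, 1))]

def run(gates, bits):
    bits = list(bits)
    for gate, args in gates:
        gate(bits, *args)
    return bits

inverse = list(reversed(circuit))   # inverse circuit obtained "for free"

msg = [1, 0, 1, 1]
digest = run(circuit, msg)
assert run(inverse, digest) == msg  # inversion recovers the preimage
```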
We investigate the unsupervised constituency parsing task, which organizes the words and phrases of a sentence into a hierarchical structure without using linguistically annotated data. We observe that existing unsupervised parsers capture different aspects of parsing structures, which can be leveraged to enhance unsupervised parsing performance. To this end, we propose a notion of "tree averaging," based on which we further propose a novel ensemble method for unsupervised parsing. To improve inference efficiency, we further distill the ensemble knowledge into a student model; such an ensemble-then-distill process is an effective approach to mitigating the over-smoothing problem found in common multi-teacher distillation methods. Experiments show that our method surpasses all previous approaches, consistently demonstrating its effectiveness and robustness across runs, with different ensemble components, and under domain-shift conditions.
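One natural way to instantiate "tree averaging" is to pick the binary tree whose constituents agree most, on average, with the teachers' trees; the sketch below implements that span-overlap reading with a simple chart search. It is our own illustration of the idea, not necessarily the paper's exact objective.

```python
# Pick the binary tree over n words that maximizes total agreement with the
# spans of several teacher parsers (one plausible reading of "tree averaging").
from collections import Counter
from functools import lru_cache

def average_tree(n, teacher_spans):
    """teacher_spans: one set of half-open spans {(i, j), ...} per teacher."""
    weight = Counter(s for spans in teacher_spans for s in spans)

    @lru_cache(maxsize=None)
    def best(i, j):
        # Best (score, spans) for a binary tree over words i..j-1.
        if j - i == 1:
            return weight[(i, j)], frozenset({(i, j)})
        score, spans = max(
            ((best(i, k)[0] + best(k, j)[0], best(i, k)[1] | best(k, j)[1])
             for k in range(i + 1, j)),
            key=lambda cand: cand[0],
        )
        return score + weight[(i, j)], spans | {(i, j)}

    return best(0, n)[1]

teachers = [
    {(0, 2), (2, 4), (0, 4)},   # ((w0 w1) (w2 w3))
    {(0, 2), (0, 3), (0, 4)},   # (((w0 w1) w2) w3)
    {(0, 2), (2, 4), (0, 4)},
]
print(sorted(average_tree(4, teachers)))   # majority spans (0,2), (2,4) survive
```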
We develop a novel approach for efficiently applying the variational quantum linear solver (VQLS) in the context of structured sparse matrices. Such matrices frequently arise in the numerical solution of partial differential equations, which are ubiquitous in science and engineering. Conventionally, the Pauli basis is used for the linear combination of unitaries (LCU) decomposition of the underlying matrix to facilitate the evaluation of the global/local VQLS cost functions. However, in the worst case the Pauli basis can result in a number of LCU terms that scales quadratically with the matrix size. We show that by using an alternate basis one can better exploit the sparsity and underlying structure of the matrix, leading to a number of tensor-product terms that scales only logarithmically with the matrix size. Since this new basis is composed of non-unitary operators, we employ the concept of unitary completion to design efficient quantum circuits for computing the global/local VQLS cost functions. We compare our approach with related concepts in the literature, including unitary dilation and measurement in the Bell basis, and discuss its pros and cons using VQLS applied to the heat equation as an example.
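For context, the standard global VQLS cost function (background material, not the paper's modified construction) makes explicit why the number of LCU terms drives the measurement cost:
$$C_G \;=\; 1 - \frac{|\langle b|\Psi\rangle|^2}{\langle\Psi|\Psi\rangle}, \qquad |\Psi\rangle = A\,|x(\theta)\rangle, \qquad A = \sum_{l=1}^{L} c_l A_l,$$
so that $\langle\Psi|\Psi\rangle = \sum_{l,m} c_l^* c_m \langle x|A_l^\dagger A_m|x\rangle$ and $|\langle b|\Psi\rangle|^2 = \sum_{l,m} c_l^* c_m \langle x|A_l^\dagger|b\rangle\langle b|A_m|x\rangle$ each require estimating $O(L^2)$ circuit primitives; reducing $L$ from polynomial in the matrix size (the worst case in the Pauli basis) to logarithmic therefore directly reduces the number of circuits to evaluate.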
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality (`late fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
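The token routing behind fusion bottlenecks can be sketched in a few lines: each modality attends over its own tokens plus a small set of shared bottleneck tokens, and the bottleneck updates are averaged across modalities, so cross-modal information flows only through that narrow channel. The code below is a deliberately stripped-down illustration (identity projections, single head, no MLP or training), not the released model.

```python
# Minimal sketch of a "fusion bottleneck" layer for two modalities.
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(tokens, d):
    # Toy single-head attention with identity Q/K/V projections.
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def bottleneck_fusion_layer(modality_tokens, bottleneck):
    """modality_tokens: list of (n_m, d) arrays; bottleneck: (b, d) array."""
    d = bottleneck.shape[1]
    new_tokens, bottleneck_updates = [], []
    for x in modality_tokens:
        joint = np.concatenate([x, bottleneck], axis=0)   # [modality ; bottleneck]
        out = self_attention(joint, d)
        new_tokens.append(out[: x.shape[0]])
        bottleneck_updates.append(out[x.shape[0]:])
    # Cross-modal information flows only through the averaged bottleneck tokens.
    return new_tokens, np.mean(bottleneck_updates, axis=0)

rng = np.random.default_rng(0)
video, audio = rng.normal(size=(16, 32)), rng.normal(size=(8, 32))
tokens, z = [video, audio], rng.normal(size=(4, 32))      # 4 shared bottleneck tokens
for _ in range(2):                                        # two fusion layers
    tokens, z = bottleneck_fusion_layer(tokens, z)
```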
Behaviors of synthetic characters in current military simulations are limited, since they are generally generated by rule-based and reactive computational models with minimal intelligence. Such computational models cannot adapt to reflect the experience of the characters, resulting in brittle intelligence even for the most effective behavior models devised via costly and labor-intensive processes. Observation-based behavior model adaptation, which leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge, can address the issues in existing computational behavior models and create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first is a Unity-based simulation environment, the Rapid Integration and Development Environment (RIDE), which supports One World Terrain (OWT) models and is capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments and utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture, which augments the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain, an essential step toward bringing machine learning into military simulations.
Embedding entities and relations into a continuous multi-dimensional vector space has become the dominant approach to knowledge graph embedding in representation learning. However, most existing models fail to represent hierarchical knowledge, such as the similarities and dissimilarities of entities within one domain. We propose to learn domain representations on top of existing knowledge graph embedding models, such that entities with similar attributes are organized into the same domain. Such hierarchical knowledge of domains can provide further evidence for link prediction. Experimental results show that domain embeddings yield a significant improvement over the most recent state-of-the-art baseline knowledge graph embedding models.
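The abstract does not spell out how domain representations enter the scoring function, so the sketch below shows just one hypothetical way a domain-compatibility term could be layered on a TransE-style base score; all names and the weighting $\lambda$ are our assumptions for illustration.

```python
# Hypothetical illustration: a TransE-style base score plus a soft
# domain-compatibility term (not the paper's actual formulation).
import numpy as np

def transe_score(h, r, t):
    return -np.linalg.norm(h + r - t)          # base KG embedding score

def domain_membership(e, domains):
    # Softmax over cosine similarities to learned domain vectors.
    sims = domains @ e / (np.linalg.norm(domains, axis=1) * np.linalg.norm(e))
    exp = np.exp(sims - sims.max())
    return exp / exp.sum()

def score(h, r, t, domains, lam=0.5):
    # Triples whose head and tail fall into compatible domains score higher.
    compat = domain_membership(h, domains) @ domain_membership(t, domains)
    return transe_score(h, r, t) + lam * compat

rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 16))             # entity/relation embeddings
domains = rng.normal(size=(4, 16))             # four learned domain vectors
print(score(h, r, t, domains))
```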