Quantum error correction (QEC) is essential for operating quantum computers in the presence of noise. Here, we accurately decode arbitrary Calderbank-Shor-Steane (CSS) codes via the maximum satisfiability (MaxSAT) problem. We show how to map the quantum maximum-likelihood decoding problem for CSS codes of arbitrary geometry and parity-check weight onto MaxSAT problems. We incorporate the syndrome measurements as hard clauses, while qubit and measurement error probabilities, including biased and non-uniform ones, are encoded as soft MaxSAT clauses. For the code capacity of color codes on a hexagonal lattice, our decoder has a higher threshold and superior scaling in noise suppression compared to belief propagation with ordered statistics post-processing (BP-OSD), while showing similar scaling in computational cost. Further, we decode surface codes and recently proposed bivariate quantum low-density parity-check (QLDPC) codes, where we find lower error rates than with BP-OSD. Finally, we connect the complexity of MaxSAT decoding to a computational phase transition controlled by the clause density of the MaxSAT problem and show that our mapping always lies in the computationally ``easy'' phase. Our MaxSAT decoder can be further parallelised or implemented on ASICs and FPGAs, promising further speedups of several orders of magnitude. Our work provides a flexible platform towards practical applications on quantum computers.
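As a minimal sketch of the kind of hard/soft clause encoding described above (an illustrative toy, not the decoder used in the paper), the following Python snippet uses the python-sat package: syndrome constraints become hard XOR clauses and per-qubit error priors become weighted soft clauses. The check matrix, syndrome, and priors below are assumed purely for illustration.

```python
# Toy maximum-likelihood decoding as weighted MaxSAT (illustrative sketch only).
# Requires: pip install python-sat
import math
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

H = [[1, 1, 0],         # hypothetical parity-check matrix (3 qubits, 2 checks)
     [0, 1, 1]]
syndrome = [1, 0]       # hypothetical measured syndrome
p = [0.01, 0.05, 0.01]  # hypothetical per-qubit error probabilities (non-uniform)

wcnf = WCNF()
# Hard clauses: each check's XOR over its support must equal the syndrome bit.
# For the weight-2 checks used here, x_i XOR x_j = 1 is (x_i or x_j) and (-x_i or -x_j);
# x_i XOR x_j = 0 is (x_i or -x_j) and (-x_i or x_j). Variables are 1-indexed.
for row, s in zip(H, syndrome):
    i, j = [k + 1 for k, h in enumerate(row) if h]
    if s == 1:
        wcnf.append([i, j])
        wcnf.append([-i, -j])
    else:
        wcnf.append([i, -j])
        wcnf.append([-i, j])
# Soft clauses: prefer "no error" on qubit k with weight log((1-p_k)/p_k), scaled to integers.
for k, pk in enumerate(p):
    wcnf.append([-(k + 1)], weight=int(round(1000 * math.log((1 - pk) / pk))))

with RC2(wcnf) as solver:
    model = solver.compute()
print([1 if lit > 0 else 0 for lit in model])  # most likely error pattern, here [1, 0, 0]
```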
In this paper, we establish necessary and sufficient conditions for quasi-cyclic (QC) codes of even index to be symplectic self-orthogonal. Subsequently, we present lower and upper bounds on the minimum symplectic distances of a class of $1$-generator QC codes and their symplectic dual codes by decomposing the code spaces. As an application, we construct numerous new binary symplectic self-orthogonal QC codes with excellent parameters, leading to $117$ record-breaking quantum error-correcting codes.
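For reference, the symplectic self-orthogonality condition in question is the standard one over $\mathbb{F}_2^{2n}$ (background terminology, not a result of the paper):
\[
  \langle (a \mid b), (c \mid d) \rangle_s = a \cdot d - b \cdot c, \qquad
  C \ \text{is symplectic self-orthogonal} \iff C \subseteq C^{\perp_s} := \{ v : \langle v, u \rangle_s = 0 \ \text{for all } u \in C \}.
\]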
In this communication, we address the performance assessment of automatic target detection, recognition and tracking (ATD/R/T) algorithms. We propose a complete evaluation methodology covering the development of objective image datasets and the definition of metrics adapted to the different tasks (detection, recognition and tracking). We present performance results currently being produced in a French MoD program called 2ACI (``Acquisition Automatique de Cibles par Imagerie'', automatic target acquisition by imaging).
In Polyamorous Scheduling, we are given an edge-weighted graph and must find a periodic schedule of matchings in this graph that minimizes the maximal weighted waiting time between consecutive occurrences of the same edge. This NP-hard problem generalises Bamboo Garden Trimming and is motivated by the need to find schedules of pairwise meetings in a complex social group. We present two different analyses of an approximation algorithm based on the Reduce-Fastest heuristic, from which we obtain first a 6-approximation and then a 5.24-approximation for Polyamorous Scheduling. We also strengthen the known hardness result to the bipartite case: unless P = NP, there is no polynomial-time $(1+\delta)$-approximation algorithm for the Optimisation Polyamorous Scheduling problem for any $\delta < \frac{1}{12}$. The decision version of Polyamorous Scheduling has a notion of density, similar to that of Pinwheel Scheduling, where problems with density below the threshold are guaranteed to admit a schedule (cf. the recently proven 5/6 conjecture, Kawamura, STOC 2024). We establish the existence of a similar threshold for Polyamorous Scheduling and give the first non-trivial bounds on the poly density threshold.
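As a rough illustration of a Reduce-Fastest-style greedy (the urgency rule and tie-breaking below are assumptions made for exposition, not the algorithm analysed in the paper), one might schedule, in each round, a matching built greedily from the currently most urgent edges:

```python
# Illustrative Reduce-Fastest-style greedy for Polyamorous Scheduling (a sketch,
# not the algorithm analysed in the paper). Edges are (u, v, weight); in each
# round we greedily build a matching from edges with the largest weighted wait.
def greedy_poly_schedule(edges, rounds):
    last = {e[:2]: 0 for e in edges}  # last round in which each pair met
    schedule = []
    for t in range(1, rounds + 1):
        # urgency = edge weight * time since the pair last met
        by_urgency = sorted(edges, key=lambda e: e[2] * (t - last[e[:2]]), reverse=True)
        busy, matching = set(), []
        for u, v, w in by_urgency:
            if u not in busy and v not in busy:
                matching.append((u, v))
                busy.update((u, v))
                last[(u, v)] = t
        schedule.append(matching)
    return schedule

# Tiny example: a triangle with one heavy edge.
print(greedy_poly_schedule([("a", "b", 3.0), ("b", "c", 1.0), ("a", "c", 1.0)], rounds=4))
```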
In this article, we introduce a notion of reducibility for partial functions on the natural numbers, which we call subTuring reducibility. One important aspect is that the subTuring degrees correspond to the structure of the realizability subtoposes of the effective topos. We show that the subTuring degrees (that is, the realizability subtoposes of the effective topos) form a dense non-modular (thus, non-distributive) lattice. We also show that there is a nonzero join-irreducible subTuring degree (which implies that there is a realizability subtopos of the effective topos that cannot be decomposed into two smaller realizability subtoposes).
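For context, join-irreducibility is meant here in the standard lattice-theoretic sense: a degree $a$ is nonzero join-irreducible when
\[
  a \neq 0 \quad\text{and}\quad \big( a = b \vee c \implies a = b \ \text{or}\ a = c \big) \quad \text{for all } b, c.
\]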
Recently, large language models (LLMs) have achieved significant progress in automated code generation. Despite their strong instruction-following capabilities, these models frequently struggle to align with user intent in coding scenarios. In particular, they are hampered by datasets that lack diversity and fail to cover specialized tasks or edge cases. Furthermore, limitations of supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) lead to failures in generating precise, human-intent-aligned code. To tackle these challenges and improve code generation performance for automated programming systems, we propose Feedback-driven Adaptive Long/short-term memory reinforced Coding Optimization (i.e., FALCON). FALCON is structured into two hierarchical levels. At the global level, long-term memory improves code quality by retaining and applying learned knowledge. At the local level, short-term memory allows for the incorporation of immediate feedback from compilers and AI systems. Additionally, we introduce meta-reinforcement learning with feedback rewards to solve the global-local bi-level optimization problem and enhance the model's adaptability across diverse code generation tasks. Extensive experiments demonstrate that our technique achieves state-of-the-art performance, leading other reinforcement learning methods by more than 4.5 percentage points on the MBPP benchmark and 6.1 percentage points on the HumanEval benchmark. The open-source code is publicly available at //github.com/titurte/FALCON.
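The following Python sketch shows one generic way a short-term feedback loop over compiler diagnostics could be structured; every name and the iteration rule are illustrative assumptions and do not describe FALCON's actual implementation.

```python
# Generic sketch of a short-term feedback loop for code generation
# (illustrative assumptions only; not FALCON's implementation).
def compile_feedback(code: str) -> tuple[bool, str]:
    """Run a syntax check and return (success, diagnostic message)."""
    try:
        compile(code, "<candidate>", "exec")
        return True, ""
    except SyntaxError as err:
        return False, f"line {err.lineno}: {err.msg}"

def refine(generate, prompt: str, max_iters: int = 3) -> str:
    """Ask `generate` (any LLM call) for code, feeding back compiler diagnostics."""
    short_term_memory = []  # recent diagnostics for this task only
    code = generate(prompt)
    for _ in range(max_iters):
        ok, diag = compile_feedback(code)
        if ok:
            break
        short_term_memory.append(diag)
        code = generate(prompt + "\nFix these errors:\n" + "\n".join(short_term_memory))
    return code
```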
We propose a deterministic method to find all holographic entropy inequalities and prove the completeness of our method. We use a triality between holographic entropy inequalities, contraction maps, and partial cubes. More specifically, the validity of a holographic entropy inequality is implied by the existence of a contraction map, which we prove to be equivalent to finding an isometric embedding of a contracted graph. Thus, by virtue of the completeness of the contraction-map proof method, the problem of finding all holographic entropy inequalities is equivalent to that of finding all contraction maps, which we translate into the problem of finding all image-graph partial cubes. We give an algorithmic solution to this problem and characterize the complexity of our method. We also demonstrate interesting by-products, most notably a procedure to generate candidate quantum entropy inequalities.
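To make the graph-theoretic reformulation concrete, the toy Python sketch below checks whether a given vertex map between two small graphs preserves all pairwise shortest-path distances, i.e. is an isometric embedding; the graphs and map are hypothetical inputs, and this is not the paper's search algorithm.

```python
# Check whether a vertex map f: G -> H preserves all pairwise shortest-path
# distances (an isometric embedding). Toy illustration, not the paper's algorithm.
from collections import deque

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_isometric_embedding(adj_G, adj_H, f):
    for u in adj_G:
        dG = bfs_dist(adj_G, u)
        dH = bfs_dist(adj_H, f[u])
        if any(dH.get(f[v]) != dG[v] for v in adj_G):
            return False
    return True

# Path a-b-c mapped into a 4-cycle: all distances are preserved.
P = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_isometric_embedding(P, C4, {"a": 0, "b": 1, "c": 2}))  # True
```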
We consider the problem of sampling a high-dimensional multimodal target probability measure. We assume that a good proposal kernel to move only a subset of the degrees of freedom (also known as collective variables) is known a priori. This proposal kernel can, for example, be built using normalizing flows. We show how to extend the move from the collective-variable space to the full space and how to implement an accept-reject step in order to obtain a chain that is reversible with respect to the target probability measure. The accept-reject step does not require knowledge of the marginal of the original measure in the collective variables (namely, the free energy). The obtained algorithm admits several variants, some of them very close to methods previously proposed in the literature. We show how the resulting acceptance ratio can be expressed in terms of the work appearing in the Jarzynski-Crooks equality, at least for some variants. Numerical illustrations demonstrate the efficiency of the approach on various simple test cases and allow us to compare the variants of the algorithm.
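As a generic illustration of a reversible move that only updates a subset of coordinates (a textbook Metropolis-Hastings step with a symmetric proposal, not the specific collective-variable extension developed in the paper):

```python
# Generic Metropolis-Hastings step that only moves a subset of coordinates
# (the "collective variables"); a standard construction shown for illustration.
import numpy as np

def mh_subset_step(x, log_target, cv_idx, rng, step=0.5):
    """Propose a symmetric random-walk move on the coordinates listed in cv_idx."""
    proposal = x.copy()
    proposal[cv_idx] += step * rng.standard_normal(len(cv_idx))
    log_alpha = log_target(proposal) - log_target(x)  # symmetric proposal cancels
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True
    return x, False

# Toy bimodal target in 2D; only coordinate 0 is treated as a collective variable.
def log_target(x):
    return np.logaddexp(-0.5 * np.sum((x - 2) ** 2), -0.5 * np.sum((x + 2) ** 2))

rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1000):
    x, _ = mh_subset_step(x, log_target, cv_idx=[0], rng=rng)
print(x)
```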
In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.
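For orientation, maxitivity in the classical pointwise-order case reads as follows (the paper's generalization replaces the pointwise order by an arbitrary preorder):
\[
  \phi(f \vee g) = \max\{\phi(f), \phi(g)\} \qquad \text{for all } f, g,
\]
where $f \vee g$ denotes the pointwise maximum.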
Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural sciences. Today, AI has started to advance natural sciences by improving, accelerating, and enabling our understanding of natural phenomena at a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). Being an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science; namely, AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density), through the atomic (molecules, proteins, materials, and interactions), to the macroscopic (fluids, climate, and subsurface) scales and form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques to achieve equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found to be useful. We strive to be thorough and unified and hope this initial effort may trigger more community interest and effort to further advance AI4Science.
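As a toy instance of the equivariance property emphasised above (a permutation-equivariance check chosen purely for simplicity; it is not an example drawn from the text):

```python
# Toy check of permutation equivariance: a mean-centering layer commutes with
# permuting the inputs, i.e. layer(x[perm]) == layer(x)[perm]. Illustrative only.
import numpy as np

def layer(x):
    return x - x.mean()  # equivariant because the mean is permutation-invariant

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
perm = rng.permutation(5)
print(np.allclose(layer(x[perm]), layer(x)[perm]))  # True
```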
Recently, ensemble techniques have been applied to deep metric learning to yield state-of-the-art results. Deep metric learning aims to learn deep neural networks for feature embeddings whose distances satisfy a given constraint. In deep metric learning, an ensemble takes the average of the distances learned by multiple learners. As one important aspect of ensembles, the learners should be diverse in their feature embeddings. To this end, we propose an attention-based ensemble, which uses multiple attention masks so that each learner can attend to different parts of the object. We also propose a divergence loss, which encourages diversity among the learners. The proposed method is applied to the standard benchmarks of deep metric learning, and experimental results show that it outperforms the state-of-the-art methods by a significant margin on image retrieval tasks.
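As a rough sketch of a diversity-encouraging penalty between learners' embeddings (illustrative PyTorch code; the exact divergence loss in the paper may differ):

```python
# Illustrative diversity penalty between M learners' embeddings of the same batch:
# penalise high cosine similarity between different learners (a sketch, not the
# paper's exact divergence loss).
import torch
import torch.nn.functional as F

def diversity_penalty(embeddings):
    """embeddings: list of M tensors of shape (batch, dim), one per learner."""
    penalty = 0.0
    count = 0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            sim = F.cosine_similarity(embeddings[i], embeddings[j], dim=1)  # (batch,)
            penalty = penalty + sim.clamp(min=0).mean()
            count += 1
    return penalty / max(count, 1)

# Toy usage with three learners.
embs = [torch.randn(8, 16) for _ in range(3)]
print(diversity_penalty(embs))
```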