Free categorical constructions characterise quantum computing as the combination of two copies of a reversible classical model, glued by the complementarity equations of classical structures. This recipe effectively constructs a computationally universal quantum programming language from two copies of Pi, the internal language of rig groupoids. The construction uses Hughes' arrows, and thus answers positively the question of whether a computational effect exists that turns reversible classical computation into quantum computation: the quantum effect. Measurements can be added by layering a further effect on top. Our construction also enables some reasoning about quantum programs (with or without measurement) through a combination of classical reasoning and reasoning about complementarity.
The operation of open-cast lignite mines is a large intervention in nature, leaving the affected areas uninhabitable after mine closure unless renaturation takes place. Renaturation of these large areas requires a regional planning process that is tied to many conditions and restrictions, such as environmental protection laws. The related information is available only as unstructured text spread across a variety of documents, and the associated temporal aspects and geographical boundaries have so far had to be linked to this textual information manually. This process is highly time-consuming, error-prone, and tedious. Expert knowledge is therefore often relied on instead, but it does not necessarily cover all the relevant information. In this paper, we present a system to support experts in decision-making for urban planning, renaturation, and redevelopment projects. The system allows new projects to be planned while considering spatial and temporal restrictions extracted from text documents. It can thereby also be used to verify compliance with legal regulations such as nature conservation laws.
Complexity theory typically focuses on the difficulty of solving computational problems using classical inputs and outputs, even with a quantum computer. In the quantum world, it is natural to apply a different notion of complexity, namely the complexity of synthesizing quantum states. We investigate a state-synthesizing counterpart of the class NP, referred to as stateQMA, which is concerned with preparing certain quantum states through a polynomial-time quantum verifier with the aid of a single quantum message from an all-powerful but untrusted prover. This is a subclass of the class stateQIP recently introduced by Rosenthal and Yuen (ITCS 2022), which permits polynomially many interactions between the prover and the verifier. Our main results are error reduction for this class and its variants with an exponentially small gap or bounded space, as well as relationships between this class and other fundamental state-synthesizing classes, namely states generated by uniform polynomial-time quantum circuits (stateBQP) and space-uniform polynomial-space quantum circuits (statePSPACE). Additionally, we demonstrate that stateQCMA achieves perfect completeness. Our proof techniques are based on the quantum singular value transformation introduced by Gily\'en, Su, Low, and Wiebe (STOC 2019), and its adaptation to achieve exponential precision with bounded space.
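For readers unfamiliar with the last ingredient, the quantum singular value transformation acts on the singular values of a block-encoded operator; the following schematic statement is the standard one from Gily\'en et al. (paraphrased background, not the precise bounded-space variant developed in this work). Given a unitary block-encoding of $A = \sum_i \sigma_i |w_i\rangle\langle v_i|$ and a suitable odd polynomial $p$ of degree $d$,
\[
  A \;=\; \sum_i \sigma_i\, |w_i\rangle\langle v_i|
  \qquad\longmapsto\qquad
  p^{(SV)}(A) \;=\; \sum_i p(\sigma_i)\, |w_i\rangle\langle v_i|,
\]
implemented with $d$ alternating applications of the block-encoding, its inverse, and single-qubit phase rotations.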
The complexity class Quantum Statistical Zero-Knowledge ($\mathsf{QSZK}$) captures the computational difficulty of quantum state testing with respect to the trace distance for efficiently preparable mixed states (the Quantum State Distinguishability Problem, QSDP), as introduced by Watrous (FOCS 2002). However, this class faces the same parameter issue as its classical counterpart with respect to error reduction for QSDP (the polarization lemma), as demonstrated by Sahai and Vadhan (JACM 2003). In this paper, we introduce quantum analogues of triangular discrimination, a symmetric version of the $\chi^2$ divergence, and investigate the quantum state testing problems for quantum triangular discrimination and the quantum Jensen-Shannon divergence (a symmetric version of the quantum relative entropy). These new $\mathsf{QSZK}$-complete problems allow us to improve the parameter regime for testing quantum states in trace distance and to examine the limitations of existing approaches to polarization. Additionally, we prove that quantum state testing for trace distance with negligible errors is in $\mathsf{PP}$, while the same problem without error is in $\mathsf{BQP}_1$. This result suggests that length-preserving polarization for QSDP is implausible unless $\mathsf{QSZK}$ is in $\mathsf{PP}$.
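For reference, the classical templates behind these quantum analogues are standard (the paper's precise quantum definitions may differ in normalisation): the triangular discrimination and the Jensen-Shannon divergence of distributions $P$ and $Q$ are
\[
  \Delta(P,Q) \;=\; \sum_{x} \frac{\bigl(P(x)-Q(x)\bigr)^{2}}{P(x)+Q(x)},
  \qquad
  \mathrm{JS}(P,Q) \;=\; \tfrac{1}{2}\,\mathrm{KL}(P \,\|\, M) + \tfrac{1}{2}\,\mathrm{KL}(Q \,\|\, M),
  \quad M = \tfrac{P+Q}{2},
\]
and the commonly used quantum Jensen-Shannon divergence of states $\rho$ and $\sigma$ is
\[
  \mathrm{QJS}(\rho,\sigma) \;=\; S\!\Bigl(\tfrac{\rho+\sigma}{2}\Bigr) - \tfrac{1}{2}\bigl(S(\rho)+S(\sigma)\bigr),
\]
where $S$ denotes the von Neumann entropy.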
We extend the free cornering of a symmetric monoidal category, a double categorical model of concurrent interaction, to support branching communication protocols and iterated communication protocols. We validate our constructions by showing that they inherit significant categorical structure from the free cornering, including that they form monoidal double categories. We also establish some elementary properties of the novel structure they contain. Further, we give a model of the free cornering in terms of strong functors and strong natural transformations, inspired by the literature on computational effects.
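As background for the last sentence, recall the textbook notion of tensorial strength (the standard definition, not necessarily the exact formulation used in the model): a strength for an endofunctor $F$ on a monoidal category $(\mathcal{C}, \otimes, I)$ is a natural transformation
\[
  \sigma_{A,B} \;:\; A \otimes F(B) \;\longrightarrow\; F(A \otimes B)
\]
compatible with the unitor $\lambda$ and associator $\alpha$, in the sense that
\[
  F(\lambda_A) \circ \sigma_{I,A} \;=\; \lambda_{F(A)},
  \qquad
  F(\alpha_{A,B,C}) \circ \sigma_{A \otimes B,\,C}
  \;=\; \sigma_{A,\,B \otimes C} \circ (\mathrm{id}_A \otimes \sigma_{B,C}) \circ \alpha_{A,B,F(C)},
\]
and a strong natural transformation is one that commutes with these strengths.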
Time-series clustering is a powerful data mining technique for time-series data in the absence of prior knowledge about clusters. Large volumes of sizeable time-series data are being acquired and used in various research fields, so clustering methods with low computational cost are required. Because quantum-inspired computing technologies, such as simulated annealing machines, can solve combinatorial optimization problems faster and more accurately than conventional computers, they hold promise for clustering tasks that are difficult for existing methods. This study proposes a novel time-series clustering method that leverages an annealing machine. The proposed method evenly classifies mutually similar time-series data into the same clusters while remaining robust against outliers, and it is also applicable to time-series images. We compared the proposed method with a standard existing method on clustering an online distributed dataset; in the existing method, distances between data are calculated with the Euclidean metric and clustering is performed with k-means++. Both methods yielded comparable results. Furthermore, the proposed method was applied to a flow measurement image dataset containing noticeable noise, with a signal-to-noise ratio of approximately 1. Despite a small signal variation of approximately 2%, the proposed method classified the data without any overlap among the clusters, whereas the clustering results of both the standard existing method and the conditional image sampling (CIS) method, a technique specialized for flow measurement data, displayed overlapping clusters. The proposed method thus provides better results than the other two methods, demonstrating its potential as a superior clustering method.
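To make the formulation concrete, the sketch below casts clustering as a QUBO of the kind an annealing machine accepts: binary variables x[i, k] assign series i to cluster k, within-cluster distances are minimised, and a penalty enforces one cluster per series. This is a generic illustration with made-up data and a plain simulated-annealing stand-in for the annealing machine; the paper's actual objective (which also balances cluster sizes and handles outliers) may differ.

```python
# Generic QUBO-style clustering sketch (illustrative; not the paper's exact objective).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n short time series drawn around three levels, plus pairwise distances.
n, K = 12, 3
series = np.vstack([rng.normal(loc=c, scale=0.1, size=(n // K, 8)) for c in (0.0, 1.0, 2.0)])
dist = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=-1)

# Binary variables x[i, k] = 1 iff series i belongs to cluster k.
# Energy: sum_k sum_{i<j} d_ij x_ik x_jk  +  A * sum_i (sum_k x_ik - 1)^2
A = dist.max() * n  # heuristic penalty weight enforcing exactly one cluster per series

def energy(x):
    within = 0.5 * sum(x[:, k] @ dist @ x[:, k] for k in range(K))  # diag(dist) is 0
    one_hot = A * ((x.sum(axis=1) - 1) ** 2).sum()
    return within + one_hot

# Stand-in for an annealing machine: naive simulated annealing over single bit flips.
x = rng.integers(0, 2, size=(n, K))
e = energy(x)
steps = 20000
for step in range(steps):
    i, k = rng.integers(n), rng.integers(K)
    x[i, k] ^= 1
    e_new = energy(x)
    temp = 1.0 * (1 - step / steps) + 1e-3
    if e_new <= e or rng.random() < np.exp((e - e_new) / (A * temp)):
        e = e_new        # accept the flip
    else:
        x[i, k] ^= 1     # revert the flip

print("cluster assignments:", x.argmax(axis=1))
```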
Simulation models of critical systems often have parameters that need to be calibrated using observed data. For expensive simulation models, calibration is done using an emulator of the simulation model built on simulation output at different parameter settings. Intelligent and adaptive selection of the parameters used to build the emulator can drastically improve the efficiency of the calibration process. This article proposes a sequential framework with a novel criterion for parameter selection that targets learning the posterior density of the parameters. The emergent behavior of this criterion is that exploration happens by selecting parameters in regions of uncertain posterior density, while exploitation simultaneously happens by selecting parameters in regions of high posterior density. The advantages of the proposed method are illustrated using several simulation experiments and a nuclear physics reaction model.
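The following sketch illustrates the shape of such a sequential, emulator-based calibration loop, using a Gaussian process emulator from scikit-learn and a simple acquisition that scores candidates by estimated posterior density times emulator uncertainty. The acquisition, the toy simulator, and all names are illustrative stand-ins, not the article's actual criterion.

```python
# Illustrative sequential calibration loop (stand-in acquisition, toy simulator).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def simulator(theta):                       # expensive simulation model (toy stand-in)
    return np.sin(3 * theta) + theta

y_obs = simulator(0.7) + 0.05               # observed data near true theta = 0.7
noise_sd = 0.1

theta_train = rng.uniform(0, 2, size=(5, 1))                    # initial design
f_train = np.array([simulator(t[0]) for t in theta_train])
candidates = np.linspace(0, 2, 201).reshape(-1, 1)

for it in range(15):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-6)
    gp.fit(theta_train, f_train)                                # emulator of the simulator
    mu, sd = gp.predict(candidates, return_std=True)

    # Unnormalised posterior density estimate at each candidate:
    # Gaussian likelihood evaluated at the emulator mean, flat prior.
    post = np.exp(-0.5 * ((y_obs - mu) / noise_sd) ** 2)

    # Exploitation (high posterior density) weighted by exploration (emulator uncertainty).
    theta_next = candidates[np.argmax(post * sd)]

    theta_train = np.vstack([theta_train, theta_next[None, :]])
    f_train = np.append(f_train, simulator(theta_next[0]))

print("selected parameter values:", np.round(theta_train.ravel(), 3))
```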
Arrays of quantum dots (QDs) are a promising candidate system to realize scalable, coupled qubit systems and serve as a fundamental building block for quantum computers. In such semiconductor quantum systems, devices now have tens of individual electrostatic and dynamical voltages that must be carefully set to localize the system into the single-electron regime and to realize good qubit operational performance. The mapping of requisite QD locations and charges to gate voltages presents a challenging classical control problem. With an increasing number of QD qubits, the relevant parameter space grows sufficiently to make heuristic control unfeasible. In recent years, there has been considerable effort to automate device control that combines script-based algorithms with machine learning (ML) techniques. In this Colloquium, a comprehensive overview of the recent progress in the automation of QD device control is presented, with a particular emphasis on silicon- and GaAs-based QDs formed in two-dimensional electron gases. Combining physics-based modeling with modern numerical optimization and ML has proven effective in yielding efficient, scalable control. Further integration of theoretical, computational, and experimental efforts with computer science and ML holds vast potential in advancing semiconductor and other platforms for quantum computing.
Recent studies have discovered that Chain-of-Thought prompting (CoT) can dramatically improve the performance of Large Language Models (LLMs), particularly when dealing with complex tasks involving mathematics or reasoning. Despite the enormous empirical success, the underlying mechanisms behind CoT and how it unlocks the potential of LLMs remain elusive. In this paper, we take a first step towards theoretically answering these questions. Specifically, we examine the capacity of LLMs with CoT in solving fundamental mathematical and decision-making problems. We start by giving an impossibility result showing that any bounded-depth Transformer cannot directly output correct answers for basic arithmetic/equation tasks unless the model size grows super-polynomially with respect to the input length. In contrast, we then prove by construction that autoregressive Transformers of a constant size suffice to solve both tasks by generating CoT derivations using a commonly-used math language format. Moreover, we show LLMs with CoT are capable of solving a general class of decision-making problems known as Dynamic Programming, thus justifying its power in tackling complex real-world tasks. Finally, extensive experiments on four tasks show that, while Transformers always fail to predict the answers directly, they can consistently learn to generate correct solutions step-by-step given sufficient CoT demonstrations.
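As a concrete illustration of the kind of step-by-step derivation in a math-language format referred to above (the exact format used in the paper may differ), a CoT trace for a small arithmetic instance reduces one operation per emitted step,
\[
  (3+4)\times(2+5) \;=\; 7 \times (2+5) \;=\; 7 \times 7 \;=\; 49,
\]
whereas answering $49$ directly from $(3+4)\times(2+5)$ forces a bounded-depth model to internalise the whole computation in a single forward pass.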
Manifolds discovered by machine learning models provide a compact representation of the underlying data. Geodesics on these manifolds define locally length-minimising curves and provide a notion of distance, both of which are key for reduced-order modelling, statistical inference, and interpolation. In this work, we first analyse existing methods for computing length-minimising geodesics and find that they are not suitable for obtaining valid paths, and thus valid geodesic distances. We remedy these shortcomings by leveraging numerical tools from differential geometry, which provide the means to obtain Hamiltonian-conserving geodesics. Second, we propose a model-based parameterisation for distance fields and geodesic flows on continuous manifolds. Our approach exploits a manifold-aware extension to the Eikonal equation, eliminating the need for approximations or discretisation. Finally, we develop a curvature-based training mechanism, sampling and scaling points in regions of the manifold exhibiting larger values of the Ricci scalar. This sampling-and-scaling approach ensures that we capture regions of the manifold subject to higher degrees of geodesic deviation. Our proposed methods provide principled means to compute valid geodesics and geodesic distances on manifolds. This work opens opportunities for latent-space interpolation, optimal control, and distance computation on differentiable manifolds.
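For context, the standard differential-geometric objects invoked here take the following textbook form on a manifold with metric tensor $G(x)$ (not necessarily the exact parameterisation used in this work): the Eikonal equation for a geodesic distance field $\phi$ and the geodesic Hamiltonian are
\[
  \sqrt{\nabla\phi(x)^{\top} G^{-1}(x)\, \nabla\phi(x)} \;=\; 1,
  \qquad
  H(x,p) \;=\; \tfrac{1}{2}\, p^{\top} G^{-1}(x)\, p,
\]
with geodesics obtained as solutions of Hamilton's equations $\dot{x} = \partial H / \partial p$, $\dot{p} = -\,\partial H / \partial x$, along which $H$ is conserved; conservation of $H$ is the property used to distinguish valid geodesics from invalid paths.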
Recently, with chain-of-thought (CoT) prompting, large language models (LLMs), e.g., GPT-3, have shown strong reasoning ability in several natural language processing tasks such as arithmetic, commonsense, and logical reasoning. However, LLMs with CoT rely on multi-step prompting and multi-token prediction, which are highly sensitive to individual mistakes and vulnerable to error accumulation. These issues mean that LLMs need the ability to verify their answers. In fact, after inferring a conclusion in a reasoning task, people often check it by re-verifying the steps in order to avoid mistakes. In this paper, we propose and prove that LLMs also have similar self-verification abilities. We take the conclusion obtained by CoT as one of the conditions for solving the original problem. By taking turns masking the original conditions and predicting their results, we calculate an explainable answer-verification score based on whether the re-predicted conditions are correct. Experimental results demonstrate that the proposed method can improve reasoning performance on various arithmetic, commonsense, and logical reasoning datasets. Our code is publicly available at: //github.com/WENGSYX/Self-Verification.
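The sketch below shows the shape of the masking-and-re-prediction loop described above; `ask_llm` is a placeholder for an actual model call, and the prompt format and string-matching check are illustrative rather than the repository's exact implementation.

```python
# Illustrative CoT self-verification by condition masking (see the repository
# for the actual prompts, datasets, and scoring).
from typing import Callable, List

def verification_score(conditions: List[str],
                       question: str,
                       candidate_answer: str,
                       ask_llm: Callable[[str], str]) -> float:
    """Mask each condition in turn, treat the CoT answer as a known condition,
    ask the model to re-predict the masked value, and return the fraction of
    conditions recovered correctly."""
    correct = 0
    for i, masked_value in enumerate(conditions):
        visible = conditions.copy()
        visible[i] = "X"                                   # mask the i-th condition
        prompt = ("Conditions: " + "; ".join(visible) + ". "
                  f"The answer to '{question}' is {candidate_answer}. "
                  "What is X? Think step by step.")
        if masked_value.strip() in ask_llm(prompt):        # crude correctness check
            correct += 1
    return correct / len(conditions)

if __name__ == "__main__":
    stub = lambda prompt: "X: Tom has 5 apples"            # replace with a real LLM call
    score = verification_score(["Tom has 5 apples", "He buys 3 more"],
                               "How many apples does Tom have now?",
                               "8", stub)
    print("verification score:", score)   # higher scores favour this candidate answer
```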