A central challenge in the verification of quantum computers is benchmarking their performance as a whole and demonstrating their computational capabilities. In this work, we find a universal model of quantum computation, Bell sampling, that can be used for both of these tasks and thus provides an ideal stepping stone towards fault tolerance. In Bell sampling, we measure two copies of a state prepared by a quantum circuit in the transversal Bell basis. We show that the Bell samples are classically intractable to produce and at the same time constitute what we call a circuit shadow: from the Bell samples we can efficiently extract information about the quantum circuit preparing the state, as well as diagnose circuit errors. In addition to known properties that can be efficiently extracted from Bell samples, we give two new and efficient protocols: a test for the depth of the circuit and an algorithm to estimate a lower bound on the number of T gates in the circuit. With some additional measurements, our algorithm learns a full description of states prepared by circuits with low T-count.
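As a concrete illustration of the transversal Bell measurement described above, the following dense-statevector sketch samples Bell outcomes from two copies of an n-qubit state. The function name bell_sample and its conventions (big-endian qubit ordering, a two-bit Bell label per qubit pair) are illustrative choices, not the authors' implementation, and the brute-force approach only scales to small n.
\begin{verbatim}
import numpy as np

def bell_sample(psi, n_qubits, n_samples, rng=None):
    """Sample transversal Bell-basis outcomes from two copies of psi.

    psi: state vector of length 2**n_qubits (assumed normalized).
    Returns an (n_samples, n_qubits) array with entries in {0,1,2,3},
    one Bell label per pair (qubit i of copy 1, qubit i of copy 2).
    """
    rng = np.random.default_rng(rng)
    n = n_qubits
    psi = np.asarray(psi, dtype=complex)
    # Two copies, as a rank-2n tensor: axes 0..n-1 = copy 1, n..2n-1 = copy 2.
    state = np.kron(psi, psi).reshape((2,) * (2 * n))

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]]).reshape(2, 2, 2, 2)

    for i in range(n):
        # CNOT (control = copy-1 qubit i, target = copy-2 qubit i), then H on
        # the control: this maps the Bell basis of the pair to the computational basis.
        state = np.tensordot(CNOT, state, axes=([2, 3], [i, n + i]))
        state = np.moveaxis(state, [0, 1], [i, n + i])
        state = np.tensordot(H, state, axes=([1], [i]))
        state = np.moveaxis(state, 0, i)

    probs = np.abs(state.reshape(-1)) ** 2
    probs /= probs.sum()
    outcomes = rng.choice(len(probs), size=n_samples, p=probs)
    # Decode each 2n-bit outcome into n two-bit Bell labels (big-endian bit order).
    bits = (outcomes[:, None] >> np.arange(2 * n)[::-1]) & 1
    return 2 * bits[:, :n] + bits[:, n:]

# Toy usage: Bell-sample two copies of the 2-qubit state (|00> + |11>)/sqrt(2).
ghz2 = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(bell_sample(ghz2, n_qubits=2, n_samples=5, rng=0))
\end{verbatim}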
Quantum annealing (QA) holds promise for optimization in quantum computing, especially for combinatorial optimization problems. This analog framework attracts attention for its potential to address complex problems. Its gate-based counterpart, QAOA, with provable performance guarantees, has drawn considerable attention in the NISQ era. Several numerical benchmarks attempt to compare these two metaheuristics; however, limited classical computational power severely restricts the performance insights they can offer. In this work, we introduce a new parametrized version of QA that enables a precise 1-local analysis of the algorithm. We develop a tight Lieb-Robinson bound for regular graphs, achieving the best-known numerical value for analyzing QA locally. Studying MaxCut on cubic graphs as a benchmark optimization problem, we show that a linear-schedule QA with a 1-local analysis achieves an approximation ratio above 0.7020, outperforming any known 1-local algorithm.
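To make the benchmark setting concrete, here is a minimal numerical sketch of linear-schedule QA for MaxCut on K4, the smallest cubic graph. The Hamiltonian sign convention, the total anneal time T, and the step-wise discretization are illustrative assumptions; the sketch does not reproduce the paper's parametrized QA or its 1-local Lieb-Robinson analysis.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op_on(single, site, n):
    """Embed a single-qubit operator on `site` of an n-qubit register."""
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

# K4 is the smallest cubic graph; its maximum cut value is 4.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
n = 4

# Cost Hamiltonian: H_C = sum over edges of (1 - Z_i Z_j)/2 counts cut edges.
H_C = sum(0.5 * (np.eye(2**n) - op_on(Z, i, n) @ op_on(Z, j, n)) for i, j in edges)
# Transverse-field driver: H_B = -sum_i X_i; its ground state is the uniform superposition.
H_B = -sum(op_on(X, i, n) for i in range(n))

def linear_schedule_qa(T=20.0, steps=400):
    """Discretized linear-schedule QA: H(s) = (1 - s) H_B - s H_C with s = t / T."""
    psi = np.ones(2**n, dtype=complex) / np.sqrt(2**n)
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps
        psi = expm(-1j * dt * ((1 - s) * H_B - s * H_C)) @ psi
    return psi

psi = linear_schedule_qa()
expected_cut = np.real(psi.conj() @ H_C @ psi)
print("approximation ratio on K4:", expected_cut / 4.0)
\end{verbatim}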
In many applications, it is of interest to identify a parsimonious set of features, or panel, from multiple candidates that achieves a desired level of performance in predicting a response. This task is often complicated in practice by missing data arising from the sampling design or other random mechanisms. Most recent work on variable selection in missing data contexts relies in some part on a finite-dimensional statistical model, e.g., a generalized or penalized linear model. In cases where this model is misspecified, the selected variables may not all be truly scientifically relevant and can result in panels with suboptimal classification performance. To address this limitation, we propose a nonparametric variable selection algorithm combined with multiple imputation to develop flexible panels in the presence of missing-at-random data. We outline strategies based on the proposed algorithm that achieve control of commonly used error rates. Through simulations, we show that our proposal has good operating characteristics and results in panels with better classification and variable selection performance than several existing penalized regression approaches in cases where a generalized linear model is misspecified. Finally, we use the proposed method to develop biomarker panels for separating pancreatic cysts with differing malignancy potential in a setting where complicated missingness in the biomarkers arose due to limited specimen volumes.
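The sketch below shows one way such a pipeline could look in practice: impute the covariates several times, rank variables nonparametrically within each imputation, and keep variables selected in a majority of imputations. The specific components (IterativeImputer, a random-forest permutation-importance ranking, the voting threshold) are illustrative stand-ins and not the proposed algorithm, which additionally controls specified error rates.
\begin{verbatim}
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def select_panel(X_missing, y, n_imputations=5, threshold=0.0, seed=0):
    """Illustrative nonparametric selection under missing-at-random covariates:
    multiple imputation + random-forest permutation importance + majority vote."""
    votes = np.zeros(X_missing.shape[1])
    for m in range(n_imputations):
        imputer = IterativeImputer(random_state=seed + m, sample_posterior=True)
        X_imp = imputer.fit_transform(X_missing)
        rf = RandomForestClassifier(n_estimators=200, random_state=seed + m).fit(X_imp, y)
        imp = permutation_importance(rf, X_imp, y, n_repeats=10, random_state=seed + m)
        votes += imp.importances_mean > threshold
    return np.where(votes >= n_imputations / 2)[0]   # indices of the selected panel

# Toy demo: a nonlinear signal in the first two features, with missing entries.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(size=300) > 0).astype(int)
X[rng.random(X.shape) < 0.15] = np.nan
print(select_panel(X, y))
\end{verbatim}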
Quantized tensor trains (QTTs) have recently emerged as a framework for the numerical discretization of continuous functions, with the potential for widespread applications in numerical analysis. However, the theory of QTT approximation is not fully understood. In this work, we advance this theory from the point of view of multiscale polynomial interpolation. This perspective clarifies why QTT ranks decay with increasing depth, quantitatively controls QTT rank in terms of smoothness of the target function, and explains why certain functions with sharp features and poor quantitative smoothness can still be well approximated by QTTs. The perspective also motivates new practical and efficient algorithms for the construction of QTTs from function evaluations on multiresolution grids.
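For concreteness, the following sketch quantizes samples of a univariate function on a dyadic grid into a 2 x 2 x ... x 2 tensor and reads off its numerical QTT ranks with a standard TT-SVD sweep; the truncation tolerance and the example functions are illustrative choices, not the paper's interpolation-based constructions.
\begin{verbatim}
import numpy as np

def qtt_ranks(values, tol=1e-10):
    """TT-SVD on the 2 x 2 x ... x 2 quantization of a length-2**d array.

    Returns the numerical QTT ranks at each bond for the given relative tolerance.
    """
    d = int(np.log2(len(values)))
    tensor = np.asarray(values, dtype=float).reshape((2,) * d)
    ranks = []
    r = 1
    mat = tensor.reshape(2 * r, -1)
    for _ in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))
        ranks.append(r)
        mat = (np.diag(s[:r]) @ vt[:r]).reshape(2 * r, -1)
    return ranks

# A smooth function and a function with a sharp feature, on a grid of 2**12 points.
x = np.linspace(0, 1, 2**12, endpoint=False)
print(qtt_ranks(np.exp(np.sin(2 * np.pi * x))))   # small, slowly growing ranks
print(qtt_ranks(np.abs(x - 1 / 3)))               # kink at x = 1/3, yet still low rank
\end{verbatim}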
Quantum computing has shown tremendous promise in addressing complex computational problems, yet its practical realization is hindered by the limited availability of qubits for computation. Recent advancements in quantum hardware have introduced mid-circuit measurements and resets, enabling the reuse of measured qubits and significantly reducing the qubit requirements for executing quantum algorithms. In this work, we present a systematic study of dynamic quantum circuit compilation, a process that transforms static quantum circuits into their dynamic equivalents with a reduced qubit count through qubit-reuse. We establish the first general framework for optimizing dynamic circuit compilation via graph manipulation. In particular, we completely characterize optimal quantum circuit compilation using binary integer programming, provide efficient algorithms for determining whether a given quantum circuit can be reduced to a smaller circuit, and present heuristic algorithms for devising dynamic compilation schemes in general. Furthermore, we conduct a thorough analysis of quantum circuits with practical relevance, offering optimal compilations for well-known quantum algorithms in quantum computation, ansatz circuits utilized in quantum machine learning, and measurement-based quantum computation crucial for quantum networking. We also perform a comparative analysis against state-of-the-art approaches, demonstrating the superior performance of our methods in both structured and random quantum circuits. Our framework lays a rigorous foundation for comprehending dynamic quantum circuit compilation via qubit-reuse, bridging the gap between theoretical quantum algorithms and their physical implementation on quantum computers with limited resources.
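As a toy illustration of qubit reuse, the sketch below assumes a fixed gate schedule and represents each logical qubit by the time interval from its first operation to its final measurement; a physical qubit that has been measured and reset can then host a later logical qubit. This greedy interval heuristic is only a simplified stand-in for the general framework above, which reasons over the circuit's causal structure and uses binary integer programming for optimality.
\begin{verbatim}
import heapq

def greedy_qubit_reuse(lifetimes):
    """Greedy qubit-reuse under a fixed gate schedule.

    lifetimes: list of (first_op_time, last_op_time) per logical qubit.
    A logical qubit can be mapped onto a physical qubit that was measured
    (and reset) strictly before the new qubit's first operation.
    Returns the number of physical qubits used.
    """
    free = []          # min-heap of times at which physical qubits become free
    n_physical = 0
    for start, end in sorted(lifetimes):
        if free and free[0] < start:
            heapq.heappop(free)     # reuse a measured-and-reset physical qubit
        else:
            n_physical += 1         # allocate a fresh physical qubit
        heapq.heappush(free, end)
    return n_physical

# Example: 4 logical qubits whose lifetimes allow reuse, needing only 2 physical qubits.
print(greedy_qubit_reuse([(0, 3), (1, 6), (4, 8), (7, 9)]))  # -> 2
\end{verbatim}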
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation, and that follow the same distribution as the original observation, up to a (known) scaling of a parameter. This very general proposal is applicable to any convolution-closed distribution, a class that includes the Gaussian, Poisson, negative binomial, gamma, and binomial distributions, among others. Data thinning has a number of applications to model selection, evaluation, and inference. For instance, cross-validation via data thinning provides an attractive alternative to the usual approach of cross-validation via sample splitting, especially in settings in which the latter is not applicable. In simulations and in an application to single-cell RNA-sequencing data, we show that data thinning can be used to validate the results of unsupervised learning approaches, such as k-means clustering and principal components analysis, for which traditional sample splitting is unattractive or unavailable.
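The Poisson case gives a compact illustration of the idea: binomial thinning splits each count into two parts that sum to the original draw, are independent, and are each Poisson with a scaled mean. The choice of epsilon and the empirical checks below are illustrative, and this sketch covers only one member of the convolution-closed class discussed above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def thin_poisson(x, eps, rng):
    """Split Poisson(lam) draws into independent Poisson(eps*lam) and Poisson((1-eps)*lam)."""
    x1 = rng.binomial(x, eps)   # binomial thinning of each count
    return x1, x - x1

lam, eps = 10.0, 0.5
x = rng.poisson(lam, size=100_000)
x1, x2 = thin_poisson(x, eps, rng)
print(x1.mean(), x2.mean())        # both approx eps*lam and (1-eps)*lam
print(np.corrcoef(x1, x2)[0, 1])   # approx 0: the two parts are independent
print((x1 + x2 == x).all())        # the parts sum to the original observation
\end{verbatim}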
Nonparametric estimators built by minimizing the mean squared relative error are gaining popularity for their robustness to outliers compared with Nadaraya-Watson estimators. In this paper, we build a relative-error regression function estimator in the case of a functional explanatory variable and a left-truncated and right-censored scalar response. The pointwise and uniform convergence of the estimator is proved, and its performance is assessed in a numerical study; in particular, its robustness is highlighted using the influence function as a measure of robustness.
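To fix ideas, the sketch below implements the complete-data version of a kernel relative-error regression estimator for a functional covariate, using the fact that the minimizer of the mean squared relative error at x is E[1/Y | x] / E[1/Y^2 | x]. The Gaussian kernel, the L2 curve distance, and the toy data are illustrative assumptions, and the truncation and censoring weights handled in the paper are omitted.
\begin{verbatim}
import numpy as np

def relative_error_regression(X_curves, Y, x_curve, h):
    """Kernel relative-error regression estimate at a functional point x_curve.

    Minimizing E[((Y - m(x)) / Y)^2 | X = x] gives m(x) = E[1/Y | x] / E[1/Y^2 | x],
    estimated here by kernel-weighted averages (complete data only).
    """
    # L2 distance between the query curve and each observed curve.
    d = np.sqrt(np.trapz((X_curves - x_curve) ** 2, t, axis=1))
    w = np.exp(-0.5 * (d / h) ** 2)          # Gaussian kernel weights
    return np.sum(w / Y) / np.sum(w / Y**2)

# Toy functional data: curves X_i(t) = a_i * sin(t), response Y_i = a_i + noise.
rng = np.random.default_rng(1)
t = np.linspace(0, np.pi, 50)
a = rng.uniform(1, 3, size=200)
X_curves = a[:, None] * np.sin(t)
Y = a + rng.normal(0, 0.1, size=200)
print(relative_error_regression(X_curves, Y, 2.0 * np.sin(t), h=0.5))  # approx 2
\end{verbatim}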
We propose a simple multivariate normality test based on the Kac-Bernstein characterization, which can be conducted by applying existing statistical independence tests to sums and differences of data samples. We also perform an empirical investigation, which reveals that for high-dimensional data the proposed approach may be more efficient than alternative ones. The accompanying code repository is provided at \url{//shorturl.at/rtuy5}.
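A minimal sketch of the idea: split the sample into two halves X and Y; by the Kac-Bernstein characterization, X + Y and X - Y are independent exactly when the underlying distribution is Gaussian, so any independence measure applied to them yields a normality statistic. Using a hand-rolled distance correlation as that measure is an assumption made here for illustration; the proposed test can plug in any existing independence test.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import squareform, pdist

def distance_correlation(x, y):
    """Sample distance correlation between paired samples x and y (rows = observations)."""
    def centered(a):
        D = squareform(pdist(a))
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A, B = centered(x), centered(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def kac_bernstein_normality_stat(data, rng):
    """Split the sample into halves X, Y and measure dependence of X + Y and X - Y."""
    data = rng.permutation(data)
    half = len(data) // 2
    X, Y = data[:half], data[half:2 * half]
    return distance_correlation(X + Y, X - Y)

rng = np.random.default_rng(0)
gauss = rng.multivariate_normal(np.zeros(3), np.eye(3), size=400)
heavy = rng.standard_t(df=3, size=(400, 3))
print(kac_bernstein_normality_stat(gauss, rng))  # smaller for Gaussian data
print(kac_bernstein_normality_stat(heavy, rng))  # larger under the heavy-tailed alternative
\end{verbatim}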
There exists a wide range of single-number metrics for assessing the performance of classification algorithms, including AUC and the F1-score (Wikipedia lists 17 such metrics, with 27 different names). In this article, I propose a new metric to answer the following question: when an algorithm is tuned so that it can no longer distinguish labelled cats from real cats, how often does a randomly chosen image that has been labelled as containing a cat actually contain a cat? The steps to construct this metric are as follows. First, we set a threshold score such that when the algorithm is shown two randomly chosen images -- one with a score greater than the threshold (i.e. a picture labelled as containing a cat) and another drawn from the pictures that really do contain a cat -- the probability that the image with the higher score is the one chosen from the set of real cat images is 50\%. At this decision threshold, the set of positively labelled images is indistinguishable from the set of images which are positive. Then, as a second step, we measure performance by asking how often a randomly chosen picture from those labelled as containing a cat actually contains a cat. This metric can be thought of as {\it precision at the indistinguishability threshold}. While this new metric doesn't address the tradeoff between precision and recall inherent to all such metrics, I do show why this method avoids pitfalls that can occur when using, for example, AUC, and why it is better motivated than, for example, the F1-score.
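The sketch below computes this metric from scores and binary labels by scanning candidate thresholds for the one at which a randomly chosen true positive outscores a randomly chosen positively labelled example with probability closest to 50%, then reporting precision at that threshold. The threshold grid and the synthetic scores are illustrative choices, not a reference implementation.
\begin{verbatim}
import numpy as np

def precision_at_indistinguishability(scores, labels):
    """Precision at the threshold where positively labelled examples are
    indistinguishable (in the pairwise-comparison sense) from true positives."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos_scores = scores[labels]
    best_t, best_gap = None, np.inf
    for t in np.quantile(scores, np.linspace(0.05, 0.95, 50)):
        flagged_scores = scores[scores > t]          # images labelled "cat" at threshold t
        # P(random true cat outscores random flagged image), ties counted as half.
        p = (np.mean(pos_scores[:, None] > flagged_scores[None, :])
             + 0.5 * np.mean(pos_scores[:, None] == flagged_scores[None, :]))
        if abs(p - 0.5) < best_gap:
            best_gap, best_t = abs(p - 0.5), t
    flagged = scores > best_t
    return labels[flagged].mean()                    # precision at that threshold

# Synthetic scores: positives score higher on average, with heavy overlap.
rng = np.random.default_rng(0)
y = rng.random(1000) < 0.3
s = rng.normal(loc=np.where(y, 1.0, 0.0), scale=1.0)
print(precision_at_indistinguishability(s, y))
\end{verbatim}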
We observe a large variety of robots in terms of their bodies, sensors, and actuators. Given the commonalities in their skill sets, teaching each skill to each robot independently is inefficient and not scalable, considering the large variety in the robotic landscape. If we can learn the correspondences between the sensorimotor spaces of different robots, we can expect that a skill learned on one robot can be transferred to other robots more directly and easily. In this paper, we propose a method to learn correspondences among two or more robots that may have different morphologies. Specifically, besides robots with similar morphologies but different degrees of freedom, we show that a fixed-base manipulator robot with joint control and a differential-drive mobile robot can be addressed within the proposed framework. To set up the correspondence among the robots considered, an initial base task is demonstrated to the robots to achieve the same goal. Then, a common latent representation is learned along with the individual robot policies for achieving the goal. After the initial learning stage, the observation of a new task execution by one robot becomes sufficient to generate a latent space representation pertaining to the other robots to achieve the same task. We verified our system in a set of experiments where the correspondence between robots is learned (1) when the robots need to follow the same paths to achieve the same task, (2) when the robots need to follow different trajectories to achieve the same task, and (3) when the complexities of the required sensorimotor trajectories differ across robots. We also provide a proof-of-concept realization of correspondence learning between a real manipulator robot and a simulated mobile robot.
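A minimal sketch of the kind of shared-latent architecture described above: per-robot encoders and decoders trained on temporally paired states from the common base task, with a simple latent-alignment loss. The state dimensions, network sizes, and loss terms are hypothetical choices for illustration and do not reproduce the paper's model or training procedure.
\begin{verbatim}
import torch
import torch.nn as nn

latent_dim = 8

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])   # drop the trailing ReLU

# Hypothetical state dimensions: a 7-DoF arm (joint angles) and a mobile robot (x, y, heading).
encoders = {"arm": mlp([7, 64, latent_dim]), "mobile": mlp([3, 64, latent_dim])}
decoders = {"arm": mlp([latent_dim, 64, 7]), "mobile": mlp([latent_dim, 64, 3])}

params = [p for m in list(encoders.values()) + list(decoders.values()) for p in m.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def training_step(paired_batch):
    """paired_batch: {'arm': (B, 7), 'mobile': (B, 3)} states recorded at the same phase
    of the shared base task. Reconstructs each robot's state and aligns the latents."""
    z = {name: encoders[name](x) for name, x in paired_batch.items()}
    recon = sum(((decoders[n](z[n]) - paired_batch[n]) ** 2).mean() for n in z)
    align = ((z["arm"] - z["mobile"]) ** 2).mean()   # shared latent across robots
    loss = recon + align
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with random paired states; after training, a new arm demonstration transfers
# to the mobile robot via: z = encoders["arm"](arm_state); target = decoders["mobile"](z).
print(training_step({"arm": torch.randn(32, 7), "mobile": torch.randn(32, 3)}))
\end{verbatim}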
The author has recently introduced an abstract algebraic framework of analogical proportions within the general setting of universal algebra. This paper studies analogical proportions in the boolean domain consisting of the two elements 0 and 1 within this framework. It turns out that our notion of boolean proportions coincides with two prominent models from the literature in different settings. This means that we can capture two separate modellings of boolean proportions within a single framework, which is mathematically appealing and provides further evidence for its robustness and applicability.
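For concreteness, the snippet below enumerates the boolean 4-tuples satisfying the classical minimal model of analogical proportions ("a differs from b exactly as c differs from d"), which is frequently cited in this literature; treating it as one of the two models referred to above is an assumption made here for illustration.
\begin{verbatim}
from itertools import product

def minimal_proportion(a, b, c, d):
    """Classical minimal boolean model of 'a is to b as c is to d':
    a differs from b exactly as c differs from d, in the same direction."""
    return ((a and not b) == (c and not d)) and ((not a and b) == (not c and d))

valid = [t for t in product((0, 1), repeat=4) if minimal_proportion(*t)]
print(valid)   # the six patterns 0000, 0011, 0101, 1010, 1100, 1111
\end{verbatim}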